Rebooting the UK’s nuclear new build programme

80% of our energy comes from burning fossil fuels, and that needs to change, fast. By the middle of this century we need to be approaching net zero carbon emissions, if the risk of major disruption from climate change is to be lowered – and the middle of this century is not very far away, when measured in terms of the lifetime of our energy infrastructure.

My last post – If new nuclear doesn’t get built, it will be fossil fuels, not renewables, that fill the gap – tried to quantify the scale of the problem. All our impressive recent progress in implementing wind and solar energy will be wiped out by the loss of 60 TWh/year of low-carbon energy over the next decade, as the UK’s fleet of Advanced Gas Cooled Reactors is retired; even with the most optimistic projections for the growth of wind and solar, without new nuclear build the prospect of decarbonising our electricity supply remains distant. And, above all, we always need to remember that the biggest part of our energy consumption comes from directly burning oil and gas – for transport, industry and domestic heating – and this needs to be replaced by more low carbon electricity. We need more nuclear energy.

The UK’s current nuclear new build plans are in deep trouble

All but one of our existing nuclear power stations will be shut down by 2030 – only the Pressurised Water Reactor at Sizewell B, rated at 1.2 GW, will remain. So, without any new nuclear power stations opening, around 60 TWh a year of low carbon energy will be lost. What is the current status of our nuclear new build programme? Here’s where we are now:

  • Hinkley Point C – 3.2 GW capacity, consisting of 2 Areva EPR units, is currently under construction, with the first unit due to be completed by the end of 2025
  • Sizewell C – 3.2 GW capacity, consisting of 2 Areva EPR units, would be a duplicate of Hinkley C. The design is approved, but the project awaits site approval and an investment decision.
  • Bradwell B – 2-3 GW capacity. As part of the deal for Chinese support for Hinkley C, it was agreed that the Chinese state nuclear corporation CGN would install 2 (or possibly 3) Chinese designed pressurised water reactors, the CGN HPR1000. Generic Design Assessment of the reactor type is currently in progress, site approval and final investment decision needed
  • Wylfa – 2.6 GW, 2 x 1.3 GW Hitachi ABWR. Generic Design Assessment has been completed, but the project has been suspended by its key investor, Hitachi.
  • Oldbury – 2.6 GW, 2 x 1.3 GW Hitachi ABWR. A duplicate of Wylfa; project suspended.
  • Moorside, Cumbria – 3.4 GW, 3 x 1.1 GW Westinghouse AP1000. GDA completed, but the project has been suspended by its key investor, Toshiba.
So this leaves us with three scenarios for the post-2030 period.

    We can, I think, assume that Hinkley C is definitely happening – if that is the limit of our expansion of nuclear power, we’ll end up with about 24 TWh a year of low carbon electricity from nuclear, less than half the current amount.

    With Sizewell C and Bradwell B, which are currently proceeding, though not yet finalised, we’ll have 78 TWh a year – this essentially replaces the lost capacity from our AGR fleet, with a small additional margin.

Only with the currently suspended projects – at Wylfa, Oldbury and Moorside – would we be substantially increasing nuclear’s contribution to low carbon electricity, roughly doubling the current contribution to 143 TWh per year.

    Transforming the economics of nuclear power

    Why is nuclear power so expensive – and how can it be made cheaper? What’s important to understand about nuclear power is that its costs are dominated by the upfront capital cost of building a nuclear power plant, together with the provision that has to be made for safely decommissioning the plant at the end of its life. The actual cost of running it – including the cost of the nuclear fuel – is, by comparison, quite small.

    Let’s illustrate this with some rough indicative figures. The capital cost of Hinkley C is about £20 billion, and the cost of decommissioning it at the end of its 60 year expected lifespan is £8 billion. For the investors to receive a guaranteed return of 9%, the plant has to generate a cashflow of £1.8 billion a year to cover the cost of capital. If the plant is able to operate at 90% capacity, this amounts to about £72 a MWh of electricity produced. If one adds on the recurrent costs – for operation and maintenance, and the fuel cycle – of about £20 a MWh, this gets one to the so-called “strike price” – which in the terms of the deal with the UK government the project has been guaranteed – of £92 a MWh.

    Two things come out from this calculation – firstly, this cost of electricity is substantially more expensive than the current wholesale price (about £62 per MWh, averaged over the last year). Secondly, nearly 80% of the price covers the cost of borrowing the capital – and 9% seems like quite a high rate at a time of historically low long-term interest rates.

    EDF itself can borrow money on the bond market for 5%. At 5%, the cost of financing the capital comes to about £1.1 billion a year, which would be achieved at an electricity price of a bit more than £60 a MWh. Why the difference? In effect, the project’s investors – the French state owned company EDF, with a 2/3 stake, the rest being held by the Chinese state owned company CGN – receive about £700 million a year to compensate them for the risks of the project.

    Of course, the UK state itself could have borrowed the money to finance the project. Currently, the UK government can borrow at 1.75% fixed for 30 years. At 2%, the financing costs would come down from £1.8 billion a year to £0.7 billion a year, requiring a break-even electricity price of less than £50 a MWh. Of course, this requires the UK government to bear all the risk for the project, and this comes at a price. It’s difficult to imagine that that price is more than £1 billion a year, though.
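To make the arithmetic of the last few paragraphs easy to check, here is a minimal sketch in Python, using the round figures quoted above (£20 billion capital cost, £8 billion decommissioning provision, 3.2 GW at a 90% capacity factor, about £20/MWh running costs). Treating the capital as a 60-year annuity plus a sinking fund for decommissioning is my own simplification for illustration, not the structure of the actual Hinkley Point C contract.

```python
# Rough sketch of the Hinkley C financing arithmetic: the capital is treated as
# a 60-year annuity, with a sinking fund for the decommissioning provision.
# This is an illustrative simplification, not the real contractual structure.

CAPEX = 20e9          # £, capital cost
DECOM = 8e9           # £, decommissioning provision at end of life
LIFE = 60             # years of operation
CAPACITY_GW = 3.2
CAPACITY_FACTOR = 0.9
OPEX_PER_MWH = 20.0   # £/MWh for operation, maintenance and fuel

annual_mwh = CAPACITY_GW * 1e3 * CAPACITY_FACTOR * 8760  # MWh generated per year

def breakeven_price(rate):
    """Break-even electricity price (£/MWh) at a given cost of capital."""
    annuity = CAPEX * rate / (1 - (1 + rate) ** -LIFE)      # repay capital plus interest
    sinking_fund = DECOM * rate / ((1 + rate) ** LIFE - 1)  # save up for decommissioning
    return (annuity + sinking_fund) / annual_mwh + OPEX_PER_MWH

for rate in (0.09, 0.05, 0.02):  # investor return, EDF bond rate, ~government borrowing rate
    print(f"{rate:.0%}: about £{breakeven_price(rate):.0f}/MWh")
# roughly £92/MWh at 9%, £63/MWh at 5% and £46/MWh at 2%, matching the figures in the text
```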

    If part of the problem of the high cost of nuclear energy comes from the high cost of capital baked into the sub-optimal way the Hinkley Point deal has been structured, it remains the case that the capital cost of the plant in the first place seems very high. The £20 billion cost of Hinkley Point is indeed high, both in comparison to the cost of previous generations of nuclear power stations, and in comparison with comparable nuclear power stations built recently elsewhere in the world.

    Sizewell B cost £2 billion at 1987 prices for 1.2 GW of capacity – scaling that up to 3.2 GW and putting it in current money suggests that Hinkley C should cost about £12 billion.
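For what it’s worth, here is the back-of-envelope scaling behind that comparison; the inflation factor from 1987 prices to today’s money is an assumption (roughly what a general price deflator would imply), included only to show how the ~£12 billion figure arises.

```python
# Back-of-envelope scaling of Sizewell B's cost to a Hinkley C-sized plant.
# The 2.2x inflation factor is an assumption, used only for illustration.

sizewell_b_cost_1987 = 2.0e9   # £, 1987 prices, for 1.2 GW of capacity
capacity_ratio = 3.2 / 1.2     # scale up to Hinkley C's 3.2 GW
inflation_factor = 2.2         # assumed 1987 -> today price level

scaled_cost = sizewell_b_cost_1987 * capacity_ratio * inflation_factor
print(f"~£{scaled_cost / 1e9:.0f} billion")   # ~£12 billion, versus ~£20 billion for Hinkley C
```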

    Some of the additional cost can undoubtedly be ascribed to the new safety features added to the EPR. The EPR is an evolution of the original pressurised water reactor design; all pressurised water reactors – indeed all light water reactors (which use ordinary, non-deuterated, water as both moderator and coolant) – are susceptible to “loss of coolant accidents”. In one of these, if the circulating water is lost, even though the nuclear reaction can be reliably shut down, the residual heat from the radioactive material in the core can be great enough to melt the reactor core, and to lead to steam reacting with metals to create explosive hydrogen.

    The experience of loss of coolant accidents at Three Mile Island and (more seriously) Fukushima has prompted new so-called generation III or gen III+ reactors to incorporate a variety of new features to mitigate potential loss-of-coolant accidents, including methods for passive backup cooling systems and more layers of containment. The experience of 9/11 has also prompted designs to consider the effect of a deliberate aircraft crash into the building. All these extra measures cost money.

    But even nuclear power plants of the same design cost significantly more to build in Europe and the USA than they do in China or Korea – more than twice as much, in fact. Part of this is undoubtedly due to higher labour costs (including both construction workers and engineers and other professionals). But there are factors leading to these other countries’ lower costs that can be emulated in the UK – they arise from the fact that both China and Korea have systematically got better at building reactors by building a sequence of them, and capturing the lessons learnt from successive builds.

    In the UK, by contrast, no nuclear power station has been built since 1995, so in terms of experience we’re starting from scratch. And our programme of nuclear new build could hardly have been designed in a way that made it more difficult to capture these benefits of learning, with four quite different designs being built by four different sets of contractors.

    We can learn the lessons of previous experiences of nuclear builds. The previous EPR installations in Olkiluoto, Finland, and Flamanville, France – both of which have ended up hugely over-budget and late – indicate what mistakes we should avoid, while the Korean programme – which is to-date the only significant nuclear build-out to significantly reduce capital costs over the course of the programme – offers some more positive lessons. To summarise –

  • The design needs to be finalised before building work begins – late changes impose long delays and extra costs;
  • Multiple units should be installed on the same site;
  • A significant effort to develop proven and reliable supply chains and a skilled workforce pays big dividends;
  • Poor quality control and inadequate supervision of sub-contractors leads to long delays and huge extra costs;
  • A successful national nuclear programme relies on the sequential installation of identical designs on different sites, retaining the learning and skills of the construction teams;
  • Modular construction and manufacturing techniques should be used as much as possible.
The last point supports the more radical idea of making the entire reactor in a factory rather than on-site. This has the advantage of ensuring that all the benefits of learning-by-doing are fully captured, allows much closer control over quality, and makes easier the kind of process innovation that can significantly reduce manufacturing costs.

The downside is that this kind of modular manufacturing is only possible for reactors on a considerably smaller scale than the >1 GW capacity units that conventional programmes install – these “Small Modular Reactors” – SMRs – will be in the range of tens to hundreds of MW. The driving force for increasing the scale of reactor units has been to capture economies of scale in running costs and fuel efficiencies. SMRs sacrifice some of these economies of scale, in the hope that economies of learning will drive down capital costs enough to compensate. Given that, for current large scale designs, the total cost of electricity is dominated by the cost of capital, this is an argument that is at least plausible.
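The “economies of learning” argument can be made concrete with a standard experience-curve calculation. The sketch below is purely illustrative – the 10% learning rate is an assumption, not a figure claimed by any SMR vendor.

```python
# Illustrative learning-curve arithmetic behind the SMR argument: if each
# doubling of the number of factory-built units cuts the unit capital cost by
# a fixed fraction (the "learning rate" -- 10% here is purely an assumption),
# serial production can claw back some of the economies of scale given up by
# building smaller reactors.

from math import log2

def unit_cost(first_unit_cost, n_units, learning_rate=0.10):
    """Capital cost of the n-th unit on a standard experience curve."""
    b = -log2(1 - learning_rate)          # experience-curve exponent
    return first_unit_cost * n_units ** -b

for n in (1, 2, 4, 8, 16, 32):
    print(f"unit {n:2d}: {unit_cost(1.0, n):.2f} x first-of-a-kind cost")
# with a 10% learning rate, the 32nd unit costs about 59% of the first
```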

    What the UK should do to reboot its nuclear new build programme

    If the UK is to stand any chance at all of reducing its net carbon emissions close to zero by the middle of the century, it needs both to accelerate offshore wind and solar, and get its nuclear new build programme back on track.

It was always a very bad idea to try to implement a nuclear new build programme with more than one reactor type. Now that the Hinkley Point C project is underway, our choice of large reactor design has in effect been made – it is the Areva EPR.

The EPR is undoubtedly a complex and expensive design, but I don’t think there is any evidence that it is fundamentally different in this from other Gen III+ designs. Recent experience of building the rival Westinghouse AP1000 design in the USA doesn’t seem to be any more encouraging. On the other hand, the suggestion of some critics that the EPR is fundamentally “unbuildable” has clearly been falsified by the successful completion of an EPR unit in Taishan, China – this was connected to the grid in December last year. The successful building of both EPRs and AP1000s in China suggests, rather, that the difficulties seen in Europe and the USA arise from systematic problems of the kind discussed in the last section rather than from a fundamental flaw in any particular reactor design.

The UK should therefore do everything to accelerate the Sizewell C project, where two more EPRs are scheduled to be built. This needs to happen on a timescale that ensures continuity between the construction of Hinkley C and Sizewell C, to retain the skills and supply chains that are developed and to make sure all the lessons learnt in the Hinkley build are acted on. And it should be financed in a way that’s less insanely expensive than the arrangements for Hinkley Point C, accepting the inevitability that the UK government will need to take a considerable stake in the project.

    In an ideal world, every other large nuclear reactor built in the UK in the current programme should also be an EPR. But a previous government apparently made a commitment to the Chinese state-owned enterprise CGN that, in return for taking a financial stake in the Hinkley project, it should be allowed to build a nuclear power station at Bradwell, in Essex, using the Chinese CGN HPR1000 design. I think it was a bad idea on principle to allow a foreign government to have such close control of critical national infrastructure, but if this decision has to stand, one can find silver linings. We should respect and learn from the real achievements of the Chinese in developing their own civil nuclear programme. If the primary motivation of CGN in wanting to build an HPR1000 is to improve its export potential by demonstrating its compliance with the UK’s independent and rigorous nuclear regulations, then that goal should be supported.

    We should speed up replacement plans to develop the other three sites – Wylfa, Oldbury and Moorside. The Wylfa project was the furthest advanced, and a replacement scheme based on installing two further EPR units there should be put together to begin shortly after the Sizewell C project, designed explicitly to drive further savings in capital costs by maximising learning by doing.

The EPR is not a perfect technology, but we can’t afford to wait for a better one – the urgency of climate change means that we have to start building right now. But that doesn’t mean we should accept that no further technological progress is possible. We have to be clear about the timescales, though. We need a technology that is capable of deployment right now – and for all the reasons given above, that should be the EPR – but we need to be pursuing future technologies both at the demonstration stage, and at the earlier stages of research and development. Technologies ready for demonstration now might be deployed in the 2030s, while anything still at the R&D stage now is realistically unlikely to be deployed until 2040 or so.

    The key candidate for a demonstration technology is a light water small modular reactor. The UK government has been toying with the idea of small modular reactors since 2015, and now a consortium led by Rolls-Royce has developed a design for a modular 400 MW pressurised water reactor, with an ambition to enter the generic design approval process in 2019 and to complete a first of a kind installation by 2030.

As I discussed above, I think the arguments for small modular reactors are at the very least plausible, but we won’t know for sure how the economics work out until we try to build one. Here the government needs to play the important role of being a lead customer and commission an experimental installation (perhaps at Moorside?).

The first light water power reactors came into operation in 1960 and current designs are direct descendants of these early precursors; light water reactors have a number of sub-optimal features that are inherent to the basic design, so this is an instructive example of technological lock-in keeping us on a less-than-ideal technological trajectory.

    There are plenty of ideas for fission reactors that operate on different principles – high temperature gas cooled reactors, liquid salt cooled reactors, molten salt fuelled reactors, sodium fast reactors, to give just a few examples. These concepts have many potential advantages over the dominant light water reactor paradigm. Some should be intrinsically safer than light water reactors, relying less on active safety systems and more on an intrinsically fail-safe design. Many promise better nuclear fuel economy, including the possibility of breeding fissile fuel from non-fissile elements such as thorium. Most would operate at higher temperatures, allowing higher conversion efficiencies and the possibility of using the heat directly to drive industrial processes such as the production of hydrogen.

But these concepts are as yet undeveloped, and it will take many years and much money to convert them into working demonstrators. What should the UK’s role in this R&D effort be? I think we need to accept the fact that our nuclear fission R&D effort has been so far run down that it is not realistic to imagine that the UK can operate independently – instead we should contribute to international collaborations. How best to do that is a big subject beyond the scope of this post.

    There are no easy options left

    Climate change is an emergency, yet I don’t think enough people understand how difficult the necessary response – deep decarbonisation of our energy systems – will be. The UK has achieved some success in lowering the carbon intensity of its economy. Part of this has come from, in effect, offshoring our heavy industry. More real gains have come from switching electricity generation from coal to gas, while renewables – particularly offshore wind and solar – have seen impressive growth.

But this has been the easy part. The transition from coal to gas is almost complete, and the ambitious planned build-out of offshore wind to 2030 will have occupied a significant fraction of the available shallow water sites. Completing the decarbonisation of our electricity sector without nuclear new build will be very difficult – but even if that is achieved, it doesn’t even bring us halfway to the goal of decarbonising our energy economy. 60% of our current energy consumption comes from directly burning oil (for cars and trucks) and gas (for industry and heating our homes); much of this will need to be replaced by low-carbon energy, meaning that our electricity sector will have to expand substantially.

Other alternative low carbon energy sources are unpalatable or unproven. Carbon capture and storage has never yet been deployed at scale, and represents a pure overhead on existing power generation technologies, needing both a major new infrastructure to be built and increased running costs. Scenarios that keep global warming below 2° C need so-called “negative emissions technologies” – which don’t yet exist, and make no economic sense without a degree of worldwide cooperation which seems difficult to imagine at the moment.

I understand why people are opposed to nuclear power – civil nuclear power has a troubled history, which reflects its roots in the military technologies of nuclear weapons, as I’ve discussed before. But time is running out, and the necessary transition to a zero carbon energy economy leaves us with no easy options. We must accelerate the deployment of renewable energies like wind and solar, but at the same time move beyond nuclear’s troubled history and reboot our nuclear new build programme.

    Notes on sources
    For an excellent overall summary of the mess that is the UK’s current new build programme, see this piece by energy economist Dieter Helm. For the specific shortcomings of the Hinkley Point C deal, see this National Audit Office report (and at the risk of saying, I told you so, this is what I wrote 5 years ago: The UK’s nuclear new build: too expensive, too late). For the lessons to be learnt from previous nuclear programmes, see Nuclear Lessons Learnt, from the Royal Academy of Engineering. This MIT report – The Future of Nuclear in a carbon constrained world – has much useful to say about the economics of nuclear power now and about the prospects for new reactor types. For the need for negative emissions technologies in scenarios that keep global warming below 2° C, see Gasser et al.

    If new nuclear doesn’t get built, it will be fossil fuels, not renewables, that fill the gap

    The UK’s programme to build a new generation of nuclear power stations is in deep trouble. Last month, Hitachi announced that it is pulling out of a project to build two new nuclear power stations in the UK; Toshiba had already announced last year that it was pulling out of the Moorside project.

    The reaction to this news has been largely one of indifference. In one sense this is understandable – my own view is that it represents the inevitable unravelling of an approach to nuclear new build that was monumentally misconceived in the first place, maximising costs to the energy consumer while minimising benefits to UK industry. But many commentators have taken the news to indicate that nuclear power is no longer needed at all, and that we can achieve our goal of decarbonising our energy economy entirely on the basis of renewables like wind and solar. I think this argument is wrong. We should accelerate the deployment of wind and solar, but this is not enough for the scale of the task we face. The brutal fact is that if we don’t deploy new nuclear, it won’t be renewables that fill the gap, but more fossil fuels.

    Let’s recall how much energy the UK actually uses, and where it comes from. In 2017, we used just over 2200 TWh. The majority of the energy we use – 1325 TWh – is in the form of directly burnt oil and gas. 730 TWh of energy inputs went in to produce the 350 TWh of electricity we used. Of that 350 TWh, 70 TWh came from nuclear, 61.5 TWh came from wind and solar, and another 6 TWh from hydroelectricity. Right now, our biggest source of low carbon electricity is nuclear energy.

    But most of that nuclear power currently comes from the ageing fleet of Advanced Gas Cooled reactors. By 2030, all of our AGRs will be retired, leaving only Sizewell B’s 1.2 GW of capacity. In 2017, the AGRs generated a bit more than 60 TWh – by coincidence, almost exactly the same amount of electricity as the total from wind and solar.

The growth in wind and solar power in the UK in recent years has been tremendous – but there are two things we need to stress. Firstly, retiring the existing AGR fleet – as has to happen over the next decade – would, without nuclear new build, entirely undo this progress. Secondly, in the context of the overall scale of the challenge of decarbonisation, the contribution of both nuclear and renewables to our total energy consumption remains small – currently less than 16%.

    One very common response to this issue is to point out that the cost of renewables has now fallen so far that at the margin, it’s cheaper to bring new renewable capacity online than to build new nuclear. But this argument from marginal cost is only valid if you are only interested in marginal changes. If we’re happy with continuing to get around 80% of our energy from fossil fuels, then the marginal cost argument makes sense. But if we’re serious about making real progress towards decarbonisation – and I think the urgency of the climate change issue and the scale of the downside risks means we should be – then what’s important isn’t the marginal cost of low-carbon energy, but the whole system cost of replacing, not a few percent, but close to 100% of our current fossil fuel use.

    So how much more wind and solar energy capacity can we realistically expect to be able to build? The obvious point here is that the total amount is limited – the UK is a small, densely populated, and not very sunny island – even in the absence of economic constraints, there are limits to how much of it can be covered in solar cells. And although its position on the fringes of the Atlantic makes it a very favourable location for offshore wind, there are not unlimited areas of the relatively shallow water that current offshore wind technology needs.

The current portfolio of offshore wind projects amounts to 33.2 GW of capacity, with one further round of 7 GW planned. According to the most recent information I can find, “Industry says it could deliver 30GW installed by 2030”. If we assume the industry does a bit better than this, and delivers the entire current portfolio, that would produce about 120 TWh a year.

    Solar energy produced 11.5 TWh in 2017. The very fast rate of growth that led us to that point has levelled off, due to changes in the subsidy regime. Nonetheless, there’s clearly room for further expansion, both of rooftop solar and grid scale installations. The most aggressive of the National Grid scenarios envisages a tripling of solar by 2030, to 32 TWh.

Thus by 2030, in the best case for renewables, wind and solar would produce about 150 TWh of electricity, compared to our current total demand for electricity of 350 TWh. We can reasonably expect demand for electricity, all else being equal, to slowly decrease as a result of efficiency measures. Estimating this from the long term rate of reduction of energy demand, about 2% a year, we might hope to drive demand down to around 270 TWh by 2030. Where does that leave us? With all the new renewables, together with nuclear generation at its current level, we’d be generating 220 TWh out of 270 TWh. Adding on some biomass generation (currently about 35 TWh, much of which comes from burning environmentally dubious imported wood-chips), 6 TWh of hydroelectricity and some imported French nuclear power, and the job of decarbonising our electricity supply is nearly done. What would we do without the 70 TWh of nuclear power? We’d have to keep our gas-fired power stations running.
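For readers who want to check the balance, here is the arithmetic of that 2030 scenario in Python, using the figures from the preceding paragraphs; the 2% a year demand reduction is the assumption stated above.

```python
# The 2030 electricity balance sketched above, with the blog's own round numbers.

demand_2017 = 350.0                       # TWh, current electricity demand
demand_2030 = demand_2017 * 0.98 ** 13    # ~2% a year efficiency gains to 2030

wind_offshore = 120.0   # TWh, the full current offshore portfolio delivered
solar = 32.0            # TWh, National Grid's most aggressive solar scenario
nuclear = 70.0          # TWh, nuclear held at its current contribution
low_carbon = wind_offshore + solar + nuclear

print(f"2030 demand ~{demand_2030:.0f} TWh, low-carbon supply ~{low_carbon:.0f} TWh")
# ~270 TWh of demand against ~220 TWh of low-carbon supply: the remaining gap is
# biomass, hydro and imports -- or, without the nuclear, gas-fired generation.
```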

    But, but, but… most of the energy we use isn’t in the form of electricity – it’s directly burnt gas and oil. So if we are serious about decarbonising the whole energy system, we need to be reducing that massive 1325 TWh of direct fossil fuel consumption. The most obvious way of doing that is by shifting from directly burning oil to using low-carbon electricity. This means that to get anywhere close to deep decarbonisation we are going to need to increase our consumption of electricity substantially – and then increase our capacity for low-carbon generation to match.

    This is one driving force for the policy imperative to move away from internal combustion engines to electric vehicles. Despite the rapid growth of electric vehicles, we still use less than 0.2 TWh charging our electric cars. This compares with a total of 4.8 TWh of electricity used for transport, mostly for trains (at this point we should stop and note that we really should electrify all our mainline and suburban train-lines). But these energy totals are dwarfed by the 830 TWh of oil we burn in cars and trucks.

    How rapidly can we expect to electrify vehicle transport? This is limited by economics, by the world capacity to produce batteries, by the relatively long lifetime of our vehicle stock, and by the difficulty of electrifying heavy goods vehicles. The most aggressive scenario looked at by the National Grid suggests electric vehicles consuming 20 TWh by 2030, a more than one-hundred-fold increase on today’s figures, representing 44% a year growth compounded. Roughly speaking, 1 TWh of electricity used in an electric vehicle displaces 3.25 TWh of oil – electric motors are much more efficient at energy conversion than internal combustion engines. So even at this aggressive growth rate, electric vehicles will only have displaced 8% of the oil burnt for transport. Full electrification of transport would require more than 250 TWh of new electricity generation, unless we are able to generate substantial new efficiencies.
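The same arithmetic, written out so the assumptions are explicit – the 3.25 displacement factor, the 830 TWh of transport oil and the 20 TWh National Grid scenario are all taken from the text above.

```python
# Electric-vehicle arithmetic: how much oil a given amount of EV charging
# displaces, using the ~3.25x conversion-efficiency factor quoted above.

oil_for_transport = 830.0     # TWh/year of oil burnt in cars and trucks
displacement_ratio = 3.25     # TWh of oil displaced per TWh of EV electricity

ev_electricity_2030 = 20.0    # TWh, National Grid's most aggressive 2030 scenario
oil_displaced = ev_electricity_2030 * displacement_ratio
print(f"2030: ~{oil_displaced:.0f} TWh of oil displaced "
      f"({oil_displaced / oil_for_transport:.0%} of transport oil)")   # ~65 TWh, ~8%

full_electrification = oil_for_transport / displacement_ratio
print(f"Full electrification: ~{full_electrification:.0f} TWh of new electricity")  # ~255 TWh
```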

    Last, but not least, what of the 495 TWh of gas we burn directly, to heat our homes and hot water, and to drive industrial processes? A serious programme of home energy efficiency could make some inroads into this, we could make more use of ground source heat pumps, and we could displace some with hydrogen, generated from renewable electricity (which would help overcome the intermittency problem) or (in the future, perhaps) process heat from high temperature nuclear power stations. In any case, if we do decarbonise the domestic and industrial sectors currently dominated by natural gas, several hundred more TWh of electricity will be required.

So to achieve the deep decarbonisation we need by 2050, electricity generation will need to be more than doubled. Where could that come from? A further doubling of solar energy from our already optimistic 2030 estimate might take that to 60 TWh. Beyond that, for renewables to make deep inroads we need new technologies. Marine technologies – wave and tide – have potential, but in terms of possible capacity deep offshore wind perhaps offers the biggest prize, with the Scottish Government estimating possible capacities up to 100 GW. But this is a new and untried technology, which will certainly be very much more expensive than current offshore wind. The problem of intermittency also substantially increases the effective cost of renewables at high penetrations, because of the need for large scale energy storage and redundancy. I find it difficult to see how the UK could achieve deep decarbonisation without a further expansion of nuclear power.

    Coming back to the near future – keeping decarbonisation on track up to 2030 – we need to bring at least enough new nuclear on stream to replace the lost generation capacity of the AGR fleet, and preferably more, while at the same time accelerating the deployment of renewables. We need to be honest with ourselves about how little of our energy currently comes from low-carbon sources; even with the progress that’s been made deploying renewable electricity, most of our energy still arises from directly burning oil and gas. If we’re serious about decarbonisation, we need the rapid deployment of all low carbon energy sources.

    And yet, our current policy for nuclear power is demonstrably failing. How should we do things differently, more quickly and at lower cost, to reboot the UK’s nuclear new build programme? That will be the subject of another post.

    Notes on sources.
    Current UK energy statistics are from the 2018 edition of the Digest of UK Energy Statistics.
    Status of current and planned offshore wind capacity, from Crown Estates consultation.
    National Grid future energy scenarios.
    Oil displaced by electric vehicles – current estimates based on worldwide data, as reported by Bloomberg New Energy Finance.

    How inevitable was the decline of the UK’s Engineering industry?

My last post identified manufacturing as being one of three sectors in the UK which combined material scale relative to the overall size of the economy with a long term record of improving total factor productivity. Yet, as is widely known, manufacturing’s share of the economy has been in long term decline, from 27% in 1970 to 10.6% in 2014. Manufacturing’s share of employment has fallen even further, as a consequence of its above-average rate of improvement in labour productivity. This fall in the importance of manufacturing has been a common feature of all developed economies, yet the UK has seen the steepest decline.

    This prompts two questions – was this decline inevitable, and does it matter? A recent book by industry veteran Tom Brown – Tragedy and Challenge: an inside view of UK Engineering’s Decline and the Challenge of the Brexit Economy, makes a strong argument that this decline wasn’t inevitable, and that it does matter. It’s a challenge to conventional wisdom, but one that’s rooted in deep experience. Brown is hardly the first to identify as the culprits the banks, fund managers, and private equity houses collectively described as “the City” – but his detailed, textured description of the ways in which these institutions have exerted their malign influence makes a compelling charge sheet against the UK economy’s excessive financialisation.

    Brown’s focus is not on the highest performing parts of manufacturing – chemicals, pharmaceuticals and aerospace – but on what he describes as the backbone of the manufacturing sector – medium technology engineering companies, usually operating business-to-business, selling the components of finished products in highly competitive, international supply chains. The book is a combination of autobiography, analysis and polemic. The focus of the book reflects Brown’s own experience managing engineering firms in the UK and Europe, and it’s his own personal reflections that provide a convincing foundation for his wider conclusions.

    His analysis rehearses the decline of the UK’s engineering sector, pointing to the wider undesirable consequences of this decline, both at the macro level, in terms of the UK’s overall declining productivity growth and its worsening balance of payments position, and at the micro level. He is particularly concerned by the role of the decline of manufacturing in hollowing out the mid-level of the jobs market, and exacerbating the UK’s regional inequality. He talks about the development of a “caste system of the southern Brahmins, who can’t be expected to leave the oxygen of London, and the northern Untouchables who should consider themselves lucky just to have a job”.

    This leads on to his polemic – that the decline of the UK’s engineering firms was not inevitable, and that its consequences have been regrettable, severe, and will be difficult to reverse.

Brown is not blind to the industry’s own failings. Far from it – the autobiographical sections make clear what he saw was wrong with the UK’s engineering industry at the beginning of his career. The quality of management was terrible and industrial relations were dreadful; he’s clear that, in the 1970s, the unions hastened the industry’s decline. But you get the strong impression that he believes management and unions at the time deserved each other; a chronic lack of investment in new plant and machinery, and a complete failure to develop the workforce, led to a severe loss of competitiveness.

The union problem ended with Thatcher, but the decline continued and accelerated. Like many others, Brown draws an unfavourable comparison between the German and British traditions of engineering management. We hear a lot about the Mittelstand, but it’s really helpful to see in practice what the cultural and practical differences are. For example, Brown writes “German managers tend to be concerned about their people, and far slower to lay off in a downturn. Their training of both management and shop-floor employees is vastly better than the UK… in contrast many UK employers have expected skilled people to be available on demand, and if they fired them then they could rehire at will like the gaffer in the old ship yards”.

For Brown, it’s no longer the unions that are the problem – it’s the City. It’s fair to say that he takes a dim view of the elevated position of the Financial Services sector since the Big Bang – “Overall the City is a major source of problems – to UK engineering, and to society as a whole. Much that has happened there is crazy, and still is. Many of our brightest and best have been sucked in and become personally corrupted.”

    But where his book adds real value is in going beyond the rhetoric to fill out the precise details of exactly how the City serves engineering firms so badly. To Brown, the fund managers and private equity houses that exert control over firms dictate strategies to the firms that are usually pretty much the opposite of what would be required for them to achieve long-term growth. Investment in new plant and equipment is starved due to an emphasis on short-term results, and firms are forced into futile mergers and acquisitions activity, which generate big fees for the advisors but are almost always counterproductive for the long-term sustainability of the firms, because they force them away from developing long-term, focused strategies. These criticisms echo many made by John Kay in his 2012 report, which Brown cites with approval, combined with disappointment that so few of the recommendations have been implemented.

    “I do not suffer fools gladly”, says Brown, a comment which sets the tone for his discussion of the fund management industry. While he excoriates fund managers for their lack of diligence and technical expertise, he condemns the lending banks for outright unethical and predatory behaviour, deliberately driving distressed companies into receivership, all the time collecting fees for themselves and their favoured partners, while stiffing the suppliers and trade creditors. The well-publicised malpractice of RBS’s “Global Restructuring Group” offers just one example.

    One very helpful section of the book discusses the way Private Equity operates. Brown makes the very important point that not enough people understand the difference between Venture Capital and Private Equity. The former, Brown believes, represents technically sophisticated investors creating genuine new value –
    “investing real equity, taking real risks, and creating value, not just transferring it”.

    But what too many politicians, and too much of the press fail to realise is that genuine Venture Capital in the UK is a very small sector – in 2014, only £0.3 billion out of a total £4.3 billion invested by BVCA members fell into this category. Most of the investment is Private Equity, in which the investments are in existing assets.

“The PE houses’ basic model is to buy companies as cheaply as possible, seek to “enhance” them, and then sell them for as much as possible in only three years’ time, so it is extremely short-termist. They “invest” money in buying the shares of these companies from the previous owners, but they invest as little as possible into the actual companies themselves; this crucial distinction is often completely misunderstood by the government and the media who applaud the PE houses for the billions they are “investing” in British industry… in fact, much more cash is often extracted from these companies in dividends than is ever invested in them”.

    To Brown, much Private Equity is simply a vehicle for large scale tax avoidance, through eliding the distinction between debt and equity in “complex structures that just adhere to the letter of the law”. These complex structures of ownership and control lead to a misalignment of risk and reward – when their investments fail, as they often do, the PE houses get some of their investment back as it is secured debt, while trade suppliers, employees and the taxpayer get stiffed.

    To be more positive, what does Brown regard as the ingredients for success for an engineering firm? His list includes:

  • an international outlook, stressing the importance of being in the most competitive markets to understand your customers and the directions of the wider industry;
  • a long-term vision for growth, stressing innovation, R&D, and investment in latest equipment;
  • conservative finance, keeping a strong balance sheet to avoid being knocked off course by the inevitable ups and downs of the markets, allowing the firm to keep control of its own destiny;
  • a focus on the quality of people – with managements who understand engineering and are not just from a financial background, and excellent training for the shop floor workers.
The book focuses on manufacturing and engineering, but I suspect many of its lessons have a much wider applicability. People interested in economic growth and industrial strategy necessarily, and rightly, focus on statistics, but this book offers an invaluable additional dimension of ground truth to these discussions.

    What drives productivity growth in the UK economy?

    How do you get economic growth? Economists have a simple answer – you can put in more labour, by having more people working for longer hours, or you can put in more capital, building more factories or buying more machines, or – and here things get a little more sketchy – you can find ways of innovating, of getting more outputs out of the same inputs. In the framework economists have developed for thinking about economic growth, the latter is called “total factor productivity”, and it is loosely equated with technological progress, taking this in its broadest sense. In the long run it is technological progress that drives improved living standards. Although we may not have a great theoretical handle on where total factor productivity comes from, its empirical study should tell us something important about the sources of our productivity growth. Or, in our current position of stagnation, why productivity growth has slowed down so much.
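For concreteness, here is the bookkeeping behind a total factor productivity estimate in its simplest form – a Cobb-Douglas growth-accounting sketch with made-up numbers, not the actual method or parameters used in the EU KLEMS accounts.

```python
# Minimal growth-accounting sketch: with a Cobb-Douglas production function
# Y = A * K^alpha * L^(1-alpha), TFP growth (the "Solow residual") is what is
# left of output growth after subtracting the share-weighted growth of capital
# and labour inputs. The numbers below are invented, purely to show the logic.

def tfp_growth(output_growth, capital_growth, labour_growth, capital_share=0.35):
    """Solow residual: output growth not explained by growth in inputs."""
    return (output_growth
            - capital_share * capital_growth
            - (1 - capital_share) * labour_growth)

# e.g. 2.0% output growth, 2.5% capital growth, 1.0% labour growth
print(f"TFP growth ~{tfp_growth(0.020, 0.025, 0.010):.2%} a year")   # ~0.48%
```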

    Of course, the economy is not a uniform thing – some parts of it may be showing very fast technological progress, like the IT industry, while other parts – running restaurants, for example, might show very little real change over the decades. These differences emerge from the sector based statistics that have been collected and analysed for the EU countries by the EU KLEMS Growth and Productivity Accounts database.

    Sector percentage of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015. Data from EU KLEMS Growth and Productivity Accounts database.

    Here’s a very simple visualisation of some key results of that data set for the UK. For each sector, the relative importance of the sector to the economy as a whole is plotted on the x-axis, expressed as a percentage of the gross value added of the whole economy. On the y-axis is plotted the total change in total factor productivity over the whole 17 year period covered by the data. This, then, is the factor by which that sector has produced more output than would be expected on the basis of additional labour and capital. This may tell us something about the relative effectiveness of technological progress in driving productivity growth in each of these sectors.
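For anyone who wants to reproduce a plot of this kind, a sketch of the sort of script involved is below. The file name and column names are hypothetical stand-ins for however you arrange your own sector-level extract of the EU KLEMS data.

```python
# Sketch of the scatter plot described above: GVA share of each sector on the
# x-axis, total factor productivity change over 1998-2015 on the y-axis.
# "uk_klems_sectors.csv" and its column names are hypothetical placeholders.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("uk_klems_sectors.csv")   # one row per sector

fig, ax = plt.subplots()
ax.scatter(df["gva_share_2015_pct"], df["tfp_change_1998_2015_pct"])
for _, row in df.iterrows():
    ax.annotate(row["sector"], (row["gva_share_2015_pct"], row["tfp_change_1998_2015_pct"]))

ax.axhline(0, linewidth=0.5)   # sectors below this line saw TFP fall over the period
ax.set_xlabel("Share of 2015 economy by GVA (%)")
ax.set_ylabel("Total factor productivity change, 1998-2015 (%)")
plt.show()
```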

    Broadly, one can read this graph as follows: the further right a sector is, the more important it is as a proportion of the whole economy, while the nearer the top a sector is, the more dynamic its performance has been over the 17 years covered by the data. Before a more detailed discussion, we should bear in mind some caveats. What goes into these numbers are the same ingredients as go into the measurement of GDP as a whole, so all the shortcomings of that statistic are potentially issues here.

A great starting point for understanding these issues is Diane Coyle’s book GDP: A Brief but Affectionate History. The first set of issues concerns what GDP measures and what it doesn’t measure. Lots of kinds of activity are important for the economy, but they only tend to count in GDP if money changes hands. New technology can shift these balances – if supermarkets replace humans at the checkouts by machines, the groceries still have to be scanned, but now the customer is doing the work for nothing.

Then there are some quite technical issues about how the measurements are done. These include properly accounting for improvements in quality where technology is advancing very quickly; failing to fully account for the increased information transferred through a typical internet connection will mean that overall inflation is overestimated, and productivity gains in the ICT sector understated (see e.g. A Comparison of Approaches to Deflating Telecoms Services Output, PDF). For some of the more abstract transactions in the modern economy – particularly in the banking and financial services sector – some big assumptions have to be made about where and how much value is added. For example, the method used to estimate the contribution of financial services – FISIM, for “Financial intermediation services indirectly measured” – has probably materially overstated the contribution of financial services to GDP by not handling risk correctly, as argued in this recent ONS article.

Finally, there’s the big question of whether increases in GDP correspond to increases in welfare. The general answer to this question is, obviously, not necessarily. Unlike some commentators, I don’t take this to mean that we shouldn’t take any notice of GDP – it is an important indicator of the health of an economy and its potential to supply people’s needs. But it does need looking at critically. A glazing company that spent its nights breaking shop windows and its days mending them would be increasing GDP, but not doing much for welfare – a ridiculous example, but there’s a continuum running from what the economist William Baumol called unproductive entrepreneurship, through the more extractive varieties of capitalism documented by Acemoglu and Robinson, to outright organised crime.

    To return to our plot, we might focus first on three dynamic sectors – information and communications, manufacturing, and professional, scientific, technical and admin services. Between them, these sectors account for a bit more than a quarter of the economy, and have shown significant improvements in total factor productivity over the period. In this sense it’s been ICT, manufacturing and knowledge-based services that have driven the UK economy over this period.

Next we have a massive sector that is important but not dynamic, in the sense that it has shown slightly negative total factor productivity growth over the period. This comprises community, personal and social services – notably including education, health and social care. Of course, in service activities like health and social care it’s very easy to mischaracterise as a lowering of productivity a change that actually corresponds to an increase in welfare. On the other hand, I’ve argued elsewhere that we’ve not devoted enough attention to the kinds of technological innovation in the health and social care sectors that could deliver genuine productivity increases.

    Real estate comprises a sector that is both significant in size, and has shown significant apparent increases in total factor productivity. This is a point at which I think one should question the nature of the value added. A real estate business makes money by taking a commission on property transactions; hence an increase in property prices, given constant transaction volume, leads to an apparent increase in productivity. Yet I’m not convinced that a continuous increase in property prices represents the economy generating real value for people.

    Finance and insurance represents a significant part of the economy – 7% – but its overall long term increase in total factor productivity is unimpressive, and probably overstated. The importance of this sector in thinking about the UK economy represents a distortion of our political economy.

    The big outlier at the bottom left of the plot is mining and quarrying, whose total factor productivity has dropped by 50% – what isn’t shown is that its share of the economy has substantially fallen over the period too. The biggest contributor to this sector is North Sea oil, whose production peaked around 2000 and which has since been rapidly falling. The drop in total factor productivity does not, of course, mean that technological progress has gone backwards in this sector. Quite the opposite – as the easy oil fields are exhausted, more resource – and better technology – are required to extract what remains. This should remind us of one massive weakness in GDP as a sole measure of economic progress – it doesn’t take account of the balance sheet, of the non-renewable natural resources we use to create that GDP. The North Sea oil has largely gone now and this represents an ongoing headwind to the UK economy that will need more innovation in other sectors to overcome.

    This approach is limited by the way the economy needs to be divided up into sectors. Of course, this sectoral breakdown is very coarse – within each sector there are likely to be outliers with very high total productivity growth which dramatically pull up the average of the whole sector. More fundamentally, it’s not obvious that the complex, networked nature of the modern economy is well captured by these rather rigid barriers. Many of the most successful manufacturing enterprises add big value to their products with the services that come attached to them, for example.

    We can look into the EU Klems data at a slightly finer grained level; the next plot shows importance and dynamism for the various subsectors of manufacturing. This shows well the wide dispersions within the overall sectors – and of course within each of these subsectors there will be yet more dispersion.

    Sub-sector fraction of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015 for subsectors of manufacturing. Data from EU KLEMS Growth and Productivity Accounts database.

    The results are perhaps unsurprising – areas traditionally considered part of high value manufacturing – transport equipment and chemicals, which include aerospace, automotive, pharmaceuticals and speciality chemicals – are found in the top right quadrant, important in terms of their share of the economy, dynamic in terms of high total factor productivity growth. The good total factor productivity performance of textiles is perhaps more surprising, for an area often written off as part of our industrial heritage. It would be interesting to look in more detail at what’s going on here, but I suspect that a big part of it could be the value that can be added by intangibles like branding and design. Total factor productivity is not just about high tech and R&D, important though the latter is.

    Clearly this is a very superficial look at a very complicated area. Even within the limitations of the EU Klems data set, I’ve not considered how rates of TFP growth have varied by time – before and after the global financial crisis, for example. Nor have I considered the way shifts between sectors have contributed to overall changes in productivity across the economy – I’ve focused only on rates, not on starting levels. And of course, we’re talking here about history, which isn’t always a good guide to the future, where there will be a whole new set of technological opportunities and competitive challenges. But as we start to get serious about industrial strategy, these are the sorts of questions that we need to be looking into.

    Eroom’s law strikes again

    “Eroom’s law” is the name given by pharma industry analyst Jack Scannell to the observation that the productivity of research and development in the pharmaceutical industry has been falling exponentially for decades – discussed in my earlier post Productivity: in R&D, healthcare and the whole economy. The name is an ironic play on Moore’s law, the statement that the number of transistors on an integrated circuit increases exponentially.

    It’s Moore’s law that has underlain the orders of magnitude increases in computing power we’ve grown used to. But if computing power has been increasing exponentially, what can we say about the productivity of the research and development effort that’s underpinned those increases? It turns out that in the semiconductor industry, too, research and development productivity has been falling exponentially. Eroom’s law describes the R&D effort needed to deliver Moore’s law – and the unsustainability of this situation must surely play a large part in the slow-down in the growth in computing power that we are seeing now.

    Falling R&D productivity has been explicitly studied by the economists Nicholas Bloom, Charles Jones, John Van Reenen and Michael Webb, in a paper called “Are ideas getting harder to find?” (PDF). I discussed an earlier version of this paper here – I made some criticisms of the paper, though I think its broad thrust is right. One of the case studies the economists look at is indeed the electronics industry, and there’s one particular problem with their reasoning that I want to focus on here – though fixing this actually makes their overall argument stronger.

    The authors estimate the total world R&D effort underlying Moore’s law, and conclude: “The striking fact, shown in Figure 4, is that research effort has risen by a factor of 18 since 1971. This increase occurs while the growth rate of chip density is more or less stable: the constant exponential growth implied by Moore’s Law has been achieved only by a massive increase in the amount of resources devoted to pushing the frontier forward.”

    R&D expenditure in the microelectronics industry, showing Intel’s R&D expenditure, and a broader estimate of world microelectronics R&D including semiconductor companies and equipment manufacturers. Data from the “Are Ideas Getting Harder to Find?” dataset on Chad Jones’s website. Inflation corrected using the US GDP deflator.

    The growth in R&D effort is illustrated in my first plot, which compares the growth of world R&D expenditure in microelectronics with the growth of computing power. I plot two measures from the Bloom/Jones/van Reenen/Webb data set – the R&D expenditure of Intel, and an estimate of broader world R&D expenditure on integrated circuits, which includes both semiconductor companies and equipment manufacturers (I’ve corrected for inflation using the US GDP deflator). The plot shows an exponential period of increasing R&D expenditure, which levelled off around 2000, to rise again from 2010.

The weakness of their argument, that increasing R&D effort has been needed to maintain the same rate of technological improvement, is that it selects the wrong output measure. No-one is interested in how many transistors there are per chip – what matters to the user, and to the wider economy, is that computing power continues to increase exponentially. As I discussed in an earlier post – Technological innovation in the linear age – the fact is that the period of maximum growth in computing power ended in 2004. Moore’s law continued after this time, but the end of Dennard scaling meant that the rate of increase of computing power began to fall. This is illustrated in my second plot. This, after a plot in Hennessy & Patterson’s textbook Computer Architecture: A Quantitative Approach (6th edn) and using their data, shows the relative computing power of microprocessors as a function of their year of introduction. The solid lines illustrate 52% pa growth from 1984 to 2003, 23% pa growth from 2003 to 2011, and 9% pa growth from 2011 to 2014.

    The growth in processor performance since 1988. Data from figure 1.1 in Computer Architecture: A Quantitative Approach (6th edn) by Hennessy & Patterson.

    What’s interesting is that the slowdown in the rate of growth in R&D expenditure around 2000 is followed by a slowdown in the rate of growth of computing power. I’ve attempted a direct correlation between R&D expenditure and rate of increase of computing power in my next plot, which plots the R&D expenditure needed to produce a doubling of computer power as a function of time. This is a bit crude, as I’ve used the actual yearly figures without any smoothing, but it does seem to show a relatively constant increase of 18% per year, both for the total industry and for the Intel only figures.

    Eroom’s law at work in the semiconductor industry. Real R&D expenditure needed to produce a doubling of processing power as a function of time.
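To be explicit about how a metric like this can be constructed: the sketch below divides an annual real R&D spend by the number of performance doublings achieved per year at a given growth rate. The $12 billion figure is Intel’s 2015 R&D spend quoted below; applying it across all three growth regimes is purely illustrative, not a reproduction of the plotted series from the Bloom et al. dataset.

```python
# Sketch of "R&D expenditure per doubling of computing power": annual real R&D
# spend divided by the number of performance doublings achieved per year.
# The $12bn spend is Intel's quoted 2015 figure, used here only for illustration.

from math import log2

def rd_cost_per_doubling(annual_rd_spend, annual_perf_growth):
    """Real R&D expenditure needed to produce one doubling of computing power."""
    doublings_per_year = log2(1 + annual_perf_growth)
    return annual_rd_spend / doublings_per_year

for growth in (0.52, 0.23, 0.09):   # the three growth regimes in the performance plot
    cost = rd_cost_per_doubling(12e9, growth)
    print(f"{growth:.0%} p.a. performance growth: ~${cost / 1e9:.0f}bn per doubling")
# ~$20bn, ~$40bn and ~$97bn respectively -- at today's ~9% growth, close to the
# ~$100bn per doubling figure mentioned in the text.
```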

    What is the cause of this exponential fall in R&D productivity? A small part reflects Baumol’s cost disease – R&D is essentially a service business done by skilled people, who command wages that reflect the growth of the whole economy rather than their own output (the Bloom et al paper accounts for this to some extent by deflating R&D expenditure by scientific salary levels rather than inflation). But this is a relatively small effect compared to the more general problem of the diminishing returns to continually improving an already very complex and sophisticated technology.

    The consequence seems inescapable – at some point the economic returns of improving the technology will not justify the R&D expenditure needed, and companies will stop making the investments. We seem to be close to that point now, with Intel’s annual R&D spend – $12 billion in 2015 – only a little less than the entire R&D expenditure of the UK government, and the projected cost of doubling processor power from here exceeding $100 billion. The first sign has been the increased concentration of the industry. For the 10 nm node, only four companies remained in the game – Intel, Samsung, the Taiwanese foundry company TSMC, and GlobalFoundries, which acquired the microelectronics capabilities of AMD and IBM. As the 7 nm node is being developed, GlobalFoundries has announced that it too is stepping back from the competition to produce next-generation chips, leaving only 3 companies at the technology frontier.

    The end of this remarkable half-century of exponential growth in computing power has arrived – and it’s important that economists studying economic growth come to terms with this. However, this doesn’t mean that innovation comes to an end too. All periods of exponential growth in particular technologies must eventually saturate, whether as a result of physical or economic limits. For economic growth to continue, entirely new technologies must appear to replace them. The urgent question we face is what new technology is now on the horizon to drive economic growth from here.

    Innovation, regional economic growth, and the UK’s productivity problem

    A week ago I gave a talk with this title at a conference organised by the Smart Specialisation Hub. This organisation was set up to help regional authorities develop their economic plans; given the importance of local industrial strategies in the government’s overall industrial strategy, its role becomes all the more important.

    Other speakers at the conference represented central government, the UK’s innovation agency InnovateUK, and the Smart Specialisation Hub itself. Representing no-one but myself, I was able to be more provocative in my own talk, which you can download here (PDF, 4.7 MB).

    My talk had four sections. Opening with the economic background, I argued that the UK’s stagnation in productivity growth and its regional economic inequality have broken our political settlement. Looking at what’s going on in Westminster at the moment, I don’t think this is an exaggeration.

    I went on to discuss the implications of the 2.4% R&D target – it’s not ambitious by developed world standards, but will be a stretch from our current position, as I discussed in an earlier blogpost: Reaching the 2.4% R&D intensity target.

    Moving on to the regional aspects of research and innovation policy, I argued (as I did in this blog post: Making UK Research and Innovation work for the whole UK) that the UK’s regional concentration of R&D (especially public sector) is extreme and must be corrected. To illustrate this point, I used this version of Tom Forth’s plot splitting out the relative contributions of public and private sector to R&D regionally.

    I argued that this plot gives a helpful framework for thinking about the different policy interventions needed in different parts of the country. I summarised this in this quadrant diagram [1].

    Finally, I discussed the University of Sheffield’s Advanced Manufacturing Research Centre as an example of the kind of initiative that can help regenerate the economy of a de-industrialised area. Here a focus on translational research & skills at all levels both drives inward investment by international firms at the technology frontier & helps the existing business base upgrade.

    I set this story in the context of Shih and Pisano’s notion of the “industrial commons” [2] – a set of resources that supports the collective knowledge, much of it tacit, that drives innovations in products and processes in a successful cluster. A successful industrial commons is rooted in a combination of large anchor companies & institutions, networks of supplying companies, R&D facilities, informal knowledge networks and formal institutions for training and skills. I argue that a focus of regional economic policy should be a conscious attempt to rebuild the “industrial commons” in an industrial sector which allows the opportunities of new technology to be embraced, yet which works with the grain of the existing industry and institutional base. “Smart specialisation” provides a good framework for identifying the right places to look.

    1. As a participant later remarked, I’ve omitted the South East from this diagram – it should be in the bottom right quadrant, albeit with less business R&D than East Anglia, though with the benefits more widely spread.

    2. See Pisano, G. P., & Shih, W. C. (2009). Restoring American Competitiveness. Harvard Business Review, 87(7-8), 114–125.

    The semiconductor industry and economic growth theory

    In my last post, I discussed how “econophysics” has been criticised for focusing on exchange, not production – in effect, for not concerning itself with the roots of economic growth in technological innovation. Of course, some of that technological innovation has arisen from physics itself – so here I talk about what economic growth theory might learn from an important episode of technological innovation with its origins in physics – the development of the semiconductor industry.

    Economic growth and technological innovation

    In my last post, I criticised econophysics for not talking enough about economic growth – but to be fair, it’s not just econophysics that suffers from this problem – mainstream economics doesn’t have a satisfactory theory of economic growth either. And yet economic growth and technological innovation provide an all-pervasive background to our personal economic experience. We expect to be better off than our parents, who were themselves better off than our grandparents. Economics without a theory of growth and innovation is like physics without an arrow of time – a marvellous intellectual construction that misses the most fundamental observation of our lived experience.

    Defenders of economics at this point will object that it does have theories of growth, and there are even some excellent textbooks on the subject [1]. Moreover, they might remind us, wasn’t the Nobel Prize for economics awarded this year to Paul Romer, precisely for his contribution to theories of economic growth? This is indeed so. The mainstream approach to economic growth pioneered by Robert Solow regarded technological innovation as something externally imposed, and Romer’s contribution has been to devise a picture of growth in which technological innovation arises naturally from the economic models – the “post-neoclassical endogenous growth theory” that ex-Prime Minister Gordon Brown was so (unfairly) lampooned for invoking.

    This body of work has undoubtedly highlighted some very useful concepts, stressing the non-rivalrous nature of ideas and the economic basis for investments in R&D, especially for the day-to-day business of incremental innovation. But it is not a theory in the sense that a physicist would understand the term – it doesn’t explain past economic growth, so it can’t make predictions about the future.

    How the information technology revolution really happened

    Perhaps to understand economic growth we need to turn to physics again – this time, to the economic consequences of the innovations that physics provides. Few would disagree that a – perhaps the – major driver of technological innovation, and thus economic growth, over the last fifty years has been the huge progress in information technology, with the exponential growth in the availability of computing power that is summed up by Moore’s law.

    The modern era of information technology rests on the solid-state transistor, which was invented by William Shockley at Bell Labs in the late 1940’s (with Brattain and Bardeen – the three received the 1956 Nobel Prize for Physics). In 1956 Shockley left Bell Labs and went to Palo Alto (in what would later be called Silicon Valley) to found a company to commercialise solid-state electronics. However, his key employees in this venture soon left – essentially because he was, by all accounts, a horrible human being – and founded Fairchild Semiconductor in 1957. Key figures amongst those refugees were Gordon Moore – of eponymous law fame – and Robert Noyce. It was Noyce who, in 1960, made the next breakthrough, inventing the silicon integrated circuit, in which a number of transistors and other circuit elements were combined on a single slab of silicon to make an integrated functional device. Jack Kilby, at Texas Instruments, had, more or less at the same time, independently developed an integrated circuit on germanium, for which he was awarded the 2000 Physics Nobel prize (Noyce, having died in 1990, was unable to share this). Integrated circuits didn’t take off immediately, but according to Kilby it was their use in the Apollo mission and the Minuteman ICBM programme that provided a turning point in their acceptance and widespread use[2] – the Minuteman II guidance and control system was the first mass produced computer to rely on integrated circuits.

    Moore and Noyce founded the electronics company Intel in 1968, to focus on developing integrated circuits. Moore had already, in 1965, formulated his famous law about the exponential growth with time of the number of transistors per integrated circuit. The next step was to incorporate all the elements of a computer on a single integrated circuit – a single piece of silicon. Intel duly produced the first commercially available microprocessor – the 4004 – in 1971, though this had been (possibly) anticipated by the earlier microprocessor that formed the flight control computer for the F14 Tomcat fighter aircraft. From these origins emerged the microprocessor revolution and personal computers, with their giant wave of derivative innovations, leading up to the current focus on machine learning and AI.

    Lessons from Moore’s law for growth economics

    What should be clear from this very brief account is that classical theories of economic growth cannot account for this wave of innovation. The motivations that drove it were not economic – they arose from a powerful state with enormous resources at its disposal pursuing complex, but entirely non-economic, projects – such as the goal of being able to land a nuclear weapon on any point of the earth’s surface with an accuracy of a few hundred metres.

    Endogenous growth theories perhaps can give us some insight into the decisions companies made about R&D investment and the wider spillovers that such spending led to. They would need to take account of the complex institutional landscape that gave rise to this innovation. This isn’t simply a distinction between public and private sectors – the original discovery of the transistor was made at Bell Labs – nominally in the private sector, but sustained by monopoly rents arising from government action.

    The landscape in which this innovation took place seems much more complex than growth economics – with its array of firms employing undifferentiated labour and capital, all benefiting from some kind of soup of spillovers – is able to handle. Semiconductor fabs are perhaps the most capital intensive plants in the world, with just a handful of bunny-suited individuals tending a clean-room full of machines that individually might be worth tens or even hundreds of millions of dollars. Yet the value of those machines represents, as much as anything physical, the embodied value of the intangible investments in R&D and process know-how.

    How are the complex networks of equipment and materials manufacturers coordinated to make sure technological advances in different parts of this system happen at the right time and in the right sequence? These are independent companies operating in a market – but the market alone has not been sufficient to transmit the information needed to keep it coordinated. An enormously important mechanism for this coordination has been the National Technology Roadmap for Semiconductors (later the International Technology Roadmap for Semiconductors), initiated by a US trade body, the Semiconductor Industry Association. This was an important social innovation which allowed companies to compete in meeting collaborative goals; it was supported by the US government by the relaxation of anti-trust law and the foundation of a federally funded organisation to support “pre-competitive” research – SEMATECH.

    The involvement of the US government reflected the importance of the idea of competition between nation states in driving technological innovation. Because of the cold war origins of the integrated circuit, the original competition was with the Soviet Union, which created an industry to produce ICs for military use, based around Zelenograd. The degree to which this industry was driven by indigenous innovation as against the acquisition of equipment and know-how from the west isn’t clear to me, but it seems that by the early 1980’s the gap between Soviet and US achievements was widening, contributing to the sense of stagnation of the later Brezhnev years and the drive for economic reform under Gorbachev.

    From the 1980’s, the key competitor was Japan, whose electronics industry had been built up in the 1960’s and 70’s, driven not by defence but by consumer products such as transistor radios, calculators and video recorders. In the mid-1970’s the Japanese government’s MITI provided substantial R&D subsidies to support the development of integrated circuits, and by the late 1980’s Japan appeared within sight of achieving dominance, to the dismay of many commentators in the USA.

    That didn’t happen, and Intel still remains at the technological frontier. Its main rivals now are Korea’s Samsung and Taiwan’s TSMC. Their success reflects different versions of the East Asian developmental state model; Samsung is Korea’s biggest industrial conglomerate (or chaebol), whose involvement in electronics was heavily sponsored by its government. TSMC was a spin-out from a state-run research institute in Taiwan, ITRI, which grew by licensing US technology and then very effectively driving process improvements.

    Could one build an economic theory that encompasses all this complexity? For me, the most coherent account has been Bill Janeway’s description of the way government investment combines with the bubble dynamics that drives venture capitalism, in his book “Doing Capitalism in the Innovation Economy”. Of course, the idea that financial bubbles are important for driving innovation is not new – that’s how the UK got a railway network, after all – but the econophysicist Didier Sornette has extended this to introduce the idea of a “social bubble” driving innovation[3].

    This long story suggests that the ambition of economics to “endogenise” innovation is a bad idea, because history tells us that the motivations for some of the most significant innovations weren’t economic. To understand innovation in the past, we don’t just need economics, we need to understand politics, history, sociology … and perhaps even natural science and engineering. The corollary of this is that devising policy solely on the basis of our current theories of economic growth is likely to lead to disappointing outcomes. At a time when the remarkable half-century of exponential growth in computing power seems to be coming to an end, it’s more important than ever to learn the right lessons from history.

    [1] I’ve found “Introduction to Modern Economic Growth”, by Daron Acemoglu, particularly useful

    [2] Jack Kilby: Nobel Prize lecture, https://www.nobelprize.org/uploads/2018/06/kilby-lecture.pdf

    [3] See also that great authority, The Onion: “Recession-Plagued Nation Demands New Bubble to Invest In”.

    The Physics of Economics

    This is the first of two posts which began life as a single piece with the title “The Physics of Economics (and the Economics of Physics)”. In the first section, here, I discuss some ways physicists have attempted to contribute to economics. In the second half, I turn to the lessons that economics should learn from the history of a technological innovation with its origin in physics – the semiconductor industry.

    Physics and economics are two disciplines which have quite a lot in common – they’re both mathematical in character, many of their practitioners are not short of intellectual self-confidence – and they both have imperialist tendencies towards their neighbouring disciplines. So the interaction between the two fields should be, if nothing else, interesting.

    The origins of econophysics

    The most concerted attempt by physicists to colonise an area of economics has been in the behaviour of financial markets – in the field which calls itself “econophysics”. Actually, at its origins, the traffic went both ways – the mathematical theory of random walks that Einstein developed to explain the phenomenon of Brownian motion had been anticipated by the French mathematician Bachelier, who derived the theory to explain the movements of stock markets. Much later, the economic theory that markets are efficient brought this line of thinking back into vogue – it turns out that financial markets can quite often be modelled as simple random walks – but not quite always. The random steps that markets take aren’t drawn from a Gaussian distribution – the distribution has “fat tails”, so rare events – like big market crashes – aren’t anywhere near as rare as simple theories assume.
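
    A toy simulation makes the point about fat tails vivid. The sketch below – purely illustrative, and not a model of any real market – draws a million steps from a Gaussian and from a fat-tailed Student-t distribution, scaled to the same standard deviation; moves beyond five standard deviations, essentially absent in the Gaussian case, turn up routinely when the tails are fat.

```python
import numpy as np

# Illustrative comparison of thin-tailed and fat-tailed step distributions,
# both rescaled to unit standard deviation. Not a model of any real market.
rng = np.random.default_rng(0)
n = 1_000_000

gaussian = rng.standard_normal(n)
fat_tailed = rng.standard_t(df=3, size=n)
fat_tailed /= fat_tailed.std()          # rescale to unit standard deviation

for name, steps in [("Gaussian", gaussian), ("Student-t (df=3)", fat_tailed)]:
    extreme_fraction = np.mean(np.abs(steps) > 5)
    print(f"{name}: fraction of steps beyond 5 sigma = {extreme_fraction:.2e}")
```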

    Empirically, it turns out that the distributions of these rare events can sometimes be described by power laws. In physics power laws are associated with what are known as critical phenomena – behaviours such as the transition from a liquid to a gas or from a magnet to a non-magnet. These phenomena are characterised by a certain universality, in the sense that the quantitative laws – typically power laws – that describe the large scale behaviour of these systems don’t strongly depend on the details of the individual interactions between the elementary objects (the atoms and molecules, in the case of magnetism and liquids) whose interaction leads collectively to the larger scale phenomenon we’re interested in.

    For “econophysicists” – whose background has often been in the study of critical phenomena – it is natural to try to situate theories of the movements of financial markets in this tradition, finding analogies with other places where power laws can be found, such as the distribution of earthquake sizes and the behaviour of sand-piles. In terms of physicists’ actual impact on participants in financial markets, though, there’s a paradox. Many physicists have found (often very lucrative) employment as quantitative traders, but the theories that academic physicists have developed to describe these markets haven’t made much impact on the practitioners of financial economics, who have their own models to describe market movements.

    Other ideas from physics have made their way into discussions about economics. Much of classical economics depends on ideas like the “representative household” or the “representative firm”. Physicists with a background in statistical mechanics recognise this sort of approach as akin to a “mean field theory”. The idea that a complex system is well represented by its average member is one that can be quite fruitful, but in some important circumstances fails – and fails badly – because the fluctuations around the average become as important as the average itself. This motivates the idea of agent based models, to which physicists bring the hope that even simple “toy” models can bring insight. The Schelling model is one such very simple model that came from economics, but which has a formal similarity with some important models in physics. The study of networks is another place where one learns that the atypical can be disproportionately important.
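
    As an illustration of just how simple such toy models can be, here is a minimal sketch of a Schelling-style segregation model: two types of agent on a grid, each moving to a random empty cell if fewer than 30% of its occupied neighbours share its type. Even this mild preference produces a markedly more segregated pattern after a few dozen rounds. The grid size, threshold and number of steps are arbitrary choices made purely for illustration.

```python
import random

# Minimal Schelling-style segregation model on a 30x30 grid with wrap-around
# edges. An agent is unhappy, and moves to a random empty cell, if fewer than
# THRESHOLD of its occupied neighbours share its type. Parameters are
# arbitrary illustrative choices.
SIZE, THRESHOLD, STEPS = 30, 0.3, 50
random.seed(1)

cells = ["A"] * 405 + ["B"] * 405 + [None] * (SIZE * SIZE - 810)  # ~10% empty
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbours(x, y):
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappy(x, y):
    agent = grid[x][y]
    if agent is None:
        return False
    occupied = [n for n in neighbours(x, y) if n is not None]
    return bool(occupied) and sum(n == agent for n in occupied) / len(occupied) < THRESHOLD

def mean_similarity():
    """Average fraction of an agent's occupied neighbours that share its type."""
    scores = []
    for x in range(SIZE):
        for y in range(SIZE):
            agent = grid[x][y]
            occupied = [n for n in neighbours(x, y) if n is not None]
            if agent is not None and occupied:
                scores.append(sum(n == agent for n in occupied) / len(occupied))
    return sum(scores) / len(scores)

print(f"initial mean similarity: {mean_similarity():.2f}")
for _ in range(STEPS):
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))
print(f"final mean similarity:   {mean_similarity():.2f}")
```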

    If markets are about information, then physics should be able to help…

    One very attractive emerging application of ideas from physics to economics concerns the place of information. Friedrich Hayek stressed the compelling insight that one can think of a market as a mechanism for aggregating information – but a physicist should understand that information is something that can be quantified, and (via Shannon’s theory) that there are hard limits on how much information can be transmitted in a physical system. Jason Smith’s research programme builds on this insight to analyse markets in terms of an information equilibrium[1].

    Some criticisms of econophysics

    How significant is econophysics? A critique from some (rather heterodox) economists – Worrying trends in econophysics – is now more than a decade old, but still stings (see also this commentary from the time by Cosma Shalizi – Why Oh Why Can’t We Have Better Econophysics?). Some of the criticism is methodological – and could be mostly summed up by saying, just because you’ve got a straight bit on a log-log plot doesn’t mean you’ve got a power law. Some criticism is about the norms of scholarship – in brief: read the literature and stop congratulating yourselves for reinventing the wheel.
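
    The log-log point is easy to demonstrate. In the sketch below – again purely illustrative – the samples are drawn from a lognormal distribution, which has no power-law tail at all, yet a straight line fitted to the upper tail of the empirical distribution on log-log axes still looks very convincing. Telling a genuine power law apart from look-alikes needs more careful statistics, of the kind set out by Clauset, Shalizi and Newman.

```python
import numpy as np

# Illustration only: lognormal data (which has no power-law tail) can still
# give a convincingly straight line on a log-log plot of its upper tail.
rng = np.random.default_rng(42)
samples = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

x = np.sort(samples)[-5_000:]                    # largest 5% of the sample
ccdf = np.arange(len(x), 0, -1) / len(samples)   # empirical P(X >= x)

log_x, log_ccdf = np.log10(x), np.log10(ccdf)
slope, intercept = np.polyfit(log_x, log_ccdf, 1)
r_squared = np.corrcoef(log_x, log_ccdf)[0, 1] ** 2
print(f"apparent power-law exponent ~ {-slope:.2f}, R^2 of straight-line fit = {r_squared:.3f}")
```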

    But the most compelling criticism of all is about the choice of problem that econophysics typically takes. Most attention has been focused on the behaviour of financial markets, not least because these provide a wealth of detailed data to analyse. But there’s more to the economy – much, much more – than the financial markets. More generally, the areas of economics that physicists have tended to apply themselves to have been about exchange, not production – studying how a fixed pool of resources can be allocated, not how the size of the pool can be increased.

    [1] For a more detailed motivation of this line of reasoning, see this commentary, also from Cosma Shalizi on Francis Spufford’s great book “Red Plenty” – “In Soviet Union, Optimization Problem Solves You”.

    Between promise, fear and disillusion: two decades of public engagement around nanotechnology

    I’m giving a talk with this title at the IEEE Nanotechnology Materials and Devices Conference (NMDC) in Portland, OR on October 15th this year. The abstract is below, and you can read the conference paper here: Between promise, fear and disillusion (PDF).

    Nanotechnology emerged as a subject of public interest and concern towards the end of the 1990’s. A couple of decades on, it’s worth looking back at the way the public discussion of the subject has evolved. On the one hand we had the transformational visions associated with the transhumanist movement, together with some extravagant promises of new industries and medical breakthroughs. The flipside of these were worries about profound societal changes for the worse and, less dramatically, about the potential for environmental and health impacts from the release of nanoparticles.

    Since then we’ve seen some real achievements in the field, both scientific and technological, but also a growing sense of disillusion with technological progress, associated with slowing economic growth in the developed world. What should we learn from this experience? What’s the right balance between emphasising the potential of emerging technologies and cautioning against over-optimistic claims?

    Read the full conference paper here: Between promise, fear and disillusion (PDF).

    The UK’s top six productivity underperformers

    The FT has been running a series of articles about the UK’s dreadful recent productivity performance, kicked off with this very helpful summary – Britain’s productivity crisis in eight charts. One important aspect of this was to focus on the (negative) contribution of formerly leading sectors of the economy which have, since the financial crisis, underperformed:

    “Computer programming, energy, finance, mining, pharmaceuticals and telecoms — which together account for only one-fifth of the economy — generated three-fifths of the decline in productivity growth.”

    The original source of this striking statistic is a paper by Rebecca Riley, Ana Rincon-Aznar and Lea Samek – Below the Aggregate: A Sectoral Account of the UK Productivity Puzzle.

    What this should stress is that there’s no single answer to the productivity crisis. We need to look in detail at different industrial sectors, different regions of the UK, and identify the different problems they face before we can work out the appropriate policy responses.

    So what can we say about what’s behind the underperformance of each of these six sectors, and what lessons should policy-makers learn in each case? Here are a few preliminary thoughts.

    Mining. This is dominated by North Sea Oil. The oil is running out, and won’t be coming back – production peaked in 2000; what oil is left is more expensive and difficult to get out.
    Lessons for policy makers: more recognition is needed that the UK’s prosperity in the 90’s and early 2000’s depended as much on the accident of North Sea oil as on any particular strength of the policy framework.

    Finance. It’s not clear to me how much of the apparent pre-crisis productivity boom was real, but post-crisis increased regulation and greater capital requirements have reduced apparent rates of return in financial services. This is as it should be.
    Lessons for policy makers: this sector is the problem, not the solution, so calls to relax regulation should be resisted, and so-called “innovation” that in practice amounts to regulatory arbitrage discouraged.

    The end of North Sea oil and the finance bubble cannot be reversed – these are headwinds that the economy has to overcome. We have to find new sources of productivity growth rather than looking back nostalgically at these former glories (for example, there’s a risk that the enthusiasm for fracking and fintech represent just such nostalgia).

    Energy. Here, a post-privatisation dysfunctional pseudo-market has prioritised sweating existing assets rather than investing. Meanwhile there’s been an unclear and inconsistent government policy environment; sometimes the government has willed the ends without providing the means (e.g. nuclear new build), elsewhere it has introduced perverse and abrupt changes of tack (e.g. in its support for onshore wind and solar).
    Lessons for policy makers: develop a rational, long-term energy strategy that will deliver the necessary decarbonisation of the energy economy. Then stick to it, driving innovation to support the strategy. For more details, read chapter 4 – Decarbonisation of the energy economy – of the Industrial Strategy Commission’s final report.

    Computer programming. Here I find myself on less sure ground. Are we seeing the effects of increasing overseas outsourcing and competition, for example to India’s growing IT industry? Are we seeing the effect of more commoditisation of computer programming, with new business models such as “software as a service”?

    Telecoms. Again, here I’m less certain of what’s been going on. Are we seeing the effect of lengthening product cycles as the growth in processor power slows? Is this the effect of overseas competition – for example, rapidly growing Chinese firms like Huawei – moving up the value chain? Here it’s also likely that measurement problems – in correctly accounting for improvements in quality – will be most acute.

    Pharmaceuticals. As my last blogpost outlined, productivity growth in pharmaceuticals depends on new products being developed through formal R&D, their value being protected by patents. There has been a dramatic, long-term fall in the productivity of pharma R&D, so it is unsurprising that this is now feeding through into reduced labour productivity.
    Lessons for policy makers: see the recent NESTA report “The Biomedical Bubble”.

    Many of these issues were already discussed in my 2016 SPERI paper Innovation, research and the UK’s productivity crisis. Two years on, the productivity crisis seems even more pressing, and as the FT series illustrates, is receiving more attention from policy makers and economists (though still not enough, in view of its fundamental importance for living standards and fiscal stability). The lesson I would want to stress is that, to make progress, policy makers and economists need to go beyond generalities, and pay more attention to the detailed particulars of individual industries, sectors and regions, and the different ways innovation takes place – or hasn’t been taking place – within them.