Bleach and the industrial revolution in textiles

Sunshine is the best disinfectant, they say – but if you live in Lancashire, you might want to have some bleach as a backup. Sunshine works to bleach clothes and hair too – and before the invention of the family of chlorine-based chemicals that are commonly known as bleach, the Lancashire textile industry – like all other textile industries around the world – depended on sunshine to whiten the naturally beige colour of fabrics like cotton and linen. It’s this bright whiteness that has always been prized in fine fabrics, and it is a necessary precondition for creating bright colours and patterns through dyeing.

As the introduction of new machinery to automate spinning and weaving – John Kay’s flying shuttle, the water frame, and Crompton’s spinning mule – hugely increased the potential output of the textile industry, the need to rely on Lancashire’s feeble sunshine to bleach fabrics, in complex processes that could take weeks, was a significant bottleneck. The development of chemical bleaches was a response to this: a significant but perhaps under-appreciated ingredient of the industrial revolution, and an episode that demonstrates the way scientific and industrial developments went hand-in-hand at the beginning of the modern chemical industry.

It’s not obvious now, when one looks at the clothes in 17th and 18th century portraits, with their white dresses, formal shirts and collars, that the brilliant white fabrics that were the marker of their rich and aristocratic subjects were the result of a complex and expensive set of processes. Bleaching at the time involved a sequence of repeated steepings in water, boiling in lye, soaping, soaking in buttermilk (and, towards the end of this period, dilute sulphuric acid) – together with extensive “grassing” – spreading the fabrics out in the sun in “bleachfields” for periods of weeks. These expensive and time-consuming processes were a huge barrier to the expansion of the textile industry, and it was in response to this barrier that chemical bleaches were developed in the late 18th century.

The story begins with the important French chemist Claude-Louis Berthollet, who in 1785 discovered and characterised the gas we now know as chlorine, synthesising it through the oxidation of hydrochloric acid by manganese dioxide. His discovery of what he called “dephlogisticated muriatic acid” [1] was published in France, but news of it quickly reached England, not least through direct communication by Berthollet to the Royal Society in London. Only a year later, the industrialist Matthew Boulton and his engineer partner James Watt were visiting Paris; they met Berthollet, and were able to see his initial experiments showing the effect chlorine had on colours, either using the gas directly or in solution in water. The potential of the new material to transform the textile industry was obvious both to Berthollet and his visitors from England.
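
For readers who want the chemistry spelled out, the overall reaction behind this early preparation of chlorine can be written in modern notation (as an illustration – it is not given in this form in the sources):

$$\mathrm{MnO_2} + 4\,\mathrm{HCl} \rightarrow \mathrm{MnCl_2} + \mathrm{Cl_2} + 2\,\mathrm{H_2O}$$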

James Watt had a particular reason to be interested in the process – his father-in-law, James McGrigor, owned a bleaching works in Glasgow. Watt soon developed an improvement to the process for making chlorine; instead of using hydrochloric acid, he used sulphuric acid and salt, exploiting the new availability and relatively low cost of sulphuric acid since the development of the lead chamber process in 1746 by John Roebuck and Samuel Garbett. In 1787 he sent a bottle of his newly developed bleach to his father-in-law, and arranged for a ton of manganese dioxide [2] to be sent from Bristol to Glasgow to begin large-scale experiments. Work was needed to develop a practical regime for bleaching different fabrics, to find methods to assay the bleaching power of the solutions, and to develop the apparatus of this early chemical engineering – what to make the vessels out of, how to handle the fabric. By the end of the year, with the help of Watt, McGrigor had successfully scaled up the process to bleach 1500 yards of linen.
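
In modern terms, Watt’s modification amounts to generating the hydrochloric acid in situ from salt and sulphuric acid; a plausible overall equation – again written in today’s notation as an illustration, not as it would have been understood at the time – is:

$$2\,\mathrm{NaCl} + \mathrm{MnO_2} + 2\,\mathrm{H_2SO_4} \rightarrow \mathrm{Na_2SO_4} + \mathrm{MnSO_4} + \mathrm{Cl_2} + 2\,\mathrm{H_2O}$$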

Meanwhile, two Frenchmen – Antoine Bourboulon de Boneuil and Matthew Vallet – had arrived in Lancashire from Paris, where they had developed a proprietary bleaching solution – “Lessive de Javelle” – which built on Berthollet’s work (without his involvement or approval). This probably used the method of dissolving the chlorine in a solution of sodium hydroxide, which absorbs more of the gas than pure water does. The result is a solution of sodium hypochlorite, like the everyday “thin bleach” on today’s supermarket shelves. In 1788 Bourboulon petitioned Parliament to grant the pair an exclusive 28-year licence for the process (a longer period than a regular patent). This caused some controversy and was strongly opposed by the Lancashire bleachers, and it placed James Watt in an awkward position. Naturally he opposed the proposal, but he didn’t want to do so too publicly, as his own, very broad, patent (with Matthew Boulton) for the steam engine had been extended by Act of Parliament in 1775, leading to lengthy litigation. Nonetheless, after the intervention of Berthollet himself and the growing awareness of the new science of chemical bleaching in the industrial community, Bourboulon only succeeded in obtaining patents for relatively restricted aspects of his process, which were easily evaded by other operations.
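
In modern notation, the reaction that probably underlay “Lessive de Javelle” – chlorine absorbed in caustic soda to give sodium hypochlorite – can be written as follows (an illustration, not a formula given in the sources):

$$\mathrm{Cl_2} + 2\,\mathrm{NaOH} \rightarrow \mathrm{NaOCl} + \mathrm{NaCl} + \mathrm{H_2O}$$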

Claude-Louis Berthollet’s position in this was important, as his priority in discovering the basic principles of chlorine bleaching was universally accepted. But Berthollet was an exponent of the principles of what would now be called “open science” and consciously repudiated any opportunities to profit from his inventions – as he wrote to James Watt, “I am very conscious of the interest that you take in a project which could be advantageous to me; but to return to my character, I have entirely renounced involvement in commercial enterprises. When one loves science, one has little need of fortune, and it is so easy to expose one’s happiness by compromising one’s peace of mind and embarrassing oneself”. Watt was clearly frustrated by Berthollet’s tendency to publish the results of his experiments, which often included rediscovering the improvements that Watt himself had made.

But by this stage, any secrets were out, and other Manchester industrialists, together with a new breed of what might be called consulting chemists, who kept up with the latest scientific developments in France and England, were experimenting and developing the processes further. Their goals included driving down the cost, increasing the scale of operations, and particularly improving their reliability – it was all too easy to ruin a batch of cloth by exposing it too long or using too strong a bleaching agent, or to poison the workmen with a release of chlorine gas. In fact, one shudders to think about the health and safety record and environmental impact of these early developments. Even by 1795, it still wasn’t always clear that the new methods were cheaper than the old ones, particularly in the case of linen, which was significantly more difficult to bleach than cotton. Despite the early introduction of “Lessive de Javelle”, the stability of bleaching fluids was a problem, and most bleachers preferred to brew up their own as needed, guided by lots of practical experience and chemical knowledge.

Bleaching probably could never be made entirely routine, but the next big breakthrough was to create a stable bleaching powder which could be traded, stored and transported, and could be incorporated into a standardised process. Some success had already been achieved by absorbing chlorine in lime. The definitive process to make “bleaching powder” by absorbing chlorine gas in damp slaked lime (calcium hydroxide), to produce a mixture of calcium hypochlorite and calcium chloride, was probably developed by the Scottish chemist Charles Macintosh (more famous as the inventor of the eponymous raincoat). The benefits of this discovery, though, went to Macintosh’s not wholly trustworthy business partner, Charles Tennant, who patented the material in 1799.
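
Idealising bleaching powder as the mixture of calcium hypochlorite and calcium chloride described above, the overall reaction with slaked lime can be written in modern notation (again only as an illustration; the real product is better described as a more complex mixed salt):

$$2\,\mathrm{Ca(OH)_2} + 2\,\mathrm{Cl_2} \rightarrow \mathrm{Ca(OCl)_2} + \mathrm{CaCl_2} + 2\,\mathrm{H_2O}$$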

What are the lessons we can learn from this episode? It underlines the importance of industrial chemistry, an aspect of the industrial revolution that is perhaps underplayed. It’s a story in which frontier science was being developed at the same time as its industrial applications, with industrialists understanding the importance of being linked in with international networks of scientists, and organisations like the Manchester Literary and Philosophical Society operating as important institutions for diffusing the latest scientific results. It exposes the tensions we still see between open science and the protection of intellectual property, and the questions of who materially benefits from scientific advances.

Into the nineteenth century, the textile industry continued to be a major driver of industrial chemistry – the late 18th century had already seen the introduction of the Leblanc process for making soda ash, and the nineteenth century saw the massive impact of artificial dyes. These developments influence the industrial geography of England’s northwest to this day.

[1] When Berthollet discovered chlorine, it was in the heyday of the phlogiston theory, so, not appreciating that what he’d discovered was a new gaseous element, he called it “dephlogisticated muriatic acid” (muriatic acid being an old name for hydrochloric acid). As Lavoisier’s oxygen theory became more widely accepted, the gas became known as “oxymuriatic acid”. It was only in 1810 that Humphry Davy showed that chlorine contains no oxygen, and is in fact an element in its own right. Phlogiston has a bad reputation as a dubious pre-scientific relic, but it was a rational way of beginning to think about oxidation and reduction, and the nature of heat, giving a helpful guide to experiments – including the ones that eventually showed that the concept was unsustainable.

[2] It’s interesting to ask why there was an existing trade in manganese dioxide. This mineral had been used since prehistory as a black pigment, and is unusual as a strong oxidising agent that is widely found in nature. In Derbyshire it occurs as an impure form known to miners as “wad”; when mixed with linseed oil (as you would do to make a paint) it occasionally has the alarming property of spontaneously combusting. This was recorded in a 1783 communication to the Royal Society by the renowned potter Josiah Wedgwood, who ascribed the discovery to a Derby painter called Mr Bassano, and reported seeing experiments showing this property at the house of the President of the Royal Society, Sir Joseph Banks. Spontaneous combustion isn’t a great asset for a paint, but at lower loadings of manganese dioxide a less dramatic acceleration of the oxidation of linseed oil is useful in making varnish harden more quickly, and it was apparently this property that led to its widespread use in paints and varnishes, particularly for ships in the great expansion of the British Navy at the time. Purer deposits of manganese dioxide were found in Devon, and subsequently in North Wales, as the bleach industry increased demand for the mineral further. The material gained even more importance following Robert Mushet’s work on iron-manganese alloys – it was the incorporation of small amounts of manganese that made the Bessemer process for the first truly mass-produced steel viable.

[3] Sources: this account relies heavily on “Science and Technology in the Industrial Revolution”, by A. E. Musson and E. Robinson. For wad, “Derbyshire Wad and Umber”, by T.D. Ford, Mining History 14 p39.

Edited 23/8/21 to make clear that Bourboulon’s petition to Parliament was for a longer period of exclusivity than a standard patent. My thanks to Anton Howes for pointing this out.

Reflections on the UK’s new Innovation Strategy

The UK published an Innovation Strategy last week; rather than a complete summary and review, here are a few of my reflections on it. It’s a valuable and helpful document, though I don’t think it’s really a strategy yet, if we expect a strategy to give a clear sense of a destination, a set of plans to get there and some metrics by which to measure progress. Instead, it’s another milestone in a gradual reshaping of the UK’s science landscape, following last year’s R&D Roadmap, and the replacement of the previous administration’s Industrial Strategy – led by the Department for Business, Energy and Industrial Strategy – by a Treasury-driven “Plan for Growth”.

The rhetoric of the current government places high hopes on science as a big part of the UK’s future – a recent newspaper article by the Prime Minister promised that “We want the UK to regain its status as a science superpower, and in so doing to level up.” There is a pride in the achievements of UK science, not least in the recent Oxford Covid vaccine. And yet there is a sense of potential not fully delivered. Part of this is down to investment – or the lack of it: as the PM correctly noted: “this country has failed for decades to invest enough in scientific research, and that strategic error has been compounded by the decisions of the UK private sector.”

Last week’s strategy focused, not on fundamental science, but on innovation. As the old saying goes, “Research is the process of turning money into ideas, innovation is turning ideas into money” – and, it should be added, other desirable outcomes for the nation and society – the necessary transition to zero carbon energy, better health outcomes, and the security of the realm in a world that feels less predictable. But the strategy acknowledges that this process hasn’t been working – we’ve seen a decline in productivity growth that’s unprecedented in living memory.

This isn’t just a UK problem – the document refers to an apparent international slowing of innovation in pharmaceuticals and semiconductors. But the problem is worse in the UK than in comparator nations, and the strategy doesn’t shy away from connecting that with the UK’s low R&D intensity, both public and private: “One key marker of this in the UK is our decline in the rate of growth in R&D spending – both public and private. In the UK, R&D investment declined steadily between 1990 and 2004, from 1.7% to 1.5% of GDP, then gradually returned to be 1.7% in 2018. This has been constantly below the 2.2% OECD average over that period.”

One major aspiration that the government is consistent about is the target to increase total UK investment in R&D (public and private) to reach 2.4% of GDP by 2027, from its current value of about 1.7%. As part of this there is a commitment to increase public spending from £14.9 bn this year to £22 bn – by a date that’s not specified in the Innovation Strategy. An increase of this scale should prompt one to ask whether the institutional landscape where research is done is appropriate, and the document announces a new review of that landscape.

Currently the UK’s public research infrastructure is dominated by universities to a degree that is unusual amongst comparator nations. I’m glad to see that the Innovation Strategy doesn’t indulge in what seems to be a widespread urge in other parts of government to denigrate the contribution of HE to the UK’s economy, noting that “in recent years, UK universities have become more effective at attracting investment and bringing ideas to market. Their performance is now, in many respects, competitive with the USA in terms of patents, spinouts, income from IP and proportion of industrial research.” But it is appropriate to ask whether other types of research institution, with different incentive structures and funding arrangements, might be needed in addition to – and to make the most of – the UK’s academic research base.

But there are a couple of fundamentally different types of non-university research institutions. On the one hand, there are institutions devoted to pure science, where investigators have maximum freedom to pursue their own research agendas. Germany’s Max Planck Institutes offer one model, while the Howard Hughes Medical Institute’s Janelia Research Campus, in the USA, has some high profile admirers in UK policy circles. On the other hand, there are mission-oriented institutes devoted to applied research, like the Fraunhofer Institutes in Germany, the Industrial Technology Research Institute in Taiwan, and IMEC (the Interuniversity Microelectronics Centre) in Belgium. The UK has seen a certain amount of institutional evolution in the last decade already, with the establishment of the Turing Institute, the Crick Institute, the Henry Royce Institute, the Rosalind Franklin Institute, and the network of Catapult Centres, to name a few. It’s certainly timely to look across the landscape as it is now to see the extent to which these institutions’ missions and the way they fit together in a wider system have crystallised, as well as to ask whether the system as a whole is delivering the outcomes we want as a society.

There is one inescapable factor about the institutional landscape that is seriously underplayed here: what we have now is a function of the wider political and economic environment, and of the way that has changed over the decades. For example, there’s a case study in the Innovation Strategy of Bell Laboratories in the USA. This was certainly a hothouse of innovation in its heyday, from the 1940’s to the 1980’s – but that reflected its unique position, as a private sector laboratory that was sustained by the monopoly rents of its parent. That changed with the break-up of the Bell System in the 1980’s, itself a function of the deregulatory turn in US politics at the time, and the institution is now a shadow of its former self. Likewise, it’s impossible to understand the drastic scaling back of government research laboratories in the UK in the 1990’s without appreciating the dramatic policy shifts of governments in the 80’s and 90’s. A nation’s innovation landscape reflects wider trends in political economy, and that needs to be understood better and the implications made more explicit.

With the Innovation Strategy was published an “R&D People and Culture Strategy”. This contains lots of aspirations that few would disagree with, but not much in the way of concrete measures to fix things. To connect this with the previous discussion, I would have liked to have seen much more discussion of the connection between the institutional arrangements we have for research, the incentive structure produced by those arrangements, and the culture that emerges. It’s a reasonable point to complain that people don’t move as easily from industry to academia and back as they used to, but it needs to be recognised that this is because the two have drifted apart; with only a few exceptions, the short term focus of industry – and the high pressure on academics to publish – makes this mobility more difficult. From this perspective, one question we should ask about our institutional landscape is whether it is the right one to allow the people in the system to flourish and fulfil their potential.

We shouldn’t just ask in what kind of institutions research is done, but also where those institutions are situated geographically. The document contains a section on “Levelling Up and innovation across the UK”, reasserting as a goal that “we need to ensure more places in the UK host world-leading and globally connected innovation clusters, creating more jobs, growth and productivity in those areas.” In the context of the commitment to increase the R&D intensity of the economy, “we are reviewing how we can increase the proportion of total R&D investment, public and private, outside London, the South East, and East of England.”

The big news here, though, is that the promised “R&D and Place Strategy” has been postponed and rolled into the forthcoming “Levelling Up” White Paper, expected in the autumn. If this does take the opportunity of considering in a holistic way how investments in transport, R&D, skills and business support can be brought together to bring about material changes in the productivity of cities and regions that currently underperform, that is not a bad thing. I was a member of the advisory group for the R&D and Place strategy, so I won’t dwell further on this issue here, beyond saying that I recognise many of the issues and policy proposals which that body has discussed, so I await the final “Levelling Up” White Paper with interest.

A strategy does imply some prioritisation, and there are a number of different ways in which one might define priorities. The Coalition Government defined 8 Great Technologies; the 2017 Industrial Strategy was built around “Grand Challenges” and “Sector Deals” covering industrial sectors such as Automotive and Aerospace. The current Innovation Strategy introduces seven “technology families” and a new “Innovation Missions Programme”.

It’s interesting to compare the new “seven technology families” with the old “eight great technologies”. For some the carry-over is fairly direct, albeit with some wording changes reflecting shifting fashions – robotics and autonomous systems becomes robotics and smart machines, energy and its storage becomes energy and environment technologies, advanced materials and nanotechnology becomes advanced materials and manufacturing, synthetic biology becomes engineering biology. At least two of the original 8 Great Technologies always looked more like industry sectors than technologies – satellites and commercial applications of space, and agri-science. Big data and energy-efficient computing has evolved into AI, digital and advanced computing, reflecting a genuine change in the technology landscape. Regenerative medicine looks like it’s out of favour, replaced in the biomedical area by bioinformatics and genomics. Quantum technology was appended to the “8 great” a year or two later, and this is now expanded to electronics, photonics and quantum.

Interesting though the shifts in emphasis may be, the key issue is the degree to which these high level priorities are translated into different outcomes in institutions and funding programmes. How, for example, are these priority technology families reflected in advisory structures at the level of UKRI and the research councils? And, most uncomfortable of all, a decision to emphasise some technology families must imply, if it has any real force, a corresponding decision to de-emphasise others.

One suspects that organisation through industrial sectors is out of favour in the new world where HM Treasury is in the driving seat; for HMT a focus on sectors is associated with incumbency bias, with newer fast-growing industries systematically under-represented, and producer capture of relevant government departments and agencies, leading to a degree of policy attention that reflects a sector’s lobbying effectiveness rather than its importance to the economy.

Despite this colder new environment, the ever-opportunistic biomedical establishment has managed to rebrand its sector deal as a “Life Sciences Vision”. The sector lens remains important, though, because industrial sectors do face their own individual issues, all the more so at a time of rapid change. Successfully negotiating the transition to electric vehicles represents an existential challenge to the automotive sector. For the persistently undervalued chemicals sector, withdrawal from the EU regulatory framework – REACH – threatens substantial extra costs and frictions, while the transition to net zero presents both a challenge for this energy-intensive industry and a huge set of new potential markets as the supply chains for new clean-tech industries like batteries are developed.

One very salutary clarification has emerged as a side-effect of the pandemic. The vaccination programme can be held up as a successful exemplar of an “innovation mission”. This emphasises that a “mission” shouldn’t just be a vague aspiration, but a specific engineering project with a product at the end of it – with a matching social infrastructure developed to ensure that the technology is implemented to deliver the desired societal outcome. Thought of this way, a mission can’t just be about discovery science – it may need the development of new manufacturing capacity, new ICT systems, repurposing of existing infrastructures. Above all, a mission needs to be executed with speed, decisiveness, and a willingness to spend money in more than homeopathic quantities, characteristics that aren’t strongly associated with recent UK administrations.

What further innovation missions can we expect? It isn’t characterised in these terms, but the project to build a prototype fusion power reactor – the “Spherical Tokamak for Energy Production” – could be thought of as another one. By no means guaranteed to succeed, it would be a significant development if it did work, and in the meantime it will probably support the spinning out of a number of potentially important technologies for other applications, such as new materials for extreme environments, and further developments in robotics.

Who will define future “innovation missions”? The answer seems to be the new National Science and Technology Council, to be chaired by the Prime Minister and run by the government’s Chief Scientific Advisor, Sir Patrick Vallance, given an expanded role and an extra job title – National Technology Adviser. In the words of the Prime Minister, “It will be the job of the new National Science and Technology Council to signal the challenges – perhaps even to specify the breakthroughs required – and we hope that science, both public and commercial, will respond.”

But there is still a lot to fill in about the mechanisms by which this will work. How will the NSTC make its decisions – and who will be informing those discussions? And how will those decisions be transmitted to the wider innovation ecosystem – government departments and their delivery agencies like UKRI, with its component research councils and the innovation agency Innovate UK? There is a new system emerging here, but the way it will be wired is as yet far from clear.