AI and the problems of protein folding

The problem of predicting protein structure from sequence has been definitively solved by the AI programme AlphaFold, winning a well-deserved Nobel prize for its developers. But structure prediction is just one of at least four different problems of protein folding. Here I introduce all four: protein structure prediction itself, the nature of the protein folding transition, the role of proteins that don’t fold at all, and the importance of protein misfolding, particularly for diseases like Alzheimer’s disease.

The most important contributions made so far by machine learning and artificial intelligence to science are unquestionably DeepMind’s AlphaFold programmes for protein structure prediction, for which Demis Hassabis & John Jumper won the Nobel prize in chemistry in 2024 (shared with David Baker, for closely related work). Proteins are linear macromolecules; each type of protein has a unique one-dimensional sequence of amino acids. For many proteins, this 1d sequence encodes a unique three-dimensional structure, and it’s this 3d structure which underpins the function of the protein in the operations of the living cell. AlphaFold takes the 1d sequence of a protein and predicts the 3d structure. This is the problem of protein structure prediction, outstanding for half a century, now definitively solved by AI.

The way in which the 1d information in the protein sequence is converted to the 3d information in the structure is known as the problem of protein folding. In part, this is a problem of information – how the 1d amino acid sequence, which is itself a direct mapping of the genetic code stored on the sequences of DNA that constitute a gene, is mapped onto a single three-dimensional structure, the native state, in which the relative positions in space of each of the amino acids along the chain are uniquely specified.

It’s this problem of information that AlphaFold has solved – if you know the sequence of a previously unknown protein, AlphaFold will give you a prediction for its structure.  This is useful because the sequence is easy and cheap to determine, but it’s time-consuming and hard to measure the 3d structure.  It’s important because it should help the design of new drugs and vaccines. For example, if one knows the shape of particular proteins in pathogenic viruses or bacteria, one can design molecules that bind to those proteins to stop them working properly.

A protein in its native state. The enzyme alpha-amylase, which converts starch into glucose. Left: a space-filling rendering of the molecule by David Goodsell, from the Protein Data Bank’s Molecule of the Month, https://pdb101.rcsb.org/motm/74, CC-BY-4.0 license. Right: a schematic diagram showing helical regions and regions of beta-sheet (broad arrows). Image from the RCSB Protein Data Bank (RCSB.org) of PDB ID 1PPI, https://www.rcsb.org/structure/1PPI. Data: M. Qian et al. (1994) Biochemistry 33: 6284-6294

But there’s also a physical problem of protein folding – how does an unfolded protein molecule, a loose, random coil, constantly changing shape as it is buffeted by Brownian motion, find its way through an astronomically large number of possible arrangements to the unique native state which is needed for it to fulfil its biological function?
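The scale of that search can be illustrated with a famous back-of-the-envelope estimate (Levinthal’s paradox). The numbers in the sketch below – a 100-residue chain, three accessible conformations per residue, and a deliberately generous sampling rate – are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope estimate of the size of a protein's conformational
# search space (Levinthal's paradox). All numbers are illustrative assumptions.
n_residues = 100                  # a modest-sized protein
states_per_residue = 3            # accessible backbone conformations per residue
sampling_rate_per_s = 1e13        # generously fast: 10 trillion conformations/second
seconds_per_year = 3.15e7

total_conformations = states_per_residue ** n_residues
years_to_search = total_conformations / sampling_rate_per_s / seconds_per_year

print(f"{total_conformations:.1e} conformations to search")   # ~5e47
print(f"{years_to_search:.1e} years to enumerate them all")   # ~1.6e27 years
```

Even with these generous assumptions, a random search would take vastly longer than the age of the universe, yet real proteins fold in milliseconds to seconds – so folding cannot be an unguided search through all possible arrangements.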

AlphaFold is a deep learning programme – it’s trained to find correlations between protein structure and sequence from large experimental datasets. It uses two datasets: one consists of 100,000+ proteins of known sequence whose structures have been experimentally determined. The other, much larger, dataset compares the sequences of homologous proteins from different species, whose structures are likely to be similar. But the physical aspect of the protein folding problem – understanding the nature of the protein folding transition, and the pathway the molecule must take to arrive at a single structure – isn’t addressed by AlphaFold.

A good starting point for thinking about the physical protein folding problem is to recognise that one can divide up the 20 amino acids that proteins are made from into two rough categories – hydrophobic and hydrophilic.  It’s easy to understand why a protein molecule in water would arrange itself in a globule with the hydrophobic groups in the middle, protected from contact with water by a layer of hydrophilic (often charged) groups. This would be like a single molecule version of a soap micelle.  

But if this was all there was to it, there wouldn’t be a single native state – there are likely to be many possible structures with the hydrophobic groups in the middle and the hydrophilic groups on the outside. For a protein to be well-folded, the native state must be a single state with the lowest possible energy (free energy, to be accurate).

At a qualitative level, at least, a good understanding of the nature of the protein folding transition has been achieved through the use of computer simulations.  There isn’t enough computer power to simulate a protein molecule of any size realistically, but one can make progress with highly simplified models.  The key insight from this kind of work is that the property of foldability – the existence of a single native state, and of pathways to find that state from many starting points – is not guaranteed.  Foldability is itself an evolved property.
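To give a flavour of these simplified models, here is a minimal sketch of one of the best-known, the two-dimensional HP lattice model, in which each residue is classed simply as hydrophobic (H) or polar (P), and a conformation’s energy is minus the number of hydrophobic contacts. The implementation and the toy sequences are my own illustrations, not taken from any particular study; for very short chains one can enumerate every conformation exactly and ask whether the lowest-energy state is unique:

```python
# A toy 2D HP lattice model: each residue is hydrophobic (H) or polar (P),
# conformations are self-avoiding walks on the square lattice, and the energy
# is -1 per contact between H residues that are not adjacent along the chain.

def conformations(n):
    """All self-avoiding walks of n residues, one per symmetry class
    (first bond fixed along +x; first excursion off the x-axis forced upwards)."""
    walks = []
    def grow(path):
        if len(path) == n:
            walks.append(tuple(path))
            return
        x, y = path[-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            step = (x + dx, y + dy)
            if step not in path:          # self-avoidance
                grow(path + [step])
    grow([(0, 0), (1, 0)])
    def upward_first(walk):               # discard mirror-image duplicates
        for _, y in walk:
            if y != 0:
                return y > 0
        return True
    return [w for w in walks if upward_first(w)]

def energy(seq, walk):
    """Minus the number of H-H contacts between non-bonded residues."""
    index_at = {p: i for i, p in enumerate(walk)}
    contacts = 0
    for i, (x, y) in enumerate(walk):
        if seq[i] != "H":
            continue
        for nbr in ((x + 1, y), (x, y + 1)):   # each lattice contact counted once
            j = index_at.get(nbr)
            if j is not None and seq[j] == "H" and abs(i - j) > 1:
                contacts += 1
    return -contacts

for seq in ("HHHH", "HPHP"):
    walks = conformations(len(seq))
    energies = [energy(seq, w) for w in walks]
    ground = min(energies)
    print(seq, "ground-state degeneracy:",
          energies.count(ground), "of", len(walks), "conformations")
```

For the all-hydrophobic four-residue chain, the lowest-energy state is the unique tight turn that buries a single H-H contact; for the alternating sequence no hydrophobic contacts are possible at all, so every conformation is equally good. Even in this toy setting, whether a sequence “folds” to a unique state depends on the sequence – a miniature version of the insight that foldability is an evolved property.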

What about proteins that don’t fold, or fold wrongly? 

It’s long been known that some proteins don’t have a folded state – one example familiar in everyday life is casein, the main protein in milk that is so important in cheese-making. But one of the surprises of the last couple of decades is the discovery that a high proportion of proteins are either entirely disordered, or contain long regions that are disordered. Intrinsically disordered proteins have no native structure to be determined by classical techniques like X-ray diffraction, and this perhaps is one of the reasons why their importance was neglected for so long.

These intrinsically disordered proteins, and proteins with large intrinsically disordered regions, are particularly prevalent in eukaryotes, where they clearly have important functional roles. Around 30% of all proteins in human cells are disordered, with another 20% containing substantial intrinsically disordered regions. The importance of intrinsically disordered proteins is a challenge to traditional ways of thinking about the ways proteins work. The metaphor that’s often been used is of a lock and key – the idea being that the well-defined shape of a protein in its native state will have a cavity whose shape matches a molecule that binds to it. Molecular interactions involving disordered proteins must necessarily be more fluid and promiscuous than this; presumably this flexibility carries with it benefits, as well as creating considerable new complexity. But as of now, much remains unknown about how this might work.

The importance of protein misfolding has been understood for much longer. To give an everyday example, you can’t hatch a chicken from a hard-boiled egg. The major component of egg white is a protein called ovalbumin, which is present in egg white in a well-defined folded state. If one heats up an egg white, ovalbumin partially unfolds. But as everyone knows, when one cools the egg back down, one doesn’t recover the gloopy transparent liquid one started with – the egg white sets as a soft solid. What’s happened to the ovalbumin in the egg white is that, instead of each molecule folding individually back to its native state, the proteins link up with each other, forming structures called beta-sheets, in which strands from different protein molecules line up in parallel, bound to each other by hydrogen bonds.

The formation of intermolecular beta sheet is a very common way through which proteins misfold; there is a view that, when protein concentrations are high enough for the molecules to interact, these are the most stable states, more stable than the native state.  The resulting structures are very robust and difficult to undo; they are, in fact, quite closely analogous to the crystal structure of the synthetic polymer nylon, a structure which makes nylon a very strong and tough engineering polymer.  Sometimes this kind of misfolded protein forms a bit of a shapeless mess – as is the case with cooked egg white. But very often it takes a much more regular form, a fibre, in which parallel bundles of hydrogen bonded protein chains lie perpendicular to the axis of the fibre.  These are known as amyloid fibrils, and are notorious for their role in many human diseases.

An amyloid fibril, derived from material taken from the brain of a patient with Alzheimer’s disease. Left: a space-filling rendering of the molecule by David Goodsell, from the Protein Data Bank’s Molecule of the Month, https://pdb101.rcsb.org/motm/189, CC-BY-4.0 license. Right: a schematic diagram of a section of the fibril, showing strands of different protein chains linked together through beta-sheets (broad arrows) perpendicular to the axis of the fibril. Image from the RCSB Protein Data Bank (RCSB.org) of PDB ID 2M4J, https://www.rcsb.org/structure/2M4J. Data: J.X. Lu et al. (2013) Cell 154: 1257-1268

Diseases associated with protein misfolding include the transmissible prion diseases bovine spongiform encephalopathy and Creutzfeldt-Jakob disease, various types of amyloidosis, and, perhaps most significantly, neurodegenerative diseases like Parkinson’s disease and Alzheimer’s disease. It’s long been known that Alzheimer’s disease is associated with the formation of amyloid fibrils in the brain, but the mechanism through which misfolded proteins exert toxic effects is not yet known. The association of Alzheimer’s with amyloid fibrils has motivated a large number of drug candidates for the disease; the depressing (and expensive) failure of all these candidates to date suggests that we still have lots to learn about the mechanisms underlying the disease.

To summarise, there are at least four problems of protein folding.  The first, the prediction of 3d structure from 1d sequence, has been definitively solved by AlphaFold.  

For the second, on the nature of the transition between unfolded and folded states, we have some key concepts in place from computer simulation of coarse-grained models, such as the importance of smooth folding pathways, and the idea that foldability is itself an evolved property of proteins. 

The third problem has emerged more recently – it is motivated by the discovery that many proteins – especially in more complex organisms – don’t fold at all, or have significant regions that are intrinsically disordered.  We don’t really know what functions this intrinsic disorder enables, or how those functions are carried out.

The fourth problem is of longer standing – and in some ways we know less now than we thought we did twenty years ago. It concerns the causes and consequences of proteins that don’t fold correctly – and in particular the structures that involve multiple protein molecules binding together, typically in the form of fibrils. We know these are associated with a number of serious, often incurable, diseases – but we are still uncertain about the mechanisms at play, and we don’t know how to cure them.

There remain many open problems in connection with protein folding; AI, having solved the problem of predicting structure from sequence, will no doubt contribute to the solution of these other problems.  But there is much new biology – and new physics – that needs to be understood, as well as a continuing need to generate the data that AI needs to operate on.

UK science policy in transition

The way the UK government funds science is currently in the midst of a major transition, with the creation of a much more direct link between the priorities of the government of the day and the kind of research that it funds.  A few months ago I wrote about the likely prospect of a breakdown of a long period of consensus in UK science policy – UK Science in a post-liberal world.  I’m not sure whether the current changes are best thought of as the first manifestation of this breakdown of consensus, or as an attempt to make those changes in the system that are necessary to preserve it.  Here I make a first attempt to set these changes in context.  

Some history

UK governments have recognised the need for the State to fund scientific research since the late 19th century, and some of the principles underpinning that were articulated early in the 20th century. One innovation of that period was the Research Council – conceived as a body standing slightly apart from government, largely managed by expert scientists.  The first of these was the Medical Research Council, established in 1920 as a body incorporated by a Royal Charter.  Subsequently, other research councils, covering other fields of science – and social science and the humanities – were established on the same principles, and various reorganisations have taken place, but the basic model remained in place until 2017.

It is important, however, to understand that for most of this period the research supported by Research Councils amounted to only a small fraction of total government R&D.  Most of this took place with the direct support of government departments, such as those responsible for agriculture, for defence and military procurement, and for atomic energy, often in government research laboratories.  Going into the 1980’s, when the UK was one of the most R&D intensive countries in the world, less than 15% of government funded R&D was supported by the research councils.

Continue reading “UK science policy in transition”

Rock climbing and the economics of innovation (revisited)

The rock-climber Alex Honnold is in the news again, thanks to his live, televised ascent of a skyscraper in Taiwan.  This gives me an excuse to recycle this post from October 2019.  Here I explain that just because Honnold climbs without a rope, that doesn’t mean that his achievement doesn’t rely on technological progress over many decades, contrary to the claim of a well-known economist.

The rock climber Alex Honnold’s free, solo ascent of El Capitan is inspirational in many ways. For economist John Cochrane, watching the film of the ascent has prompted a blogpost: “What the success of rock climbing tells us about economic growth”. He concludes that “Free Solo is a great example of the expansion of ability, driven purely by advances in knowledge, untethered from machines.” As an amateur in both rock climbing and innovation theory, I can’t resist some comments of my own. I think it’s all a bit more complicated than Cochrane thinks. In particular his argument that Honnold’s success tells us that knowledge – and the widespread communication of knowledge – is more important than new technology in driving economic growth doesn’t really stand up.

The film “Free Solo” shows Honnold’s 2017 ascent of the 3000 ft cliff El Capitan, in the Yosemite Valley, California. The climb was done free (i.e. without the use of artificial aids like pegs to make progress), and solo – without ropes or any other aids to safety. How come, Cochrane asks, rock climbers have got so much better at climbing since El Cap’s first ascent in 1958, which took 47 days, done with “siege tactics” and every artificial aid available at the time? “There is essentially no technology involved. OK, Honnold wears modern climbing boots, which have very sticky rubber. But that’s about it. And reasonably sticky rubber has been around for a hundred years or so too.”

Hold on a moment here – no technology? I don’t think the history of climbing really bears this out. Even the exception that Cochrane allows, sticky rubber boots, is more complicated than he thinks. Continue reading “Rock climbing and the economics of innovation (revisited)”

Anglofuturism and the Shock of the Old

As the UK endures the second decade of its crisis of economic stagnation, a loose group of commentators, activists and think-tanks have emerged to argue that this stagnation isn’t inevitable, and to call for more houses and infrastructure to be built, for energy to be cheaper and more abundant, and for a restoration of the technological optimism of earlier times. It’s not an entirely homogeneous movement – some call themselves “Anglofuturists”, others organise under the banners of “progress” and “abundance”. As I wrote a year ago in my piece “Taking Anglofuturism seriously”, I am sympathetic to some of the goals of this movement. I agree that our economic stagnation isn’t inevitable and that the UK’s physical infrastructure needs upgrading, I regret the failure of recent new nuclear build plans, and I think that technological innovation is a key driver of productivity growth. Yet to me there seems to be a gap in the movement between willing the ends and identifying the means, with the suggested remedy all too often coming down simply to calls to deregulate more and reform the planning laws.

There is perhaps a lesson from history here, emphasised by some comments the historian David Edgerton made in a podcast last week.  The kind of nation that Anglofuturists call for looks rather like what was delivered by post-war British governments between 1950 and 1980.  Then, the UK was one of the most R&D intensive economies in the world, with a cross-party consensus that technological innovation would deliver economic growth.  Despite persistent national soul-searching about a ruling-class trained in the humanities, a number of scientists and engineers rose to powerful and influential positions.  The world’s first nuclear power station was designed and built in just four years, following which there was a large-scale roll out of nuclear power stations. A national capability for launching satellites was developed (and subsequently abandoned).  This period saw the construction of most of our current motorway network, and, as my plot shows, new houses were built at a rate that has never since been matched.  In this sense there is a certain retro quality to Anglofuturism, a harking back to a time when the UK seemed to look to the future with technological self-confidence.

Continue reading “Anglofuturism and the Shock of the Old”

The decline of UK industry wasn’t caused by high energy prices, but they’re a big problem now, for what’s left of it

Energy prices in the UK have dropped back from the heights they reached following Russia’s invasion of Ukraine in 2022, but they remain high, both by historical standards, and in comparison with other nations. This is undoubtedly putting big strain on energy-intensive industries like chemicals and steel production, in turn putting pressure on the UK’s wider manufacturing sector.  Manufacturing is already a smaller part of the economy in the UK compared to other nations, as I discussed here a few months ago.

Left axis: manufacturing share of the UK economy by GVA.  Dashed line: Data from Bank of England, Millennium of Macro Data.  Dotted line: Data from ONS 2025 Blue Book.  Right axis: Index of industrial energy costs, excluding climate change levy, corrected for inflation with GDP deflator. Source:  DESNZ.

Yet it would be a mistake to blame high energy prices for the historical decline in the importance of manufacturing. My plot compares the manufacturing share of the economy by value with an index of the real cost of energy for industry. It’s clear that manufacturing has been declining in importance in the economy for more than half a century. What’s interesting, though, is that the decade with the steepest relative drop was between 1995 and 2005. Far from being a period of high energy prices, this was a period when energy prices were low and falling.

Continue reading “The decline of UK industry wasn’t caused by high energy prices, but they’re a big problem now, for what’s left of it”

Putting fusion power on the UK grid

The UK government has a very ambitious plan for nuclear fusion, which I don’t think is widely enough known about.  The plan is to build a pilot nuclear fusion plant able to deliver electrical power to the grid by 2040 – the Spherical Tokamak for Energy Production (STEP).  The project was launched in 2019, and the current government has guaranteed funding for it at the very significant level of £500m a year for five years. 

At a time when many people from different political positions agree that a big problem of the UK state is its inability to deliver big projects, this is a huge investment to build state technological capacity.  

This post is a brief introduction to the STEP project.  Nuclear fusion does generate some reflexive scepticism – we all know the jokes: “it’s twenty years in the future, and always will be”. I want to get beyond that, while still being realistic about the huge challenges this programme faces. I’ll describe some of the technological and engineering issues, and the approaches being proposed to overcome them.

Continue reading “Putting fusion power on the UK grid”

The enduring appeal of superintelligence, superabundance, and eternal life

On More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, by Adam Becker.

“More Everything Forever” is well-written, scientifically authoritative, and quite fair to the protagonists, who get plenty of space to speak for themselves. But it’s decisive and convincing in its conclusions: some of the richest and most influential men in the world are motivated by a set of beliefs that are frankly unhinged – and it’s on the basis of these beliefs that resources on a huge scale are currently being allocated.

In 2005, I was invited to a “Foresight Vision Weekend” in San Francisco, to talk about my view that nanotechnology should be more like biology than the mechanical engineering-inspired vision of Eric Drexler. There, I met some of the luminaries of transhumanism – Eric Drexler, Aubrey de Grey, Josh Hall, Ralph Merkle – but despite the general cordiality of my reception, it was uncomfortable to be amongst a large congregation whose belief system I didn’t share. The prevailing view was very much that a brisk engineering approach would soon lead, via Drexlerian nanotechnology, to a world of extraordinary material abundance, in which disease and old age would have been eliminated, and humanity would have merged with, or been surpassed by, intelligent machines. Accelerating technological change was about to change everything – “The Singularity Is Near”, to quote the title of an influential and best-selling book by Ray Kurzweil. What struck me was the range of participants at the meeting – some were the kind of enthusiasts dismissed at the time as “bloggers in their mothers’ basements”, but amongst them were dot com millionaires, senior figures from the military-industrial complex, and the odd congressman.

It’s this world, 20 years on, that Becker describes – a world now more influential and politically dominant than I could ever have imagined. Where in 2005 there were dot com millionaires, now there are individuals commanding hundreds of billions of dollars, directly influencing the most powerful government in the world; and where in 2005 there were congressmen, now there is a Vice-President. The “Silicon Valley worldview” is approaching hegemonic status amongst the people who matter … or think they ought to be the only people who matter. And the Singularity remains at the centre of this world view – what Becker calls “the ideology of technological salvation”.

This ideology comes with variations, but all of them have in common three principles. They assume that all human problems can be reduced to problems of technology, and that solving those technological problems will be immensely profitable. But profit is not enough – an important feature of these worldviews is that they offer their adepts transcendence through technology, allowing them to break through human limitations like ageing and death [1].

Continue reading “The enduring appeal of superintelligence, superabundance, and eternal life”

The Year in Soft Machines

The Soft Machines blog has been going for more than twenty years, I’m astonished to say. It’s good to see a substantial increase in the number of readers in 2025’s later months – no doubt helped by the fact that, with a bit more time on my hands, I’ve been writing a bit more regularly. For the benefit of new readers and old, here’s a review of some of the year’s posts, set in the context of some of this blog’s recurring themes.

The UK’s productivity and economic growth problem

The UK’s continuing economic stagnation remains a preoccupation, unfortunately. A recent post presents the most recent data for GDP per capita, showing that the country is around 30% worse off than if the pre-2008 trend had continued. Such a dramatic change in economic fortunes must have a cause – or causes. Stating what should be obvious, but seemingly isn’t to many commentators, I insist that the causes must precede the big break in 2008, and that there may be long lags between cause and effect. But one can always make things worse with subsequent bad decisions.

The UK’s continuing economic growth crisis

Fundamentally, our economic problems are problems of productivity growth – or lack of it. I’ve been writing about this for about a decade, with a post from earlier in the year summarising some of the arguments:

Ten Years of Banging on about Productivity

Why does this matter? From the government’s perspective, projections of future productivity growth make a big difference to how much public spending can grow, or how much taxes have to rise, to keep the government within its fiscal rules. The role of the Office for Budget Responsibility in making forecasts is key here, but its record in predicting future productivity growth is frankly risible, as I discussed in the context of the Spring Statement:

Why productivity growth is important – Spring Statement 2025 Edition

Productivity and GDP per capita are technical concepts, so it might be thought that these issues aren’t relevant to people’s everyday lives. Nothing could be further from the truth – the slowdown in productivity is directly reflected in people’s earnings, shown dramatically in this plot from:

The End of Wage Growth in the UK

Average real weekly UK wages. Green: Composite Average Weekly Earnings series, corrected for inflation using consumer prices index. Thomas, R and Dimsdale, N (2017) “A Millennium of UK Data”, Bank of England OBRA dataset. Brown: ONS, Real Average Weekly Earnings, total pay, using CPI (seasonally adjusted). 18/2/2025 release.

Everything that’s wrong with politics and economics in the UK can be traced back to stagnating productivity.

Towards economic growth, energy and progress

Is this economic stagnation inevitable? I don’t think so – I believe it to be the result of policy choices the country has made, and different choices are possible. I welcome a growing movement of commentators and think-tanks exploring concrete policy ideas to break the stagnation, though I don’t always agree with their priorities. At the end of last year, I wrote what I hope comes across as a sympathetic critique of one strand of thought –

Taking Anglofuturism Seriously

One theme that is at the centre of much of this kind of writing prioritises cheap, abundant energy, with a new roll-out of nuclear power put centre-stage. I’m in sympathy with this, though I don’t think the analysis of the recent failure of the UK to build new nuclear power stations goes far enough. In 2014, the government planned to build 18 GW of new nuclear power; as I write, none has been delivered, and only 3.2 GW is under construction. Much emphasis is placed on the need to remove regulatory barriers; this in my view is necessary, but not sufficient: more thought needs to be given to how to rebuild national capabilities, as I argue here:

Ownership, Control, National capability: learning lessons from the UK’s nuclear new build debacle

Another recent feature of the UK economy is a rapid decline in the share of the economy accounted for by manufacturing – a decline shared by other developed economies, but which has been particularly large in the UK. Manufacturing now accounts for 8% of the UK economy; should we try to increase this? I think so, but it’s important to distinguish some good arguments for this from bad ones (and to recognise some uncertainties). Manufacturing matters for its potential for productivity growth – what’s important is the value it creates, not the jobs. Manufacturing capability is also important for national security, but realism is needed about the UK’s position, at less than 3% of the world’s high-tech economy – we need to aim for security, not autarky.

Good reasons and bad reasons for supporting manufacturing (and some uncertainties) 

On artificial intelligence

Inevitably, I have written about artificial intelligence. I don’t think anyone knows how this story is going to play out, least of all me, so back in May I sketched out three scenarios for the economic impact of AI:

1. Intelligence explosion – the Silicon Valley vision of AI entering a state of recursive self-improvement, leading to artificial general intelligence, and a winner takes all economy, in which the controllers of the new technologies enjoy unprecedented political and economic power.

2. Excel in prose – in which AI is understood as a powerful normal technology, whose applications lead to significant productivity gains across a number of sectors, but with a delay as business processes have to be adapted to make the most of the new technology.

3. Crash and burn – in which the revenues generated by applications of AI are disappointing, and can’t justify the huge capital investments that have been made in AI infrastructure. The subsequent bursting of a financial bubble leads to systemic damage to the world financial system and the real economy.

Writing in May, I described “Crash and burn” as a contrarian scenario, but in the last few months it seems to have become mainstream; one can’t open up the Financial Times app without coming across an AI Bubble article.

The economic impact of AI: three scenarios  

One aspect of the AI story that I think has been neglected is the state of the material base that underlies the technology – the integrated circuits that are used to train and run the AI models. For many decades, we came to rely on an exponential increase in computer power, arising from the miniaturisation of the circuit components expressed in Moore’s Law.

Moore’s Law is still invoked by commentators as a symbol of accelerating technological change, but in fact the rate of increase in raw computer power has slowed substantially over the last two decades. Available computer power for applications such as large language models is still increasing, but this increased power is coming less from miniaturisation, and more from software, specialised architectures optimised for particular tasks, and advanced packaging of chips.

Minimum transistor footprint (product of metal pitch and contacted gate pitch) for successive semiconductor process nodes. Data: 1994 – 2014 inclusive, Stanford Nanoelectronics Lab; post-2017 and projections, successive editions of the IEEE International Roadmap for Devices and Systems

In the classical heyday of Moore’s Law, from the mid 1980’s to the mid 2000’s, computer power grew at a rate of about 50% a year compounded, doubling roughly every twenty months. In this extraordinary period, there was more than a thousandfold cumulative increase over a couple of decades.
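The compounding arithmetic here is quickly checked (a sketch using the approximate 50% a year figure):

```python
import math

# Compounding arithmetic for the classical Moore's Law era, using the
# approximate figure of 50% a year growth in computer power.
annual_rate = 0.50
doubling_time = math.log(2) / math.log(1 + annual_rate)   # ~1.7 years
cumulative_20_years = (1 + annual_rate) ** 20             # ~3300x over two decades

print(f"doubling time: {doubling_time:.2f} years")
print(f"cumulative growth over 20 years: {cumulative_20_years:.0f}x")
```

At 50% a year, power doubles in a little under two years and grows more than three-thousandfold over two decades; the more familiar “doubling every two years” corresponds to a slightly slower rate of about 41% a year.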

Now, in contrast, it is not the supply of computer power that is increasing exponentially; we have an exponential increase in demand, while the increase in supply has more of a linear character.

Moore’s Law, past and future 

In “AI and the manufacturing firm of the future”, I ask how AI will change the world of manufacturing. Sam Altman, CEO of OpenAI, has written about a manufacturing singularity, with AGI-powered humanoid robots building factories to make more robots. I ask, as politely as I can, whether this vision reflects his lack of understanding of the material base of our industrial world, is a somewhat overheated metaphor, or is just bullshit (in Harry Frankfurt’s sense – i.e. an utterance whose intended effect is uncoupled from any truth value).

An alternative scenario is of AI driving process & system optimisation in increasingly automated factories. If Altman’s vision is driving strategies in the USA, I think the latter scenario is the one being aggressively pursued in China. We’ll see which is closer to reality.

AI and the manufacturing firm of the future 

UK science and university policy

Until my retirement at the end of September this year, it was very much part of my day job to think about science and university policy in the UK. UK universities have been under huge financial pressure in recent years, so some might be tempted to step back from their role in their communities. In this piece I argued that this would be a big mistake; instead, they should take even more seriously their role in supporting regional economies.

The civic university in hard times 

The next piece offers a much more personal view of the role of universities in their regions – it’s a retrospective on my time as Vice-President for Regional Innovation and Civic Engagement at the University of Manchester, reviewing the progress we have made working with partners in the city-region to realise the University’s potential to support Greater Manchester’s economy.

On leaving the University of Manchester

Finally, my most popular post of the year was this rather provocative piece: UK Science in a post-liberal world. Here, I argue that a multi-decade period of consensus in UK science policy is likely soon to come to an end, and that the UK’s research system must respond to a new focus on re-building, re-energising, re-arming and re-industrialising for a changed & hostile world.

UK Science in a post-liberal world 

Family matters

To turn to personal matters, my mother, Sheila Jones, died on October 31st this year, a little more than two years after the death of my father, Robbie Jones. I found it helpful to write these two pieces to celebrate their lives, and to reflect on where I have come from.

Sheila Howell Jones (1934 – 2025), Robert Cecil Jones (1932 – 2023)

On leaving the University of Manchester

This year marked the end of my full-time career as an academic – I retired from the University of Manchester at the end of September 2025. I was a lecturer at Cambridge University from 1989 to 1998, when I moved to the University of Sheffield. I was a professor of physics at Sheffield, and also, between 2009 and 2016, Pro-Vice-Chancellor for Research and Innovation. I moved to the University of Manchester in 2020, where latterly I have had the role of Vice-President for Regional Innovation and Civic Engagement. I was touched and honoured by the kind words spoken about me at an event to mark my retirement in September.  UoM President Duncan Ivison, Manchester City Council Chief Executive Tom Stannard, and the Chair of UoM’s Board of Governors Phillipa Hurd all spoke, and GM Mayor Andy Burnham sent a video message.  In response, I said something along these lines:

Thanks for all your kind words.  I’m conscious that I’ve only been at Manchester for 5 years, in contrast to many of you who have devoted a much longer time to the institution.

My career has taken me from Cornell, through Cambridge, to Sheffield (with quite a lot of time in Swindon, first on secondment to run the cross-council nanotechnology programme, then as EPSRC Council Member), and, as Duncan said, it’s taken a number of twists and turns – I often describe myself as a deviant physicist.  There’s been science – both blue skies and highly collaborative with industry, public engagement, science policy, and contributions to local economic development and attempts to influence national policy.

I think my time at Manchester has been a culmination of that career, where I’ve been able to bring together all those different strands in the service of a great university in a great city.

Continue reading “On leaving the University of Manchester”

The UK’s continuing economic growth crisis

Between 1955 and 2008, GDP per person in the UK grew at an overall rate of 2.3% a year. Periodic booms and the inevitable following recessions produced deviations, but before 2008, growth always returned to this steady trend.  That changed in the global financial crisis.  In the following 18 months, GDP per person fell by 11.4%.  In contrast to all previous recessions, growth never returned to the previous trend line.  Between 2010 and 2019, GDP per person returned to growth at the lower rate of 1.4% a year.  The shock of the Covid epidemic resulted in large parts of the economy being shut down, since when there has been a sputtering recovery.  

But the bottom line is that GDP per person is now £16,500 (29%) lower than it would have been if the 1955-2008 trend had continued.
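The cash figure and the percentage are two views of the same gap, and together they imply levels for the trend and actual GDP per person. A quick back-of-envelope check (the implied levels follow only from the two numbers quoted here, not from the ONS series itself):

```python
# The post-2008 shortfall: GDP per person is £16,500 below the
# pre-2008 trend line, and that gap is 29% of the trend level.
gap_pounds = 16_500
gap_fraction = 0.29

# Implied counterfactual (trend) and actual levels of GDP per person.
trend_level = gap_pounds / gap_fraction
actual_level = trend_level - gap_pounds

print(f"Implied trend level:  £{trend_level:,.0f}")   # £56,897
print(f"Implied actual level: £{actual_level:,.0f}")  # £40,397
```

That is, the two quoted figures are mutually consistent with GDP per person of roughly £40,000 today against a counterfactual of roughly £57,000 had the pre-crisis trend continued.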

UK GDP per person, 1955-2025

GDP per person for the UK, in real terms (reference year 2023).  Quarterly data from the ONS, annualised, 13/11/2025 release.

GDP measures the total value of goods and services produced by the economy, a value shared between wages and the returns to owners of capital. So GDP per capita is a good measure of how much of that value is available to an individual citizen, through the wages they earn, the return on their investments, and the public services that are available to them. Of course, not everyone gets an equal share. There is considerable inequality in incomes, though in recent years this has been relatively stable. Wealth represents claims on future GDP, and here the inequality is considerably greater than for incomes, and has been increasing substantially.

Continue reading “The UK’s continuing economic growth crisis”