I’ve long suspected that physical scientists have occasional attacks of biology envy, so I suppose I shouldn’t be surprised that the US government announced last year the “Materials Genome Initiative for Global Competitiveness”. Its aim is to “discover, develop, manufacture, and deploy advanced materials at least twice as fast as possible today, at a fraction of the cost.” There’s a genuine problem here – for people used to the rapid pace of innovation in information technology, the very slow rate at which new materials are taken up in new manufactured products is an affront. The solution proposed here is to use those very advances in information technology to boost the rate of materials innovation, just as (the rhetoric invites us to infer) the rate of progress in biology has been boosted by big data driven projects like the human genome project.
There’s no question that many big problems could be addressed by new materials. Building cars and aeroplanes from lighter and stronger materials, like carbon fibre composites, will reduce the energy intensity of transport. Electric cars won’t get very far unless we have better batteries – with much higher energy densities at lower cost – and this needs better materials. Better materials will be needed if we are to harness the energy from the sun on a useful scale, through better photovoltaics and artificial photosynthesis. Looking further ahead, what will stop us from exploiting nuclear fusion on the earth, through the scale-up of fusion reactors such as ITER, isn’t so much that we don’t understand enough nuclear physics, or even that we can’t control plasmas, but that we don’t have structural materials able to withstand the heat and radiation damage of a commercially useful fusion power reactor.
So what slows down the pace of materials innovation? One factor is that some of the sectors most in need of materials innovation are very highly regulated – for example civil aerospace and medicine. Some ideologues identify this regulation as the problem. I don’t think it would take many passenger aeroplanes falling out of the sky because of an unexpected bug in the way a new material behaves to discredit this point of view, though. A more fundamental problem is that when a new material is discovered, even if it potentially offers much higher performance than existing materials, it often can’t be fully exploited until people have worked out how to manufacture things with it. There’s a nice historical example from Sheffield – when Benjamin Huntsman invented the first process for mass-producing high quality steel around 1750, the local cutlery industries refused at first to take up the new material, because it was harder than traditional steel and more difficult to weld. Nowadays the replacement of aerospace alloys by carbon fibre composites has been slow; the natural anisotropy of composites makes parts harder to design, the most efficient manufacturing processes are quite different to the machining processes familiar for metals, and while glued joints can be very strong, their quality can be difficult to guarantee.
The proposed solution from the USA is “integrated computational materials engineering”, as described in a National Research Council report of that name. The idea is to move materials science out of the real world into the virtual world of computer simulation, integrating research relevant for all stages of the use of a material, including processing, manufacturing, use and recycling. So rather than waiting to make the material before trying out how to manufacture something from it, the idea is that you do these things in parallel, on a computer, invoking the ideas of big data and open innovation.
This all sounds very timely and convincing. But there are some very fundamental difficulties that make this much harder than it sounds. Even with the fastest computers, you can’t simulate the behaviour of a piece of metal by modelling what the atoms are doing in it – there’s just too big a spread of relevant length and timescales. If you want to study the way different atoms cluster together as you cast an alloy, you need to be concerned with picosecond times and nanometre lengths, but if you want to see what happens to a turbine blade made of that alloy in an operating jet engine, you’re interested in metre lengths and timescales of days and years (it is the slow changes in the dimensions and shape of materials in use – their creep – that often limit their lifetime at high temperatures). What’s needed to bridge this vast range of length and timescales is modelling the system at a hierarchy of different levels. For the smallest lengths and times you are interested in what the atoms themselves do; at larger scales there are coarser-grained phenomena – dislocations and grain boundaries, for example – that you might use as the basis of the simulation, while at even larger scales you can construct continuum mathematical models. The art of doing this right lies in connecting all the levels, so that you derive the properties of one level from the behaviour you simulate at the smaller and faster level below it, right down to the atoms (or indeed to the quantum mechanics that characterises the way atoms interact).
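To see just how hopeless the brute-force route is, here is a back-of-envelope sketch in Python. The numbers are my own illustrative choices (a roughly nickel-alloy atomic density, a femtosecond molecular-dynamics timestep, a fist-sized blade, a ten-year service life), not figures from any real simulation:

```python
# Back-of-envelope arithmetic for why you can't follow every atom in a
# turbine blade over its service life. All inputs are illustrative guesses.

ATOMS_PER_M3 = 8.5e28      # rough atomic number density of a nickel alloy
TIMESTEP_S = 1e-15         # typical molecular-dynamics timestep (1 fs)

blade_volume_m3 = 1e-4                   # ~0.1 litre of metal
service_life_s = 10 * 365 * 24 * 3600    # ten years in service

n_atoms = ATOMS_PER_M3 * blade_volume_m3
n_steps = service_life_s / TIMESTEP_S

# One "atom update" per atom per timestep, for a single brute-force run:
total_updates = n_atoms * n_steps
print(f"{n_atoms:.1e} atoms x {n_steps:.1e} steps = {total_updates:.1e} atom-updates")
```

With these inputs the count comes out somewhere around 10^48 atom-updates – tens of orders of magnitude beyond any conceivable computer, which is why the hierarchy of coarser models is unavoidable rather than merely convenient.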
This kind of multiscale modelling isn’t new – in the context of the behaviour of plastics, for example, it was exactly the aim of Masao Doi’s Octa project. To get it right one needs a very good understanding of the physics of the materials at the various length-scales, protocols for taking the results at the lower levels and converting them into the parameters that feed into the higher-level models, and above all an extensive programme of experimental checking of the predictions at all levels. One key question is how generic this process can be – how big are the classes of materials for which the basic physics looks the same? Within such a class, predictions for different materials would be available by simple changes of parameters, as opposed to having to start from scratch with a completely different set of models. For example, we have good reason to suppose that one set of models works for all single-component, non-crystalline, linear-chain polymers, but extending this to polymer blends and composite materials would introduce very substantial new complexities. Likewise it might be possible to find some general models that could be applied to the class of nanoparticle-reinforced alloys. Nonetheless, I’m sceptical that anyone could work out how to shape and weld big structures out of an oxide dispersion strengthened steel (these steels, reinforced with 2 nm nanoparticles of yttrium oxide, are candidate materials for fusion and 4th-generation fission reactors because of their resistance to creep and radiation damage) without getting someone to make a big enough batch to try it out.
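The "protocols for taking the results at the lower levels" deserve a concrete, if toy, illustration. The sketch below is entirely schematic – the functions, numbers, and formulae are dimensional-analysis stand-ins of my own invention, not a real materials model – but it shows the shape of the hand-off: a lower-level calculation produces a parameter, and only that parameter crosses into the continuum-level model:

```python
# Toy two-level pipeline (schematic, invented for illustration): an
# "atomistic" calculation yields a bond stiffness, which is the only
# quantity passed up to the "continuum" level.

def atomistic_stiffness(bond_energy_j: float, bond_length_m: float) -> float:
    """Crude spring-constant estimate from a pair potential's scale.
    Stand-in for a real atomistic simulation; the formula is a
    dimensional-analysis guess, not a derivation. Returns N/m."""
    return bond_energy_j / bond_length_m**2

def continuum_modulus(spring_constant_n_per_m: float, bond_length_m: float) -> float:
    """Convert a bond spring constant to an effective Young's modulus (Pa).
    The continuum model never sees the atoms, only this parameter."""
    return spring_constant_n_per_m / bond_length_m

# Lower level runs first, with typical atomic-scale inputs...
k = atomistic_stiffness(bond_energy_j=1e-19, bond_length_m=2.5e-10)
# ...and its output parameterises the higher level.
E = continuum_modulus(k, bond_length_m=2.5e-10)
print(f"effective modulus ~ {E:.1e} Pa")
```

Even this cartoon lands in the gigapascal range typical of real solids, but the serious point is structural: each interface like this one needs its own validation against experiment, which is where the hard work of multiscale modelling actually lies.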
What’s all this got to do with the genome? Students of British journalism know that if ever you see a headline in the form of a question, the answer is no. The idea of the Human Genome Project had a degree of rhetorical force because of the idea that underlying all of life was a digital key – the genome – that if decoded would answer many of the unanswered questions of biology. I don’t think it is possible to argue that there is any sort of convincing analogy in the more general world of materials. Clearly at some level it is true to say that the behaviour of any material is determined by what the atoms are doing, and I’d certainly be the first to agree that computational materials science is well worth devoting some effort to. But the genome reference seems to me to be a rhetorical flourish too far.