Do materials even have genomes?

I’ve long suspected that physical scientists have occasional attacks of biology envy, so I suppose I shouldn’t be surprised that the US government announced last year the “Materials Genome Initiative for Global Competitiveness”. Its aim is to “discover, develop, manufacture, and deploy advanced materials at least twice as fast as possible today, at a fraction of the cost.” There’s a genuine problem here – for people used to the rapid pace of innovation in information technology, the very slow rate at which new materials are taken up in new manufactured products is an affront. The solution proposed here is to use those very advances in information technology to boost the rate of materials innovation, just as (the rhetoric invites us to infer) the rate of progress in biology has been boosted by big-data-driven projects like the Human Genome Project.

There’s no question that many big problems could be addressed by new materials. Building cars and aeroplanes from lighter and stronger materials, like carbon fibre composites, will reduce the energy intensity of transport. Electric cars won’t get very far unless we have better batteries – with much higher energy densities at lower cost – and this needs better materials. Better materials will be needed if we are to harness the energy from the sun on a useful scale, through better photovoltaics and artificial photosynthesis. Looking further ahead, what will stop us from exploiting nuclear fusion on the earth, through the scale-up of fusion reactors such as ITER, isn’t so much that we don’t understand enough nuclear physics, or even that we can’t control plasmas; it will be that we don’t have structural materials able to withstand the heat and radiation damage of a commercially useful fusion power reactor.

So what slows down the pace of materials innovation? One factor is that some of the sectors most in need of materials innovation are very highly regulated – for example civil aerospace and medicine. Some ideologues identify this regulation as the problem. I don’t think it would take many passenger aeroplanes falling out of the sky because of an unexpected bug in the way a new material behaves to discredit this point of view, though. A more fundamental problem is that when a new material is discovered, even if it potentially offers much higher performance than existing materials, it often can’t be fully exploited until people have worked out how to manufacture things with it. There’s a nice historical example from Sheffield – when Benjamin Huntsman invented the first process for mass-producing high quality steel around 1750, the local cutlery industries at first refused to take up the new material, because it was harder than traditional steel and more difficult to weld. Nowadays the replacement of aerospace alloys by carbon fibre composites has been slow; the natural anisotropy of composites makes parts harder to design, the most efficient manufacturing processes are quite different to the machining processes familiar for metals, and while glued joints can be very strong, their quality can be difficult to guarantee.

The proposed solution from the USA is “integrated computational materials engineering”, as described in a National Research Council report of that name. The idea is to move materials science out of the real world into the virtual world of computer simulation, integrating research relevant to all stages of a material’s life cycle, including processing, manufacturing, use and recycling. So rather than waiting to make the material before trying out how to manufacture something from it, the idea is that you do these things in parallel, on a computer, invoking the ideas of big data and open innovation.

This all sounds very timely and convincing. But there are some very fundamental difficulties that make this much harder than it sounds. Even with the fastest computers, you can’t simulate the behaviour of a piece of metal by modelling what all of its atoms are doing – there’s just too big a spread of relevant length and timescales. If you want to study the way different atoms cluster together as you cast an alloy, you need to be concerned with picosecond times and nanometre lengths; but if you want to see what happens to a turbine blade made of that alloy in an operating jet engine, you’re interested in metre lengths and timescales of days and years (it is the slow change in dimension and shape of materials in use – their creep – that often limits their lifetime at high temperatures). What’s needed to bridge this vast range of length and timescales is to model the system at a hierarchy of different levels. At the smallest lengths and times you are interested in what the atoms themselves do; at larger scales there are coarser-grained phenomena – dislocations and grain boundaries, for example – that you might use as the basis of the simulation, while at even larger scales you can construct continuum mathematical models. The art of doing this right lies in connecting all the levels, so that the properties used at one level are derived from the behaviour simulated at the smaller, faster level below it, right down to the atoms (or indeed to the quantum mechanics that characterises the way atoms interact).
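
To make the hand-off between levels concrete, here is a minimal sketch, in Python, of the kind of workflow being proposed, assuming a deliberately toy set-up: a stand-in for a fine-scale calculation supplies a prefactor, stress exponent and activation energy, which are then fed into a standard continuum power-law creep relation. The function names and every numerical value are invented for illustration; they are not taken from any real simulation or material.

    # Illustrative sketch of multiscale parameter passing (toy numbers, not a real model).
    # A stand-in for a fine-scale calculation supplies parameters (prefactor A,
    # stress exponent n, activation energy Q), which are handed to a coarser,
    # continuum-level creep law of the Norton power-law form.

    import math

    R = 8.314  # gas constant, J/(mol K)

    def fine_scale_estimate():
        """Stand-in for an atomistic or mesoscale calculation.

        In a real workflow this might be molecular dynamics or a
        dislocation-dynamics simulation; here it simply returns
        plausible-looking toy numbers.
        """
        A = 1.0e-10   # prefactor (toy value)
        n = 5.0       # stress exponent (toy value)
        Q = 280.0e3   # activation energy for creep, J/mol (toy value)
        return A, n, Q

    def continuum_creep_rate(stress_mpa, temperature_k, A, n, Q):
        """Norton-type power-law creep: strain rate = A * sigma**n * exp(-Q / RT)."""
        return A * stress_mpa ** n * math.exp(-Q / (R * temperature_k))

    A, n, Q = fine_scale_estimate()
    rate = continuum_creep_rate(stress_mpa=100.0, temperature_k=1100.0, A=A, n=n, Q=Q)
    print(f"predicted creep rate: {rate:.3e} per second")

All of the real difficulty, of course, is hidden inside that stand-in fine-scale step, and in checking each hand-off against experiment.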

This kind of multiscale modelling isn’t new – in the context of the behaviour of plastics, for example, this was exactly the aim of Masao Doi’s Octa project. To get it right one needs a very good understanding of the physics of the materials at the various length-scales, protocols for taking the results at the lower levels and converting them into the parameters that feed into the higher-level models, and above all an extensive programme of experimental checking of the predictions at all levels. One key question is how generic this process can be – how big are the classes of materials for which the basic physics looks the same? Within such a class, predictions for different materials would be available by simple changes of parameters, as opposed to having to start from scratch with a completely different set of models. For example, we have good reason to suppose that one set of models works for all single-component, non-crystalline, linear chain polymers, but extending this to polymer blends and composite materials would introduce very substantial new complexities. Likewise it might be possible to find some general models that could be applied to the class of nanoparticle-reinforced alloys. Nonetheless, I’m sceptical that anyone will work out how to shape and weld big structures out of an oxide dispersion strengthened steel (these steels, reinforced with 2 nm nanoparticles of yttrium oxide, are candidate materials for fusion and 4th generation fission reactors, due to their creep resistance and resistance to radiation damage) without getting someone to make a big enough batch to try it out.
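
As a toy illustration of the idea that within a class one model serves many materials, here is a minimal sketch for entangled linear polymer melts, using the well-known empirical scaling of zero-shear viscosity with molecular weight. The polymer names are real, but the parameter values are rough placeholders chosen only to show the shape of the calculation; they are not data.

    # One coarse-grained model for a whole class of materials (entangled linear
    # polymer melts), with each material represented only by its own parameter set.
    # The empirical scaling used is eta0 ~ (M / Me)**3.4; the reference viscosities
    # below are rough placeholder values, and temperature dependence is ignored.

    POLYMER_PARAMETERS = {
        # name: (entanglement molecular weight Me in g/mol,
        #        reference viscosity near M = Me in Pa s; placeholder values)
        "polystyrene": (17_000.0, 1.0e3),
        "polyethylene": (1_200.0, 1.0e1),
    }

    def zero_shear_viscosity(name, molecular_weight):
        """Estimate the zero-shear viscosity (Pa s) of an entangled linear polymer melt."""
        Me, eta_ref = POLYMER_PARAMETERS[name]
        return eta_ref * (molecular_weight / Me) ** 3.4

    for name in POLYMER_PARAMETERS:
        eta = zero_shear_viscosity(name, molecular_weight=200_000.0)
        print(f"{name}: estimated zero-shear viscosity {eta:.2e} Pa s")

Swapping materials within the class means swapping an entry in the parameter table; moving to blends or composites would mean changing the model itself, which is exactly where the new complexity comes in.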

What’s all this got to do with the genome? Students of British journalism know that if ever you see a headline in the form of a question, the answer is no. The idea of the Human Genome Project had a degree of rhetorical force because of the idea that underlying all of life was a digital key – the genome – which, if decoded, would answer many of the unanswered questions of biology. I don’t think it is possible to argue that there is any sort of convincing analogy in the more general world of materials. Clearly at some level it is true to say that the behaviour of any material is determined by what its atoms are doing, and I’d certainly be the first to agree that computational materials science is well worth devoting some effort to. But the genome reference seems to me to be a rhetorical flourish too far.

4 thoughts on “Do materials even have genomes?”

  1. Like in the old nanotechnology debate, it’s all about what (composable) abstractions physics allows us to make, and of course, finding them. Too much focus on the engineer hat, and not enough on the physicist hat, and you get leaky abstractions that just don’t work.

  2. First, it is encouraging to see a lucid commentary on the MGI. I’ve been troubled by this since its announcement too.

    Second, of course, materials don’t have genomes – you can take a given chemical composition, say 1Si + 2O = SiO2, and depending on how it is created and processed, arrive at a large number of different microstructures, of various scales and defect content and character, and thereby very different properties and resulting cost structures. However, that said, if one takes a specific initial set of constituents, and performs the same process sequence time after time on the starting set, one will arrive at the same microstructure and suite of properties.

    There are analogies between how biological and inorganic materials assemble. But before we take them too far, we need to find a common language to describe materials – something that captures all relevant scales, from electronic bonding to microstructure. A great pile of diffractograms, chemical spectra and photomicrographs won’t do it; they need to be integrated across all modes. It’s a good problem. And when it is solved, we might actually start speaking intelligently about fundamental data units that describe materials uniquely, and building generically useful models of their development and behavior.

  3. James, the analogies between inorganic and biological materials are very interesting, but they’re revealing as much in their differences as in their similarities. Both are (almost always) non-equilibrium structures, but inorganic ones are usually generated by taking a system a long way from equilibrium, allowing it to progress towards equilibrium and arresting it before it gets there, while biological ones are produced by keeping the system constantly away from equilibrium by a constant driving input of free energy. But they do share the hierarchy problem, which as you rightly say leads to a serious (but interesting) problem of description.

  4. My understanding comes very much from the atomic side of things, but I think even there a materials genome is still some way off, as James points out in general.
    The idea of a genome relies on the concept that composition dictates properties in some fundamental sense. For biological structures (I’m no biologist so I may be wrong), although composition in an average sense doesn’t directly dictate properties, a protein’s composition does seem to fairly uniquely determine its structure. Firstly, ‘composition’ for a protein is an ordered list, whereas a stoichiometry is very much non-ordered. Secondly, evolution has selected for proteins that can be relied on to form a single kind of structure*. Thirdly, the synthetic conditions are both favourable, e.g. perhaps there are chaperone molecules aiding the formation of the correct folded form, and fixed, i.e. you don’t need to consider any conditions other than those inside the cell that the protein comes from (after all, we aren’t making de-novo proteins, just tweaking those that already exist).

    For materials, this assumption that composition determines structure, and structure determines properties, cannot be made as forcefully. After all, for many inorganic materials, even the composition is something that can require great effort to design.

    The original materials (genome) project, http://www.materialsproject.org, has, I think, some interest to it, despite neglecting most of the issues you deal with above. The idea that we can assess a priori all simple ternary inorganic compounds for their viability in various applications is pretty remarkable. It is not a complete solution to the materials genome problem, but it does take a step along the right path.

    *I’m sure exceptions exist, but they are just that.
