Economics after Moore’s Law

One of the dominating features of the economy over the last fifty years has been Moore’s law, which has led to exponential growth in computing power and exponential drops in its costs. This period is now coming to an end. This doesn’t mean that technological progress in computing will stop dead, nor that innovation in ICT will come to an end, but it is a pivotal change, and I’m surprised that we’re not seeing more discussion of its economic implications.

This reflects, perhaps, the degree to which some economists seem to be both ill-informed and incurious about the material and technical basis of technological innovation (for a very prominent example, see my review of Robert Gordon’s recent widely read book, The Rise and Fall of American Growth). On the other hand, boosters of the idea of accelerating change are happy to accept it as axiomatic that these technological advances will always continue at the same, or faster, rates. Of course, the future is deeply uncertain, and I am not going to make many predictions. But here is my attempt to identify some of the issues.

How we got here

The era of Moore’s law began with the invention of the integrated circuit in 1959. Transistors are the basic building blocks of electronic circuits, and in an integrated circuit many transistors are incorporated on a single component to make a functional device. As the technology for making integrated circuits rapidly improved, Gordon Moore predicted in 1965 that the number of transistors on a single silicon chip would double every year (the doubling time was later revised to 18 months, but in this form the “law” has described the products of the semiconductor industry well ever since).

The full potential of integrated circuits was realised when, in effect, a complete computer was built on a single chip of silicon – a microprocessor. The first microprocessor was made in 1970, to serve as the flight control computer for the F14 Tomcat. Shortly afterwards a civilian microprocessor was released by Intel – the 4004. This was followed in 1974 by the Intel 8080 and its competitors, which were the devices that launched the personal computer revolution.

The Intel 8080 had transistors with a minimum feature size of 6 µm. Moore’s law was driven by a steady reduction in this feature size – by 2000, Intel’s Pentium 4 had transistors more than 30 times smaller. This shrinkage drove the huge increase in computer power between the two chips in two ways. Obviously, more transistors give you more logic gates, and more is better. Less obviously, another regularity known as Dennard scaling states that as transistor dimensions are shrunk, each transistor operates faster and uses less power. The combination of Moore’s law and Dennard scaling led to the golden age of microprocessors, from the mid-1990s onwards, in which every two years a new generation of technology was introduced, each one giving computers that were cheaper and faster than the last.
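To make the scaling rules concrete, here is a minimal back-of-envelope sketch in Python (my own illustration, not from the original article), assuming the 8080’s roughly 6 µm process and a 0.18 µm process for the 2000-era Pentium 4; real devices depart from these idealised exponents.

```python
# Idealised Dennard scaling: if linear dimensions shrink by a factor k,
# transistor density rises as k**2, switching speed rises roughly as k, and
# power per transistor falls roughly as 1/k**2, so power density stays
# constant. The feature sizes below are the 6 um quoted in the text and an
# assumed 0.18 um process for the 2000-era Pentium 4.

k = 6.0 / 0.18                     # linear shrink factor, ~33x

density_gain = k ** 2              # transistors per unit area
speed_gain = k                     # idealised clock-speed gain
power_per_transistor = 1 / k ** 2  # idealised power per device

print(f"linear shrink: {k:.0f}x")
print(f"density gain: {density_gain:.0f}x")
print(f"ideal speed gain: {speed_gain:.0f}x")
print(f"ideal power per transistor: {power_per_transistor:.1e} of the original")
```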

This golden age began to break down around 2004. Transistors were still shrinking, but the first physical limit was encountered: further increases in clock speed became impossible to sustain, because the processors simply ran too hot. To get round this, a new strategy was adopted – multiple cores. The transistors weren’t getting much faster, but more computer power came from having more of them, at the cost of some software complexity. This marked a break in the curve of improvement of computer power with time, as shown in the figure below.


Computer performance trends as measured by the SPECfp2000 standard for floating point performance, normalised to a typical 1985 value. This shows an exponential growth in computer power from 1985 to 2004 at a compound annual rate exceeding 50%, and slower growth between 2004 and 2010. From “The Future of Computing Performance: Game Over or Next Level?”, National Academies Press, 2011.

In this period, transistor dimensions were still shrinking, even if the transistors weren’t getting faster, and the cost per transistor was still going down. But as dimensions shrank to tens of nanometers, chip designers began to run out of room in the plane of the chip, and further increases in density were only possible by moving into the third dimension. The “FinFET” design, introduced in 2011, essentially stood the transistors on their side. At this point the reduction in cost per transistor began to level off, and since then the development cycle has begun to slow, with Intel announcing a move from a two year cycle to one of three years.

The cost of sustaining Moore’s law can be measured in diminishing returns from R&D efforts (estimated by Bloom et al as a roughly 8-fold increase in research effort, measured as R&D expenditure deflated by researcher salaries, from 1995 to 2015), and above all by rocketing capital costs.
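As a quick check on what that estimate implies (my own arithmetic, not Bloom et al’s), an 8-fold rise over those twenty years corresponds to research effort growing at roughly 11% per year, compounded:

```python
# Implied compound annual growth in research effort for an 8-fold increase
# between 1995 and 2015 (20 years).
growth_rate = 8 ** (1 / 20) - 1
print(f"implied annual growth: {growth_rate:.1%}")  # about 11% per year
```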

Oligopoly concentration

The cost of the most advanced semiconductor factories (fabs) now exceeds $10 billion, with individual tools approaching $100 million. This rocketing cost of entry means that now only four companies in the world have the capacity to make semiconductor chips at the technological leading edge.

These firms are Intel (USA), Samsung (Korea), TSMC (Taiwan) and Global Foundries (USA/Singapore based, but owned by the Abu Dhabi sovereign wealth fund). Other important names in semiconductors are now “fabless” – they design chips that are then manufactured in fabs operated by one of these four. These fabless firms include nVidia – famous for the graphics processing units that have been so important for computer games, but which are now becoming important for the high performance computing needed for AI and machine learning – and ARM (until recently UK based and owned, but recently bought by Japan’s SoftBank), designer of low power CPUs for mobile devices.

It’s not clear to me how the landscape evolves from here. Will there be further consolidation? Or, in an environment of increasing economic nationalism, will ambitious nations regard advanced semiconductor manufacture as a necessary sovereign capability, to be acquired even in the teeth of pure economic logic? Of course, I’m thinking mostly of China in this context – its government has a clearly stated policy of attaining technological leadership in advanced semiconductor manufacturing.

Cheap as chips

The flip-side of diminishing returns and slowing development cycles at the technological leading edge is that it will make sense to keep those fabs making less advanced devices in production for longer. And since so much of the cost of an IC is essentially the amortised cost of capital, once that is written off the marginal cost of making more chips in an old fab is small. So we can expect the cost of trailing edge microprocessors to fall precipitously. This provides the economic driving force for the idea of the “internet of things”. Essentially, it will be possible to provide a degree of product differentiation by introducing logic circuits into all sorts of artefacts – putting a microprocessor in every toaster, in other words.
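Here is a toy illustration of that argument in Python, with entirely hypothetical numbers for output volume and marginal cost (the $10 billion fab cost is the figure quoted earlier; everything else is assumed purely for illustration):

```python
# Toy model of why chips from a fully depreciated fab become so cheap: while
# the fab is being written off, most of the cost per chip is amortised
# capital; afterwards, only the marginal cost (wafers, labour, energy) is left.

fab_cost = 10e9            # leading-edge fab, ~$10 billion (figure from the text)
chips_over_writeoff = 5e9  # hypothetical output over the write-off period
marginal_cost = 1.0        # hypothetical marginal cost per chip, in dollars

cost_during_writeoff = fab_cost / chips_over_writeoff + marginal_cost
cost_after_writeoff = marginal_cost

print(f"cost per chip while amortising the fab: ${cost_during_writeoff:.2f}")
print(f"cost per chip once the fab is written off: ${cost_after_writeoff:.2f}")
```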

Although there are applications where cheap embedded computing power can be very valuable, I’m not sure this is a universally good idea. There is a danger that we will accept relatively marginal benefits (the ability to switch our home lights on with our smart-phones, for example) at the price of some costs that may not be immediately obvious: a general loss of transparency and robustness in everyday technologies, and the potential for some insidious harms, through vulnerability to hostile cyberattacks, for example. Caution is required!

Travelling without a roadmap

Another important feature of the golden age of Moore’s law and Dennard scaling was a social innovation – the International Technology Roadmap for Semiconductors. This was an important (and I think unique) device for coordinating and setting the pace for innovation across a widely dispersed industry, comprising equipment suppliers, semiconductor manufacturers, and systems integrators. The relentless cycle of improvement demanded R&D in all sorts of areas – the materials science of the semiconductors, insulators and metals and their interfaces, the chemistry of resists, the optics underlying the lithography process – and this R&D needed to be started not in time for the next upgrade, but many years in advance of when it was anticipated it would be needed. Meanwhile businesses could plan products that wouldn’t be viable with the computer power available at that time, but which could be expected in the future.

Moore’s law was a self-fulfilling prophecy, and the ITRS was the document that both predicted the future and made sure that that future happened. I write this in the past tense, because there will be no more roadmaps. Changing industry conditions – especially the concentration of leading edge manufacturing – have brought this phase to an end, and the last International Technology Roadmap for Semiconductors was issued in 2015.

What does all this mean for the broader economy?

The impact of fifty years of exponential technological progress in computing seems obvious, yet quantifying its contribution to the economy is more difficult. In developed countries, the information and communication technology sector has itself been a major part of the economy which has demonstrated very fast productivity growth. In fact, the rapidity of technological change has itself made the measurement of economic growth more difficult, with problems arising in accounting for the huge increases in quality at a given price for personal computers, and the introduction of entirely new devices such as smartphones.

But the effects of these technological advances on the rest of the economy must surely be even larger than the direct contribution of the ICT sector. Indeed, even countries without a significant ICT industry of their own must also have benefitted from these advances. The classical theory of economic growth due to Solow can’t deal with this, as it isn’t able to handle a situation in which different areas of technology are advancing at very different rates (a situation which has been universal since at least the industrial revolution).

One attempt to deal with this was made by Oulton, who used a two-sector model to take into account the effect of improved ICT technology in other sectors, through increasing the cost-effectiveness of ICT related capital investment in those sectors. This does allow one to make some account for the broader impact of improvements in ICT, but I still don’t think it handles the changes in relative value over time that different rates of technological improvement imply. Nonetheless, it allows one to argue for substantial contributions to economic growth from these developments.

Have we got the power?

I want to conclude with two questions for the future. I’ve already discussed the power consumption – and dissipation – of microprocessors in the context of the mid-2000s end of Dennard scaling. Any user of a modern laptop is conscious of how much heat it generates. Aggregating the power demands of all the computing devices in the world produces a total that is a significant fraction of total energy use, and which is growing fast.

The plot below shows an estimate for the total world power consumption of ICT. This is highly approximate (and as far as the current situation goes, it looks, if anything, somewhat conservative). But it does make clear that the current trajectory is unsustainable in the context of the need to cut carbon emissions dramatically over the coming decades.


Estimated total world energy consumption for information and communication technology. From Rebooting the IT Revolution: a call to action – Semiconductor Industry Association, 2015

These rising power demands aren’t driven by more laptops – it’s the rising demands of the data centres that power the “cloud”. As smart phones became ubiquitous, we’ve seen the computing and data storage that they need move from the devices themselves, limited as they are by power consumption, to the cloud. A service like Apple’s Siri relies on technologies of natural language processing and machine learning that are much too computationally intensive for the processor in the phone, and instead are run on the vast banks of microprocessors in one of Apple’s data centres.

The energy consumption of these data centres is huge and growing. By 2030, a single data centre is expected to be using 2000 MkWh per year, of which 500 MkWh is needed for cooling alone. This amounts to a power consumption of around 0.2 GW, a substantial fraction of the output of a large power station. Computer power is starting to look a little like aluminium, something that is exported from regions where electricity is cheap (and hopefully low carbon in origin). However there are limits to this concentration of computer power – the physical limit on the speed of information transfer imposed by the speed of light is significant, and the volume of information is limited by available bandwidth (especially for wireless access).
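The arithmetic behind those figures is simple enough to check, and the speed-of-light point can be put in numbers too (the 1000 km distance below is a hypothetical example of mine, not a figure from the text):

```python
# 2000 MkWh (million kWh) per year, averaged over the 8760 hours in a year,
# corresponds to an average power draw of roughly 0.2 GW.
annual_energy_kwh = 2000e6
hours_per_year = 365 * 24
average_power_gw = annual_energy_kwh / hours_per_year / 1e6   # kW -> GW
print(f"average power draw: {average_power_gw:.2f} GW")        # ~0.23 GW

# The speed-of-light constraint: a round trip to a data centre 1000 km away
# (a hypothetical distance) takes at least ~10 ms, since light in optical
# fibre travels at roughly two thirds of c.
round_trip_km = 2 * 1000
speed_in_fibre_km_per_s = 2e5
print(f"minimum round-trip latency: {round_trip_km / speed_in_fibre_km_per_s * 1e3:.0f} ms")
```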

The other question is what we need that computing power for. Much of the driving force for increased computing power in recent years has come from gaming – it is the power needed to simulate and render realistic virtual worlds that has driven the development of powerful graphics processing units. Now it is the demands of artificial intelligence and machine learning that are straining current capacity. Truly autonomous systems, like self-driving cars, will need stupendous amounts of computer power, and presumably for true autonomy much of this computing will need to be done locally rather than in the cloud. I don’t know how big this challenge is.

Where do we go from here?

In the near term, Moore’s law is good for another few cycles of shrinkage, moving further into the third dimension by stacking more layers vertically, and shrinking dimensions further by using extreme UV for lithography. How far can this take us? The technical problems of EUV are substantial, and have already absorbed major R&D investments. The current approaches for multiplying transistors will reach their end-point, whether killed by technical or economic problems, perhaps within the next decade.

Other physical substrates for computing are possible and are the subject of R&D at the moment, but none yet has a clear pathway for implementation. Quantum computing excites physicists, but we’re still some way from a manufacturable and useful device for general purpose computing.

There is one cause for optimism, though, which relates to energy consumption. There is a physical lower limit on how much energy it takes to carry out a computation – the Landauer limit. The plot above shows that our current technology for computing consumes energy at a rate which is many orders of magnitude greater than this theoretical limit (and for that matter, it is much more energy intensive than biological computing). There is huge room for improvement – the only question is whether we can deploy R&D resources to pursue this goal on the scale that’s gone into computing as we know it today.
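For concreteness, here is the Landauer limit worked through in Python; the femtojoule figure used for comparison is a rough, assumed order of magnitude for a logic operation in current CMOS, not a number taken from the article.

```python
import math

# Landauer limit: the minimum energy to erase one bit of information is
# k_B * T * ln(2). At room temperature this is a few zeptojoules.
k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # room temperature, K

landauer_j = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_j:.2e} J per bit")   # ~2.9e-21 J

# For illustration only: assuming ~1 femtojoule per logic operation as a
# rough figure for today's CMOS (an assumption, not a figure from the text),
# current hardware sits five to six orders of magnitude above the limit.
assumed_cmos_j = 1e-15
print(f"gap to the limit: roughly {assumed_cmos_j / landauer_j:.0e}x")
```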

See also Has Moore’s Law been repealed? An economist’s perspective, by Kenneth Flamm, in Computing in Science and Engineering, 2017.

Did the government build the iPhone? Would the iPhone have happened without governments?

The iPhone must be one of the most instantly recognisable symbols of the modern “tech economy”. So it was an astute choice by Mariana Mazzucato to put it at the centre of her argument about the importance of governments in driving the development of technology. Mazzucato’s book – The Entrepreneurial State – argues that technologies like the iPhone depended on the ability and willingness of governments to take on technological risks that the private sector is not prepared to assume. She notes also that it is that same private sector which captures the rewards of the government’s risk taking. The argument is a powerful corrective to the libertarian tendencies and the glorification of the free market that are particularly associated with Silicon Valley.

Her argument could, though, be caricatured as saying that the government built the iPhone. But to put it this way would be taking the argument much too far – the contributions, not just of Apple, but of many other companies in a worldwide supply chain that have developed the technologies that the iPhone integrates, are enormous. The iPhone was made possible by the power of private sector R&D, the majority of it not in fact done by Apple, but by many companies around the world, companies that most people have probably not even heard of.

And yet this private sector R&D was indeed encouraged, driven, and sometimes funded outright, by government (in fact, more than one government – although the USA has had a major role, other governments have played their parts too in creating Apple’s global supply chain). It drew on many results from publicly funded research, in universities and public research institutes around the world.

So, while it isn’t true to say the government built the iPhone, what is true is to say that the iPhone would not have happened without governments. We need to understand better the ways government and the private sector interact to drive innovation forward, not just to get a truer picture of where the iPhone came from, but in order to make sure we continue to get the technological innovations we want and need.

Integrating technologies is important, but innovation in manufacturing matters too

The iPhone (and the modern smartphone more generally) is, truly, an awe-inspiring integration of many different technologies. It’s a powerful computer, with an elegant and easy to use interface; it’s a mobile phone which connects to the sophisticated, computer driven infrastructure that constitutes the worldwide cellular telephone system; and through that wireless data infrastructure it provides an interface to powerful computers and databases worldwide. Many of the new applications of smartphones (as enablers, for example, of the so-called “sharing economy”) depend on the package of powerful sensors a phone carries – to infer its location (the GPS unit), to determine what is happening to it physically (the accelerometers), and to record images of its surroundings (the camera sensor).

Mazzucato’s book traces back the origins of some of the technologies behind the iPod, like the hard drive and the touch screen, to government funded work. This is all helpful and salutary to remember, though I think there are two points that are underplayed in this argument.

Firstly, I do think that the role of Apple itself (and its competitors), in integrating many technologies into a coherent design supported by usable software, shouldn’t be underestimated – though it’s clear that Apple in particular has been enormously successful in finding the position that extracts maximum value from physical technologies that have been developed by others.

Secondly, when it comes to those physical technologies, one mustn’t underestimate the effort that needs to go in to turn an initial discovery into a manufacturable product. A physical technology – like a device to store or display information – is not truly a technology until it can be manufactured. To take an initial concept from an academic discovery or a foundational patent to the point at which one has a working, scalable manufacturing process involves a huge amount of further innovation. This process is expensive and risky, and the private sector has often proved unwilling to bear these costs and risks without support from the state, in one form or another. The history of some of the many technologies that are integrated in devices like the iPhone illustrates the complexities of developing technologies to the point of mass manufacture, and shows how the roles of governments and the private sector have been closely intertwined.

For example, the ultraminiaturised hard disk drive that made the original iPod possible (now largely superseded by cheaper, bigger, flash memory chips) did indeed, as pointed out by Mazzucato, depend on the Nobel prize-winning discovery by Albert Fert and Peter Grünberg of the phenomenon of giant magnetoresistance. This is a fascinating and elegant piece of physics, which suggested a new way of detecting magnetic fields with great sensitivity. But to take this piece of physics and devise a way of using it in practice to create smaller, higher capacity hard disk drives, as Stuart Parkin’s group at IBM’s Almaden Laboratory did, was arguably just as significant a contribution.

How liquid crystal displays were developed

The story of the liquid crystal display is even more complicated.

Why isn’t the UK the centre of the organic electronics industry?

In February 1989, Jeremy Burroughes, at that time a postdoc in the research group of Richard Friend and Donal Bradley at Cambridge, noticed that a diode structure he’d made from the semiconducting polymer PPV glowed when a current was passed through it. This wasn’t the first time that interesting optoelectronic properties had been observed in an organic semiconductor, but it’s fair to say that it was the resulting Nature paper, which has now been cited more than 8000 times, that really launched the field of organic electronics. The company that they founded to exploit this discovery, Cambridge Display Technology, was floated on the NASDAQ in 2004 at a valuation of $230 million. Now organic electronics is becoming mainstream; a popular mobile phone, the Samsung Galaxy S, has an organic light emitting diode screen, and further mass market products are expected in the next few years. But these products will be made in factories in Japan, Korea and Taiwan; Cambridge Display Technology is now a wholly owned subsidiary of the Japanese chemical company Sumitomo. How is it that, despite an apparently insurmountable academic lead in the field and a successful history of university spin-outs, the UK is likely to end up at best a peripheral player in this new industry?

A billion dollar nanotech spinout?

The Oxford University spin-out Oxford Nanopore Technologies created a stir last month by announcing that it would be bringing to market this year systems to read out the sequence of individual DNA molecules by threading them through nanopores. It’s claimed that this will allow a complete human genome to be sequenced in about 15 minutes for a few thousand dollars; the company is also introducing a cheap, disposable sequencer which will sell for less than $900. Speculation has now begun about the future of the company, with valuations of $1–2 billion being discussed if they decide to take the company public in the next 18 months.

It’s taken a while for this idea of sequencing a single DNA molecule by directly reading out its bases to come to fruition. The original idea came from David Deamer and Harvard’s Dan Branton in the mid-1990s; from Hagan Bayley, in Oxford, came the idea of using an engineered derivative of a natural pore-forming protein to form the hole through which the DNA is threaded. I’ve previously reported progress towards this goal here, in 2005, and in more detail here, in 2007. The Oxford Nanopore announcement gives us some clues as to the key developments since then. The working system uses a polymer membrane, rather than a lipid bilayer, to carry the pore array, which undoubtedly makes the system much more robust. The pore is still created from a pore-forming protein, though this has been genetically engineered to give greater discrimination between different combinations of bases as the DNA is threaded through the hole. And, perhaps most importantly, an enzyme is used to grab DNA molecules from solution and feed them through the pore. In practice, the system will be sold as a set of modular units containing the electronics and interface, together with consumables cartridges, presumably including the nanopore arrays and the enzymes. The idea is to take single molecule analysis beyond DNA to include RNA and proteins, as well as various small molecules, with a different cartridge being available for each type of experiment. This will depend on the success of their program to develop a whole family of different pores able to discriminate between different types of molecules.
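To illustrate the principle of reading a sequence from current levels, here is a purely illustrative cartoon in Python – my own sketch, not Oxford Nanopore’s algorithm; in the real device the current depends on several bases sitting in the pore at once, and the signal processing is far more sophisticated.

```python
import random

# Cartoon of nanopore base-calling: each base is assigned a characteristic
# ionic-current level (hypothetical numbers, in picoamps), a noisy trace is
# simulated, and bases are called by matching each sample to the nearest level.
LEVELS = {"A": 50.0, "C": 42.0, "G": 35.0, "T": 28.0}

def simulate_trace(sequence, noise=1.5):
    """Return a noisy current sample for each base threaded through the pore."""
    return [LEVELS[base] + random.gauss(0.0, noise) for base in sequence]

def call_bases(trace):
    """Assign each current sample to the base with the nearest characteristic level."""
    return "".join(min(LEVELS, key=lambda b: abs(LEVELS[b] - sample)) for sample in trace)

truth = "GATTACAGATTACA"
print(truth)
print(call_bases(simulate_trace(truth)))
```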

What will the impact of this development be, if everything works as well as is being suggested? (The prudent commentator should stress the if here, as we haven’t yet seen any independent trials of the technology). Much has already been written about the implications of cheap – less than $1000 – sequencing of the human genome, but I can’t help wondering whether this may not actually be the big story here. And in any case, that goal may end up being reached with or without Oxford Nanopore, as this recent Nature News article makes clear. We still don’t know whether the Oxford Nanopore technique will yet be competitive on accuracy and price with the other contending approaches. I wonder, though, whether we are seeing here something from the classic playbook for a disruptive innovation. The $900 device in particular looks like it’s intended to create new markets for cheap, quick and dirty sequencing, to provide an income stream while the technology is improved further – with better, more selective pores and better membranes (inevitably, perhaps, Branton’s group at Harvard reported using graphene membranes for threading DNA in Nature last year). As computers continue to get faster, cheaper and more powerful, the technology will automatically benefit from these advances too – fragmentary and perhaps imperfect sequence information has much greater value in the context of vast existing sequence libraries and the data processing power to use them. Perhaps applications for this will be found in forensic and environmental science, diagnostics, microbiology and synthetic biology. The emphasis on molecules other than DNA is interesting too; single molecule identification and sequencing of RNA opens up the possibility of rapidly identifying what genes are being transcribed in a cell at a given moment (the so-called “transcriptome”).

The impact on the investment markets for nanotechnology is likely to be substantial. Existing commercialisation efforts around nanotechnology have been disappointing so far, but a company success on the scale now being talked about would undoubtedly attract more money into the area – perhaps it might also persuade some of the companies currently sitting on huge piles of cash that they might usefully invest some of this in a little more research and development. What’s significant about Oxford Nanopore is that it is operating in a sweet spot between the mundane and the far-fetched. It’s not a nanomaterials company, essentially competing in relatively low margin speciality chemicals, nor is it trying to make a nanofactory or nanoscale submarine or one of the other more radical visions of the nanofuturists. Instead, it’s using the lessons of biology – and indeed some of the components of molecular biology – to create a functional device that operates on the true single molecule level to fill real market needs. It also seems to be displaying a commendable determination to capture all the value of its inventions, rather than licensing its IP to other, bigger companies.

Finally, not the least of the impacts of a commercial and technological success on the scale being talked about would be on nanotechnology itself as a discipline. In the last few years the field’s early excitement has been diluted by a sense of unfulfilled promise, especially, perhaps, in the UK; last year I asked “Why has the UK given up on nanotechnology?” Perhaps it will turn out that some of that disillusionment was premature.

Good capitalism, bad capitalism and turning science into economic benefit

Why isn’t the UK more successful at converting its excellent science into wealth creating businesses? This is a perennial question – and one that’s driven all sorts of initiatives to get universities to handle their intellectual property better, to develop closer partnerships with the private sector and to create more spinout companies. Perhaps UK universities shied away from such activities thirty years ago, but that’s not the case now. In my own university, Sheffield, we have some very successful and high profile activities in partnership with companies, such as our Advanced Manufacturing Research Centre with Boeing, shortly to be expanded as part of an Advanced Manufacturing Institute with heavy involvement from Rolls Royce and other companies. Like many universities, we have some interesting spinouts of our own. And yet, while the UK produces many small high tech companies, we just don’t seem to be able to grow those companies to a scale where they’d make a serious difference to jobs and economic growth. To take just one example, the Royal Society’s Scientific Century report highlighted Plastic Logic, a company making flexible displays for applications like e-book readers, based on great research by Richard Friend and Henning Sirringhaus at Cambridge University. It’s a great success story for Cambridge, but the picture for the UK economy is less positive. The company’s head office is in California, its first factory was in Leipzig and its major manufacturing facility will be in Russia – the latter not unrelated to the fact that the Russian agency Rusnano invested $150 million in the company earlier this year.

This seems to reflect a general problem – why aren’t UK based investors more willing to put money into small technology based companies to allow them to grow? Again, this is something people have talked about for a long time, and there’ve been a number of more or less (usually less) successful government interventions to address the issue. The latest of these was announced in the Conservative party conference speech by the Chancellor of the Exchequer, George Osborne – “credit easing” to “help solve that age old problem in Britain: not enough long term investment in small business and enterprise.”

But it’s not as if there isn’t any money in the UK to be invested – so the question to ask isn’t why money isn’t invested in high tech businesses, it is why money is invested in other places instead. The answer must be simple – because those other opportunities offer higher returns, at lower risk, on shorter timescales. The problem is that many of these opportunities don’t support productive entrepreneurship, which brings new products and services to people who need them and generates new jobs. Instead, to use a distinction introduced by economist William Baumol (see, for example, his article Entrepreneurship: Productive, Unproductive, and Destructive, PDF), they support unproductive entrepreneurship, which exploits suboptimal reward structures in an economy to make profits without generating real value. Examples of this kind of activity might include restructuring companies to maximise tax evasion, speculating in financial and property markets when the downside risk is shouldered by the government, exploiting privatisations and public/private partnerships that have been structured to the disadvantage of the tax-payer, and generating capital gains which result from changes in planning and tax law.

Most criticism of this kind of bad capitalism focuses on issues of fairness and equity, and on the damage to the democratic process done by the associated lobbying and influence-peddling. But it causes deeper problems than this – money and effort used to support unproductive entrepreneurship is unavailable to support genuine innovation, to create new products and services that people and society want and need. In short, bad capitalism crowds out good capitalism, and innovation suffers.

Food nanotechnology – their Lordships deliberate

Today I found myself once again in Westminster, giving evidence to a House of Lords Select Committee, which is currently carrying out an inquiry into the use of nanotechnology in food. Readers not familiar with the intricacies of the British constitution need to know that the House of Lords is one of the branches of Parliament, the UK legislature, with powers to revise and scrutinise legislation, and through its select committees, hold the executive to account. Originally its membership was drawn from the hereditary peerage, with a few bishops thrown in; recently as part of a slightly ramshackle program of constitutional reform the influence of the hereditaries has been much reduced, with the majority of the chamber being made up of members appointed for life by the government. These are drawn from former politicians and others prominent in public life. Whatever the shortcomings of this system from the democratic point of view, it does mean that the membership includes some very well informed people. This inquiry, for example, is being chaired by Lord Krebs, a very distinguished scientist who previously chaired the Food Standards Agency.

All the evidence submitted to the committee is publicly available on their website; this includes submissions from NGOs, Industry Organisations, scientific organisations and individual scientists. There’s a lot of material there, but together it’s actually a pretty good overview of all sides of the debate. I’m looking forward to seeing their Lordships’ final report.

Deja vu all over again?

Today the UK’s Royal Commission on Environmental Pollution released a new report on the potential risks of new nanomaterials and the implications of this for regulation and the governance of innovation. The report – Novel Materials in the Environment: The case of nanotechnology – is well-written and thoughtful, and will undoubtedly have considerable impact. Nonetheless, four years after the Royal Society report on nanotechnology, and nearly two years after the Council of Science and Technology’s critical verdict on the government’s response to that report, some of the messages are depressingly familiar. There are real uncertainties about the potential impact of nanoparticles on human health and the environment; to reduce these uncertainties some targeted research is required; this research isn’t going to appear by itself, and some co-ordinated programs are needed. So what’s new this time around?

Andrew Maynard picks out some key messages. The Commission is very insistent on the need to move beyond considering nanomaterials as a single class; attempts to regulate solely on the basis of size are misguided and instead one needs to ask what the materials do and how they behave. In terms of the regulatory framework, the Commission was surprisingly (to some observers, I suspect) sanguine about the suitability and adaptability of the EU’s regulatory framework for chemicals, REACH, which, it believes, can readily be modified to meet the special challenges of nanomaterials, as long as the research needed to fill the knowledge gaps gets done.

Where the report does depart from some previous reports is in a rather subtle and wide-ranging discussion of the conceptual basis of regulation for fast-moving new technologies. It identifies three contrasting positions, none of which it finds satisfactory. The “pro-innovation” position calls for regulators to step back and let the technology develop unhindered, pausing only when positive evidence of harm emerges. “Risk-based” approaches allow for controls to be imposed, but only when clear scientific grounds for concern can be stated, and with a balance between the cost of regulating and the probability and severity of the danger. The “precautionary” approach puts the burden of proof on the promoters of new technology to show that it is, beyond any reasonable doubt, safe, before it is permitted. The long history of unanticipated consequences of new technology warns us against the first stance, while the second position assumes that the state of knowledge is sufficient to do these risk/benefit analyses with confidence, which isn’t likely to be the case for most fast moving new technologies. But the precautionary approach falls down, too, if, as the Commission accepts, the new technologies have the potential to yield significant benefits that would be lost if they were to be rejected on the grounds of inevitably incomplete information. To resolve this dilemma, the Commission seeks an adaptive system of regulation that aims, above all, to avoid technological inflexibility. The key, in their view, is to innovate in a way that doesn’t lead society down paths from which it is difficult to reverse, if new information should arise about unanticipated threats to health or the environment.

The report has generated a substantial degree of interest in the press, and, needless to say, the coverage doesn’t generally reflect these subtle discussions. At one end, the coverage is relatively sober, for example Action urged over nanomaterials, from the BBC, and Tight regulation urged on nanotechnology, from the Financial Times. In the Daily Mail, on the other hand, we have Tiny but toxic: Nanoparticles with asbestos-like properties found in everyday goods. Notwithstanding Tim Harper’s suggestion that some will welcome this sort of coverage if it injects some urgency into the government’s response, this is not a good place for nanotechnology to be finding itself.

Nanocosmetics in the news

Uncertainties surrounding the use of nanoparticles in cosmetics made the news in the UK yesterday; this followed a press release from the consumer group Which? – Beauty must face up to nano. This is related to a forthcoming report in their magazine, in which a variety of cosmetic companies were asked about their use of nanotechnologies (I was one of the experts consulted for commentary on the results of these inquiries).

The two issues that concern Which? are some continuing uncertainties about nanoparticle safety and the fact that it hasn’t generally been made clear to consumers that nanoparticles are being used. Their head of policy, Sue Davies, emphasizes that their position isn’t blanket opposition: “We’re not saying the use of nanotechnology in cosmetics is a bad thing, far from it. Many of its applications could lead to exciting and revolutionary developments in a wide range of products, but until all the necessary safety tests are carried out, the simple fact is we just don’t know enough.” Of 67 companies approached for information about their use of nanotechnologies, only 8 replied with useful information, prompting Sue to comment: “It was concerning that so few companies came forward to be involved in our report and we are grateful for those that were responsible enough to do so. The cosmetics industry needs to stop burying its head in the sand and come clean about how it is using nanotechnology.”

On the other hand, the companies that did supply information include many of the biggest names – L’Oreal, Unilever, Nivea, Avon, Boots, Body Shop, Korres and Green People – all of whom use nanoparticulate titanium dioxide (and, in some cases, nanoparticulate zinc oxide). This makes clear just how widespread the use of these materials is (and goes some way to explaining where the estimated 130 tonnes of nanoscale titanium dioxide being consumed annually in the UK is going).

The story is surprisingly widely covered by the media (considering that yesterday was not exactly a slow news day). Many focus on the angle of lack of consumer information, including the BBC, which reports that “consumers cannot tell which products use nanomaterials as many fail to mention it”, and the Guardian, which highlights the poor response rate. The story is also covered in the Daily Telegraph, while the Daily Mail, predictably, takes a less nuanced view. Under the headline The beauty creams with nanoparticles that could poison your body, the Mail explains that “the size of the particles may allow them to permeate protective barriers in the body, such as those surrounding the brain or a developing baby in the womb.”

What are the issues here? There is, if I can put it this way, a cosmetic problem, in that there are some products on the market making claims that seem at best unwise – I’m thinking here of the claimed use of fullerenes as antioxidants in face creams. It may well be that these ingredients are present in such small quantities that there is no possibility of danger, but given the uncertainties surrounding fullerene toxicology, putting products like this on the market doesn’t seem very smart, and is likely to cause reputational damage to the whole industry. There is a lot more data about nanoscale titanium dioxide, and the evidence that these particular nanoparticles aren’t able to penetrate healthy skin looks reasonably convincing. They deliver an unquestionable consumer benefit, in terms of screening out harmful UV rays, and the alternatives – organic small molecule sunscreens – are far from being above suspicion. But, as pointed out by the EU’s Scientific Committee on Consumer Products, there does remain uncertainty about the effect of titanium dioxide nanoparticles on damaged and sun-burned skin. Another issue, recently highlighted by Andrew Maynard, is the degree to which the action of light on TiO2 nanoparticles causes reactive and potentially damaging free radicals to be generated. This photocatalytic activity can be suppressed by choosing the right crystal structure (the rutile form of titanium dioxide should be used, rather than anatase), by introducing dopants, and by coating the surface of the nanoparticles. The research cited by Maynard makes it clear that not all sunscreens use grades of titanium dioxide that completely suppress photocatalytic activity.

This poses a problem. Consumers don’t at present have ready access to information as to whether nanoscale titanium dioxide is used at all, let alone whether the nanoparticles in question are in the rutile or anatase form. Here, surely, is a case where, if the companies following best practice provided more information, they might avoid their reputation being damaged by less careful operators.

What’s meant by “food nanotechnology”?

A couple of weeks ago I took part in a dialogue meeting in Brussels organised by the CIAA, the Confederation of the Food and Drink Industries of the EU, about nanotechnology in food. The meeting involved representatives from big food companies, from the European Commission and agencies like the European Food Safety Authority, together with consumer groups like BEUC, and the campaigning group Friends of the Earth Europe. The latter group recently released a report on food nanotechnology – Out of the laboratory and on to our plates: Nanotechnology in food and agriculture; according to the press release, this “reveals that despite concerns about the toxicity risks of nanomaterials, consumers are unknowingly ingesting them because regulators are struggling to keep pace with their rapidly expanding use.” The position of the CIAA is essentially that nanotechnology is an interesting technology currently at the research stage, rather than one that has yet made it into products. One can get a good idea of the research agenda of the European food industry from the European Technology Platform Food for Life. As the only academic present, I tried in my contribution to clarify a little the different things people mean by “food nanotechnology”. Here, more or less, is what I said.

What makes the subject of nanotechnology particularly confusing and contentious is the ambiguity of the definition of nanotechnology when applied to food systems. Most people’s definitions are something along the lines of “the purposeful creation of structures with length scales of 100 nm or less to achieve new effects by virtue of those length-scales”. But when one attempts to apply this definition in practice one runs into difficulties, particularly for food. It’s this ambiguity that lies behind the difference of opinion we’ve heard about today over how widespread the use of nanotechnology in foods already is. On the one hand, Friends of the Earth says they know of 104 nanofood products on the market already (and some analysts suggest the number may be more than 600). On the other hand, the CIAA (the Confederation of Food and Drink Industries of the EU) maintains that, while active research in the area is going on, no actual nanofood products are yet on the market. In fact, both parties are, in their different ways, right; the problem is the ambiguity of definition.

The issue is that food is naturally nano-structured, so that too wide a definition ends up encompassing much of modern food science, and indeed, if you stretch it further, some aspects of traditional food processing. Consider the case of “nano-ice cream”: the FoE report states that “Nestlé and Unilever are reported to be developing a nano-emulsion based ice cream with a lower fat content that retains a fatty texture and flavour”. Without knowing the details of this research, what one can be sure of is that it will involve essentially conventional food processing technology in order to control fat globule structure and size on the nanoscale. If the processing technology is conventional (and the economics of the food industry dictates that it must be), what makes this nanotechnology, if anything does, is the fact that analytical tools are available to observe the nanoscale structural changes that lead to the desirable properties. What makes this nanotechnology, then, is simply knowledge. In the light of the new knowledge that new techniques give us, we could even argue that some traditional processes, which it now turns out involve manipulation of structure on the nanoscale to achieve desirable effects, would constitute nanotechnology if it were defined this widely. For example, traditional whey cheeses like ricotta are made by creating the conditions for the whey proteins to aggregate into protein nanoparticles. These subsequently aggregate to form the particulate gels that give the cheese its desirable texture.

It should be clear, then, that there isn’t a single thing one can call “nanotechnology” – there are many different technologies, producing many different kinds of nano-materials. These different types of nanomaterials have quite different risk profiles. Consider cadmium selenide quantum dots, titanium dioxide nanoparticles, sheets of exfoliated clay, fullerenes like C60, casein micelles, phospholipid nanosomes – the risks and uncertainties of each of these examples of nanomaterials are quite different and it’s likely to be very misleading to generalise from any one of these to a wider class of nanomaterials.

To begin to make sense of the different types of nanomaterial that might be present in food, there is one very useful distinction. This is between engineered nanoparticles and self-assembled nanostructures. Engineered nanoparticles are covalently bonded, and thus are persistent and generally rather robust, though they may have important surface properties, such as catalytic activity, and they may be prone to aggregate. Examples of engineered nanoparticles include titanium dioxide nanoparticles and fullerenes.

In self-assembled nanostructures, though, molecules are held together by weak forces, such as hydrogen bonds and the hydrophobic interaction. The weakness of these forces renders them mutable and transient; examples include soap micelles, protein aggregates (for example the casein micelles formed in milk), liposomes and nanosomes and the microcapsules and nanocapsules made from biopolymers such as starch.

So what kind of food nanotechnology can we expect? Here are some potentially important areas:

• Food science at the nanoscale. This is about using a combination of fairly conventional food processing techniques supported by the use of nanoscale analytical techniques to achieve desirable properties. A major driver here will be the use of sophisticated food structuring to achieve palatable products with low fat contents.
• Encapsulating ingredients and additives. The encapsulation of flavours and aromas at the microscale to protect delicate molecules and enable their triggered or otherwise controlled release is already widespread, and it is possible that decreasing the lengthscale of these systems to the nanoscale might be advantageous in some cases. We are also likely to see a range of “nutriceutical” molecules come into more general use.
• Water dispersible preparations of fat-soluble ingredients. Many food ingredients are fat-soluble; as a way of incorporating these in food and drink without fat manufacturers have developed stable colloidal dispersions of these materials in water, with particle sizes in the range of hundreds of nanometers. For example, the substance lycopene, which is familiar as the molecule that makes tomatoes red and which is believed to offer substantial health benefits, is marketed in this form by the German company BASF.

What is important in this discussion is clarity – definitions are important. We’ve seen discrepancies between estimates of how widespread food nanotechnology is in the marketplace now, and these discrepancies lead to unnecessary misunderstanding and distrust. Clarity about what we are talking about, and a recognition of the diversity of technologies we are talking about, can help remove this misunderstanding and give us a sound basis for the sort of dialogue we’re participating in today.

From micro to nano for medical applications

I spent yesterday at a meeting at the Institute of Mechanical Engineers, Nanotechnology in Medicine and Biotechnology, which raised the question of what is the right size for new interventions in medicine. There’s an argument that, since the basic operations of cell biology take place on the nano-scale, that’s fundamentally the right scale for intervening in biology. On the other hand, given that many current medical interventions are very macroscopic, operating on the micro-scale may already offer compelling advantages.

A talk from Glasgow University’s Jon Cooper gave some nice examples illustrating this. His title was Integrating nanosensors with lab-on-a-chip for biological sensing in health technologies, and he began with some true nanotechnology. This involved a combination of fluid handling systems for very small volumes with nanostructured surfaces, with the aim of detecting single biomolecules. This depends on a remarkable effect known as surface enhanced Raman scattering. Raman scattering is a type of spectroscopy that can detect chemical groups with what is normally rather low sensitivity. But if one illuminates metals with very sharp asperities, this hugely magnifies the light field very close to the surface, increasing sensitivity by a factor of ten million or so. Systems based on this effect, using silver nanoparticles coated so that pathogens like anthrax will stick to them, are already in commercial use. But Cooper’s group uses, not free nano-particles, but very precisely structured nanosurfaces. Using electron beam lithography his group creates silver split-ring resonators – horseshoe shapes about 160 nm across. With a very small gap one can get field enhancements of a factor of one hundred billion, and it’s this that brings single molecule detection into prospect.
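As a rough numerical aside (my own, using the common approximation that the overall SERS enhancement scales as the fourth power of the local-field amplification – the talk itself didn’t specify how the quoted factors are defined), the enhancements mentioned above would correspond to local fields amplified by factors of order tens to hundreds:

```python
# Rough illustration: if the overall SERS enhancement factor is taken to scale
# as |E/E0|**4 (a common approximation, and an assumption here about how the
# quoted numbers are defined), the implied local-field amplifications are:
for enhancement in (1e7, 1e11):
    field_amplification = enhancement ** 0.25
    print(f"overall enhancement {enhancement:.0e} -> field amplified ~{field_amplification:.0f}x")
```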

On a larger scale, Cooper described systems to probe the response of single cells – his example involved using a single heart cell (a cardiomyocyte) to screen responses to potential heart drugs. This involved a pico-litre scale microchamber adjacent to an array of micron size thermocouples, which allow one to monitor the metabolism of the cell as it responds to a drug candidate. His final example was on the millimeter scale, though its sensors incorporated nanotechnology at some level. This was a wireless device incorporating an electrochemical blood sensor – the idea was that one would swallow this to screen for early signs of bowel cancer. Here’s an example where, obviously, smaller would be better, but how small does one need to go?