Optimism – and realism – about solar energy

10 days ago I was fortunate enough to attend the Winton Symposium in Cambridge (where I’m currently spending some time as a visiting researcher in the Optoelectronics Group at the Cavendish Laboratory). The subject of the symposium was Harvesting the Energy of the Sun, and it had a stellar cast of international speakers addressing different aspects of the subject. This sums up some of what I learnt from the day about the future potential for solar energy, together with some of my own reflections.

The growth of solar power – and the fall in its cost – over the last decade has been spectacular. The world currently produces about 10 billion standard 5 W silicon solar cells a year, at a cost of €1.29 each, and the unsubsidised cost of solar power in the sunnier parts of the world is heading down towards 5 cents a kWh. At current capacity and demand levels, we should see 1 TW of solar power capacity in the world by 2030, compared with current estimates that installed capacity will reach about 300 GW at the end of this year (70 GW of it added in 2016).

But that’s not enough. The Paris Agreement – ratified so far by major emitters such as the USA, China, India, France and Germany (with the UK promising to ratify by the end of the year – but President-Elect Trump threatening to take the USA out) – commits countries to taking action to keep the average global temperature rise from pre-industrial times below 2 °C. The average temperature has already risen by one degree or so, and is currently increasing at about 0.17 °C a decade. The point stressed by Sir David King was that it isn’t enough just to look at the consequences of the central prediction, worrying though they might be – one needs to insure against the very real risks of more extreme outcomes. What concerns governments in India and China, for example, is the risk of three successive failed rice harvests.

To achieve the Paris targets, the installed solar capacity we’re going to need by 2030 is estimated to be in the range 8-10 TW nominal; this would require a 22-25% annual growth rate in manufacturing capacity.
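
As a rough sanity check of that arithmetic, here is a minimal sketch (my own toy sum, not a calculation presented at the symposium) that compounds the 2016 figures quoted above – about 300 GW installed, 70 GW added in the year – at a constant growth rate in annual installations:

```python
# A rough sanity check of the growth arithmetic above (my own toy sum, not a
# calculation presented at the symposium): compound the ~70 GW installed in 2016
# at a constant growth rate and see what cumulative capacity that gives by 2030.
# Panel retirements are ignored and everything manufactured is assumed installed.

def cumulative_capacity_tw(growth_rate, base_installed_gw=300, annual_add_gw=70,
                           start_year=2016, end_year=2030):
    total, additions = base_installed_gw, annual_add_gw
    for _ in range(start_year + 1, end_year + 1):
        additions *= 1 + growth_rate   # manufacturing output compounds each year
        total += additions             # and adds to the installed base
    return total / 1000                # GW -> TW

for r in (0.22, 0.25):
    print(f"{r:.0%} annual growth -> ~{cumulative_capacity_tw(r):.1f} TW installed by 2030")
```

With these crude assumptions, 22-25% growth compounds to roughly 6-8 TW of cumulative capacity by 2030 – the same ballpark as the 8-10 TW target, bearing in mind that the real estimates will differ in their starting figures and accounting.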

Why isn’t the UK the centre of the organic electronics industry?

In February 1989, Jeremy Burroughes, at that time a postdoc in the research group of Richard Friend and Donal Bradley at Cambridge, noticed that a diode structure he’d made from the semiconducting polymer PPV glowed when a current was passed through it. This wasn’t the first time that interesting optoelectronic properties had been observed in an organic semiconductor, but it’s fair to say that it was the resulting Nature paper, which has now been cited more than 8000 times, that really launched the field of organic electronics. The company that they founded to exploit this discovery, Cambridge Display Technology, was floated on the NASDAQ in 2004 at a valuation of $230 million. Now organic electronics is becoming mainstream; a popular mobile phone, the Samsung Galaxy S, has an organic light emitting diode screen, and further mass market products are expected in the next few years. But these products will be made in factories in Japan, Korea and Taiwan; Cambridge Display Technology is now a wholly owned subsidiary of the Japanese chemical company Sumitomo. How is it that, despite an apparently insurmountable academic lead in the field and a successful history of university spin-outs, the UK is likely to end up at best a peripheral player in this new industry?

Can plastic solar cells deliver?

The promise of polymer solar cells is that they will be cheap enough, and produced on a large enough scale, to transform our energy economy, unlocking the sun’s potential to meet all our energy needs in a sustainable way. But there’s a long way to go from a device in a laboratory, or even a company’s demonstrator product, to an economically viable product that can be made at scale. How big is that gap, are there insuperable obstacles standing in the way, and if not, how long might it take us to get there? Some answers to these questions are now beginning to emerge, and I’m cautiously optimistic. Although most attention is focused on efficiency, the biggest outstanding technical issue is to prolong the lifetime of the solar cells; beyond that, before plastic solar cells can be introduced on a mass scale, it’s going to be necessary to find a substitute for indium tin oxide as the transparent electrode. If we can do both of these things, the way is open for a real transformation of our energy system.

The obstacles are both technical and economic – but of course it doesn’t make sense to consider these separately, since it is technical improvements that will make the economics look better. A recent study starts to break down the likely costs and identify where we need to find improvements. The paper – Economic assessment of solar electricity production from organic-based photovoltaic modules in a domestic environment, by Brian Azzopardi, from Manchester University, with coworkers from Imperial College, Cartagena and Risø (Energy and Environmental Science 4 p3741, 2011) – breaks down an estimate of the cost of power generated by a polymer photovoltaic fabricated on a plastic substrate by a manufacturing process already at the prototype stage. This process uses the most common combination of materials – the polymer P3HT together with the fullerene derivative PCBM. The so-called “levelised power cost” – i.e. the cost per unit of electricity, including all capital costs, averaged over the lifetime of the plant – comes in between €0.19 and €0.50 per kWh for 7% efficient solar cells with a lifetime of 5 years, assuming southern European sunshine. This is, of course, too expensive compared both to alternatives like fossil fuels and nuclear energy and to conventional solar cells, though the gap with conventional solar isn’t massive. But the technology is still immature, so what improvements in performance and reductions in cost is it reasonable to expect?
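
To make the idea of a levelised cost concrete, here is a minimal sketch of how such a figure is put together. The structure of the calculation is standard, but the parameter values below are round illustrative guesses of my own, not the inputs used by Azzopardi and coworkers:

```python
# Minimal levelised-cost-of-electricity (LCOE) sketch. The structure of the
# calculation is standard; the numbers below are illustrative guesses, NOT the
# inputs used in the Azzopardi et al. study.

def lcoe(capex_per_m2=75.0,         # module + installation cost, EUR per m^2 (guess)
         opex_per_m2_yr=1.0,        # maintenance, EUR per m^2 per year (guess)
         efficiency=0.07,           # 7% module efficiency (as in the paper)
         insolation_kwh_m2_yr=1700, # southern-European sunshine, kWh/m^2/year (approx.)
         performance_ratio=0.8,     # system losses (guess)
         lifetime_yr=5,             # 5-year module lifetime (as in the paper)
         discount_rate=0.07):       # cost of capital (guess)
    """Cost per kWh: discounted lifetime costs / discounted lifetime output."""
    annual_kwh = efficiency * insolation_kwh_m2_yr * performance_ratio
    costs = capex_per_m2 + sum(opex_per_m2_yr / (1 + discount_rate) ** t
                               for t in range(1, lifetime_yr + 1))
    energy = sum(annual_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_yr + 1))
    return costs / energy

print(f"LCOE with these inputs: ~{lcoe():.2f} EUR per kWh")
```

With these guesses the answer lands near the bottom of the €0.19-0.50 range quoted in the paper; the point is that the result is dominated by the up-front module cost divided by however much energy the module delivers before it dies, which is why efficiency and lifetime matter so much in what follows.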

The two key technical parameters are efficiency and lifetime. Most research effort so far has concentrated on improving efficiencies – values greater than 4% are now routine for the P3HT/PCBM system; a newer system, involving a different fullerene derivative, PC70BM, blended with the polymer PCDTBT (I find even the acronym difficult to remember, but for the record the full name is poly[9’-heptadecanyl-2,7-carbazole-alt-5,5-(4’,7’-di-2-thienyl-2’,1’,3’-benzothiadiazole)]), achieves efficiencies greater than 6%. These values will improve, through further tweaking of the materials and processes. Azzopardi’s analysis suggests that efficiencies in the range 7-10% may already be looking viable… as long as the cells last long enough. This is potentially a problem – it’s been understood for a while that the lifetime of polymer solar cells may well prove to be their undoing. The active materials in polymer solar cells – conjugated polymer semiconductors – are essentially overgrown dyes, and we all know that dyes tend to bleach in the sun. Five years seems to be a minimum lifetime to make this a viable technology, but up to now many laboratory devices have struggled to last more than a few days. Another recent paper, however, gives grounds for more optimism. This paper – High Efficiency Polymer Solar Cells with Long Operating Lifetimes (Advanced Energy Materials 1 p491, 2011), from the Stanford group of Michael McGehee – demonstrates a PCDTBT/PC70BM solar cell with a lifetime of nearly seven years. This doesn’t mean all our problems are solved, though – this device was encapsulated in glass, rather than printed on a flexible plastic sheet. Glass is much better than plastics at keeping harmful oxygen away from the active materials; to reproduce this lifetime in an all-plastic device will need more work to improve the oxygen barrier properties of the module.

How does the cost of a plastic solar cell break down, and what reductions is it realistic to expect? The analysis by Azzopardi and coworkers shows that the cost of the system is dominated by the cost of the modules, and the cost of the modules is dominated by the cost of the materials. The other elements of the system cost will probably continue to decrease anyway, as much of this is shared in common with other types of solar cells. What we don’t know yet is the extent to which the special advantages of plastic solar cells over conventional ones – their lightness and flexibility – can reduce the installation costs. As we’ve been expecting, the cheapness of processing plastic solar cells means that manufacturing costs – including the capital costs of the equipment to make them – are small compared to the cost of materials. These materials make up 60-80% of the cost of the modules. Part of this is simply the cost of the semiconducting polymers; this will certainly come down with time as experience grows at making them at scale. But the surprise for me is the importance of the cost of the substrate, or more accurately the cost of the thin, transparent conducting electrode which coats the substrate – this represents up to half of the total cost of materials. This is going to be a real barrier to the large-scale uptake of this technology.
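
Taking materials as 70% of the module cost and the coated substrate as half of the materials bill, a purely illustrative bit of arithmetic shows why this is the headline number:

```python
# Implied share of the ITO-coated substrate in total module cost, using the
# figures quoted above (purely illustrative arithmetic, not from the paper).
materials_share_of_module = 0.7   # materials are 60-80% of module cost
electrode_share_of_materials = 0.5  # substrate + transparent electrode: up to half of materials
print(f"ITO-coated substrate ~ {materials_share_of_module * electrode_share_of_materials:.0%} of module cost")
```

On those numbers, something like a third of the cost of the whole module is the electrode you look through.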

The transparent electrode currently used is a thin layer of indium tin oxide – ITO. This is a very widely used material in touch screens and liquid crystal displays, and it currently represents the major use of the metal indium, which is rare and expensive. So unless a replacement for ITO can be found, it’s the cost and availability of this material that’s going to limit the use of plastic solar cells. Transparency and electrical conductivity don’t usually go together, so it’s not straightforward to find a substitute. Carbon nanotubes, and more recently graphene, have been suggested, but currently they’re neither good enough by themselves, nor is there a process to make them cheaply at scale (a good summary of the current contenders can be found in Rational Design of Hybrid Graphene Films for High-Performance Transparent Electrodes by Zhu et al, ACS Nano 5 p6472, 2011). So, to make this technology work, much more effort needs to be put into finding a substitute for ITO.

The next twenty-five years

The Observer ran a feature today collecting predictions for the next twenty-five years from commentators on politics, science, technology and culture. I contributed a short piece on nanotechnology: I’m not expecting a singularity. Here’s what I wrote:

Twenty years ago Don Eigler, a scientist working for IBM in California, wrote out the logo of his employer in letters made of individual atoms. This feat was a graphic symbol of the potential of the new field of nanotechnology, which promises to rebuild matter atom by atom, molecule by molecule, and to give us unprecedented power over the material world.

Some, like the futurist Ray Kurzweil, predict that nanotechnology will lead to a revolution, allowing us to make any kind of product virtually for free, to have computers so powerful that they will surpass human intelligence, and to create a new kind of medicine, operating at the sub-cellular level, that will allow us to abolish ageing and death.

I don’t think Kurzweil’s “technological singularity” – a dream of scientific transcendence which echoes older visions of religious apocalypse – will happen. Some stubborn physics stands between us and “the rapture of the nerds”. But nanotechnology will lead to some genuinely transformative new applications.

New ways of making solar cells very cheaply on a very large scale offer us the best hope we have for providing low-carbon energy on a big enough scale to satisfy the needs of a growing world population aspiring to the prosperity we’re used to in the developed world. We’ll learn more about intervening in our biology at the sub-cellular level, and this nano-medicine will give us new hope of overcoming really difficult and intractable diseases, like Alzheimer’s, that will increasingly afflict our population as it ages. The information technology that drives your mobile phone or laptop is already operating at the nanoscale. Another twenty five years of development will lead us to a new world of cheap and ubiquitous computing, in which privacy will be a quaint obsession of our grandparents.

Nanotechnology is a different type of science, respecting none of the conventional boundaries between disciplines, and unashamedly focused on applications rather than fundamental understanding. Given the huge resources being directed towards nanotechnology in China and its neighbours, this may be the first major technology of the modern era that is predominantly developed outside the USA and Europe.

Nanotubes for flexible electronics

The glamorous applications for carbon nanotubes in electronics focus on the use of individual nanotubes for nanoscale electronics – for example, this single nanotube integrated circuit reported by IBM a couple of years ago. But more immediate applications may come from using thin layers of nanotubes on flexible substrates as conductors or semiconductors – these could be used for thin film transistor arrays in applications like electronic paper. A couple of recent papers report progress in this direction.

From the group of John Rogers, at the University of Illinois, comes a Nature paper reporting integrated circuits on flexible substrates based on nanotubes. The paper (Editor’s summary in Nature, subscription required for full article), whose first author is Qing Cao, describes the manufacture of an array of 100 transistors on a 50 µm plastic substrate. The transistors aren’t that small – their dimensions are in the micron range – so this is the sort of electronics that would be used to drive a display, rather than as a CPU or memory. But the performance of the transistors looks like it could be competitive with rival technologies for flexible displays, such as semiconducting polymers.

The difficulty with using carbon nanotubes for electronics in this way is that the usual syntheses produce a mixture of different types of nanotubes, some conducting and some semiconducting. Since about a third of the nanotubes have metallic conductivity, a simple mat of nanotubes won’t behave like a semiconductor, because the metallic nanotubes will provide a short-circuit. Rogers’s group get round this problem in an effective, if not terribly elegant, way. They cut the film into strips with grooves, and for an appropriate combination of strip width and nanotube length they reduce the probability of finding a continuous metallic path between the electrodes to a very low level.
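
The logic here is a percolation argument, and it can be illustrated with a toy Monte Carlo model – my own sketch, not the modelling in the paper. Scatter sticks at random in a channel between two electrodes, label a third of them metallic, and ask how often the metallic sticks alone connect the electrodes as the strip width is varied (lengths are in arbitrary units, roughly micrometres, and the parameters are chosen only to show the trend):

```python
import math
import random

def segments_cross(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4 (collinear touching ignored)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def metallic_short(strip_width, channel_length=40.0, stick_length=10.0,
                   stick_density=0.25, metallic_fraction=1/3):
    """Deposit one random stick network in a strip; return True if the metallic
    sticks alone connect the source (y <= 0) to the drain (y >= channel_length)."""
    n_total = int(stick_density * strip_width * channel_length)
    metallic = []
    for _ in range(n_total):
        if random.random() >= metallic_fraction:
            continue                                   # semiconducting tubes can't short
        x, y = random.uniform(0, strip_width), random.uniform(0, channel_length)
        a = random.uniform(0, math.pi)
        dx, dy = 0.5 * stick_length * math.cos(a), 0.5 * stick_length * math.sin(a)
        metallic.append(((x - dx, y - dy), (x + dx, y + dy)))

    # union-find over the metallic sticks plus two virtual electrode nodes
    SOURCE, DRAIN = len(metallic), len(metallic) + 1
    parent = list(range(len(metallic) + 2))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]              # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    for i, (p, q) in enumerate(metallic):
        if min(p[1], q[1]) <= 0:
            union(i, SOURCE)
        if max(p[1], q[1]) >= channel_length:
            union(i, DRAIN)
        for j in range(i):
            if segments_cross(p, q, *metallic[j]):
                union(i, j)
    return find(SOURCE) == find(DRAIN)

trials = 100
for width in (5, 10, 20, 40):
    shorts = sum(metallic_short(width) for _ in range(trials))
    print(f"strip width {width:>2d} (stick length 10): metallic short in {shorts/trials:.0%} of trials")
```

The narrower the strip relative to the nanotube length, the rarer an all-metallic spanning path becomes; the full network of metallic plus semiconducting tubes, being roughly three times denser, stays connected, which is exactly the behaviour the grooves exploit.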

Another paper, published earlier this month in Science, offers what is potentially a much neater solution to this problem. The paper, “Self-Sorted, Aligned Nanotube Networks for Thin-Film Transistors” (abstract, subscription required for full article), has as its first author Melburne LeMieux, a postdoc in the group of Zhenan Bao at Stanford. They make their nanotube networks by spin-coating from solution. Spin-coating is a simple and very widely used technique for making thin films, which involves depositing a solution on a substrate spinning at a few thousand revolutions per minute. Most of the solution is flung off the spinning disk, leaving a very thin, uniform film, from which the solvent evaporates to leave the network of nanotubes. This simple procedure has two very useful side-effects. Firstly, the flow in the solvent film has the effect of aligning the nanotubes, with obvious potential benefits for their electronic properties. Even more strikingly, the spin-coating process seems to provide an easy solution to the problem of sorting the metallic and semiconducting nanotubes. It seems that one can prepare the surface so that it is selectively sticky for one or the other type of nanotube: a surface presenting a monolayer of phenyl groups preferentially attracts the metallic nanotubes, while an amine-coated surface yields nanotube networks with very good semiconducting behaviour, from which high performance transistors can be made.

A methanol economy?

Transport accounts for between a quarter and a third of primary energy use in developed economies, and currently this comes almost entirely from liquid hydrocarbon fuels. Anticipating a world with much more expensive oil and a need to dramatically reduce carbon dioxide emissions, many people have been promoting the idea of a hydrogen economy, in which hydrogen, generated in ways that minimise CO2 emissions, is used as a carrier of energy for transportation purposes. Despite its superficial attractiveness, and high profile political support, the hydrogen economy has many barriers to overcome before it becomes technically and economically feasible. Perhaps most pressing of these difficulties is the question of how this light, low energy density gas can be stored and transported. An entirely new pipeline infrastructure would be needed to move the hydrogen from the factories where it is made to filling stations, and, perhaps even more pressingly, new technologies for storing hydrogen in vehicles will need to be developed. Early hopes that nanotechnology would provide new and cost-effective solutions to these problems – for example, using carbon nanotubes to store hydrogen – don’t seem to be bearing fruit so far. Since using a gas as an energy carrier causes such problems, why don’t we stick with a flammable liquid? One very attractive candidate is methanol, whose benefits have been enthusiastically promoted by George Olah, a Nobel prize winning chemist from the University of Southern California, whose book Beyond Oil and Gas: The Methanol Economy describes his ideas in some technical detail.
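
Some approximate numbers make the storage problem vivid. The densities and heating values below are round textbook-style figures of my own choosing (not taken from Olah’s book), good to perhaps ten per cent or so, but they show how far even highly compressed hydrogen lags behind a liquid fuel on a volume basis:

```python
# Approximate volumetric energy densities (lower heating value x density).
# Round textbook-style figures, good to perhaps +/-10%.
fuels = {
    # name:              (density kg/L, LHV MJ/kg)
    "hydrogen, 700 bar":  (0.042, 120.0),
    "hydrogen, liquid":   (0.071, 120.0),
    "methanol":           (0.792, 19.9),
    "petrol":             (0.745, 43.4),
}
for name, (rho, lhv) in fuels.items():
    print(f"{name:18s}: {rho * lhv:5.1f} MJ per litre")
```

Methanol carries roughly half the energy of petrol per litre, so a tank twice the size, but about twice that of liquid hydrogen and three times that of 700-bar compressed gas – and it stays liquid at room temperature and pressure.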

The advantage of methanol as a fuel is that it is entirely compatible with the existing infrastructure for distributing and using gasoline; pipes, pumps and tanks would simply need some gaskets changed to switch over to the new fuel. Methanol is an excellent fuel for internal combustion engines; even the most hardened petrol-head should be convinced by the performance figures of a recently launched methanol powered Lotus Exige. However, in the future, greater fuel efficiency might be possible using direct methanol fuel cells if that technology can be improved.

Currently methanol is made from natural gas, but in principle it should be possible to make it economically by reacting carbon dioxide with hydrogen. Given a clean source of energy to make hydrogen (Olah is an evangelist for nuclear power, but if the scaling problems for solar energy were solved that would work too), one could recycle the carbon dioxide from fossil fuel power stations, in effect getting one more pass of energy out of it before releasing it into the atmosphere. Ultimately, it should be possible to extract carbon dioxide directly from the atmosphere, achieving in this way an almost completely carbon-neutral energy cycle. In addition to its use as a transportation fuel, it is also possible to use methanol as a feedstock for the petrochemical industry. In this way we could, in effect, convert atmospheric carbon dioxide into plastic.
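
A quick back-of-envelope on the chemistry (my own illustrative arithmetic, not a figure from Olah’s book) shows what such recycling involves per kilogram of fuel:

```python
# Back-of-envelope stoichiometry for CO2 hydrogenation to methanol
# (CO2 + 3 H2 -> CH3OH + H2O). Approximate molar masses and heating values;
# an illustrative calculation, not taken from Olah's book.

M_CO2, M_H2, M_MEOH = 44.01, 2.016, 32.04   # g/mol
LHV_MEOH, LHV_H2 = 19.9, 120.0              # MJ/kg, approximate lower heating values

mol_meoh = 1000 / M_MEOH                    # moles of methanol per kg
co2_kg = mol_meoh * M_CO2 / 1000            # CO2 consumed per kg of methanol
h2_kg = mol_meoh * 3 * M_H2 / 1000          # hydrogen required per kg of methanol

print(f"per kg of methanol: {co2_kg:.2f} kg CO2 recycled, {h2_kg:.2f} kg H2 needed")
print(f"energy retained: {LHV_MEOH / (h2_kg * LHV_H2):.0%} of the hydrogen's heating value")
```

In other words, most of the energy put into making the hydrogen ends up stored in the methanol, with the rest lost to the water by-product and (not counted here) to the energy needed to run the process itself.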

Invisibility cloaks and perfect lenses – the promise of optical metamaterials

The idea of an invisibility cloak – a material which would divert light undetectably around an object – captured the imagination of the media a couple of years ago. For visible light, the possibility of an invisibility cloak remains a prediction, but it graphically illustrates the potential power of a line of research initiated a few years ago by the theoretical physicist Sir John Pendry of Imperial College, London. Pendry realised that building composites with carefully designed internal arrangements of conductors and dielectrics would allow one to make what are in effect new materials with very unusual optical properties. The most spectacular of these new metamaterials would have a negative refractive index. In addition to making an invisibility cloak possible, one could in principle use negative refractive index metamaterials to make a perfect lens, allowing one to use ordinary light to image structures much smaller than the limit of a few hundred nanometres currently set by the wavelength of light for ordinary optical microscopy. Metamaterials have been made which operate in the microwave range of the electromagnetic spectrum, but to make an optical metamaterial one needs to be able to fabricate rather intricate structures at the nanoscale. A recent paper in Nature Materials (abstract, subscription needed for full article) describes exciting and significant progress towards this goal. The paper, whose lead author is Na Liu, a student in the group of Harald Giessen at the University of Stuttgart, describes the fabrication of an optical metamaterial consisting of a regular, three-dimensional array of horseshoe-shaped, sub-micron pieces of gold embedded in a transparent polymer – see the electron micrograph below. This metamaterial doesn’t yet have a negative refractive index, but it shows that a similar structure could have this remarkable property.

An optical metamaterial consisting of split rings of gold in a polymer matrix. Electron micrograph from Harald Giessen’s group at 4. Physikalisches Institut, Universität Stuttgart.

To get a feel for how these things work, it’s worth recalling what happens when light goes through an ordinary material. Light, of course, consists of electromagnetic waves, so as a light wave passes a point in space there’s a rapidly alternating electric field, and any charged particle will feel a force from this alternating field. This leads to something of a paradox – when light passes through a transparent material, like glass or a clear crystal, it seems at first that the light isn’t interacting very much with the material. But since the material is full of electrons and positive nuclei, this can’t be right – all the charged particles in the material must be being wiggled around, and as they are wiggled around they in turn must behave like little aerials and emit electromagnetic radiation themselves. The solution to the paradox comes when one realises that all these waves emitted by the wiggled electrons interfere with each other; the net effect is a wave propagating forward in the same direction as the light that’s travelling through the material, only with a somewhat different velocity. It’s the ratio of the velocity the wave would have in free space to this effective velocity in the material that defines the refractive index.

Now, in a structure like the one in the picture, we have sub-micron shapes of a metal, which is an electrical conductor. When such a shape sees the oscillating electric field of an incident light wave, the free electrons in the metal slosh around in a collective oscillation called a plasmon mode. These plasmons generate both electric and magnetic fields, whose behaviour depends very sensitively on the size and shape of the object in which the electrons are sloshing around (to be strictly accurate, the plasmons are confined to the region near the surface of the object; it’s the geometry of the surface that matters). If you design the geometry right, you can find a frequency at which both the magnetic and electric fields generated by the motion of the electrons are out of phase with the driving fields of the light wave that excites the plasmons – this is the condition for the negative refractive index which is needed for perfect lenses and other exciting possibilities.
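
For the record, the textbook way of stating that condition (standard results, not specific to the Liu et al. paper) is in terms of the effective permittivity and permeability of the metamaterial:

```latex
% Textbook statement of the negative-index condition (standard results, not
% taken from the Liu et al. paper). The free electrons in the metal give a
% Drude-like effective permittivity, negative below the plasma frequency:
\varepsilon(\omega) \simeq 1 - \frac{\omega_p^2}{\omega^2 + i\gamma\omega}
% while the split-ring-like elements contribute a resonant effective permeability:
\mu(\omega) \simeq 1 - \frac{F\,\omega^2}{\omega^2 - \omega_0^2 + i\Gamma\omega}
% Where both real parts are simultaneously negative, the refractive index takes
% the negative branch of the square root (written here in the lossless limit):
n(\omega) = -\sqrt{\varepsilon(\omega)\,\mu(\omega)}
\quad\text{when } \operatorname{Re}\varepsilon < 0 \ \text{and}\ \operatorname{Re}\mu < 0 .
```

The trick of metamaterial design is to engineer the geometry so that these two resonances overlap in the same frequency window.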

The metamaterial shown in the diagram has a perfectly periodic pattern, and this is what’s needed if you want a uniform plane wave arriving at the material to excite another uniform plane wave. But, in principle, you should be able to design a metamaterial that isn’t periodic, to direct and concentrate light any way you like on length scales well below the wavelength of light. Some of the possibilities this might lead to were discussed in an article in Science last year, Circuits with Light at Nanoscales: Optical Nanocircuits Inspired by Metamaterials (abstract, subscription required for full article), by Nader Engheta at the University of Pennsylvania. If we can learn how to make precisely specified, non-periodic arrays of shaped metallic, dielectric and semiconducting elements, we should be able to direct light waves where we want them to go on the nanoscale – well below light’s wavelength. This might allow us to store information, to process information in all-optical computers, to interact with electrons in structures like quantum dots for quantum computing applications, to image structures with light down to the molecular level, and to detect individual molecules with great sensitivity. I’ve said this before, but I’m more and more convinced that this is a potential killer application for advanced nanotechnology – if one really could place atoms in arbitrary, pre-prescribed positions with nanoscale accuracy, this is what one could do with the resulting materials.

Delivering genes

Gene therapy holds out the promise of correcting a number of diseases whose origin lies in the deficiency of a particular gene – given our growing knowledge of the human genome, and our ability to synthesise arbitrary sequences of DNA, one might think that the introduction of new genetic material into cells to remedy the effects of abnormal genes would be straightforward. This isn’t so. DNA is a relatively delicate molecule, and organisms have evolved efficient mechanisms for finding and eliminating foreign DNA. Viruses, on the other hand, whose entire modus operandi is to introduce foreign nucleic acids into cells, have evolved effective ways of packaging their payloads of DNA or RNA and delivering them into cells. One approach to gene therapy co-opts viruses to deliver the new genetic material, though this sometimes has unpredicted and undesirable side-effects. So an effective, non-viral method of wrapping up DNA, introducing it into target cells and releasing it would be very desirable. My colleagues at Sheffield University, led by Beppe Battaglia, have demonstrated an effective and elegant way of introducing DNA into cells, in work recently reported in the journal Advanced Materials (subscription required for full paper).

The technique is based on the use of polymersomes, which I’ve described here before. Polymersomes are bags formed when detergent-like polymer molecules self-assemble into a membrane which folds round on itself to form a closed surface. They are analogous to the cell membranes of biology, which are formed from soap-like molecules called phospholipids, and to the liposomes that can be made in the laboratory from the same materials. Liposomes are already used to wrap up and deliver molecules in some commercial applications, including some drug delivery systems and some expensive cosmetics. They’ve also been used in the laboratory to deliver DNA into cells, though they aren’t ideal for this purpose, as they aren’t very robust. Polymersomes offer a great deal more flexibility to design in the properties one needs, and this flexibility is exploited to the full in Battaglia’s experiments.

To make a polymersome, one needs a block copolymer – a polymer with two or three chemically distinct sections joined together. One of these blocks needs to be hydrophobic, and one needs to be hydrophilic. The block copolymers used here, developed and synthesised in the group of Sheffield chemist Steve Armes, have two very nice features. The hydrophilic section is composed of poly(2-(methacryloyloxy)ethyl phosphorylcholine) – a synthetic polymer that presents the same chemistry to the adjoining solution as a naturally occurring phospholipid in a cell membrane. This means that polymersomes made from this material are able to circulate undetected within the body for longer than those made from other water-soluble polymers. The hydrophobic block is poly(2-(diisopropylamino)ethyl methacrylate). This is a weak base, so its state of ionisation depends on the acidity of the solution: in a basic solution it is un-ionised, and in this state it is strongly hydrophobic, while in an acidic solution it becomes charged, and in this state it is much more soluble in water. This means that polymersomes made from this material will be stable in neutral or basic conditions, but will fall apart in acid. Conversely, if one has the polymers in an acidic solution, together with the DNA one wants to deliver, and then neutralises the solution, polymersomes will spontaneously form, encapsulating the DNA.
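
A simple Henderson-Hasselbalch estimate shows how sharply the charge state of that pH-sensitive block switches between physiological and endosomal conditions. This is my own illustrative sketch; the effective pKa of the tertiary amine block is taken as a round assumed value of about 6.3, not a number from the paper:

```python
# How the charge state of the pH-sensitive block switches between physiological
# and endosomal pH. Simple Henderson-Hasselbalch estimate; the effective pKa of
# the amine block is taken as ~6.3, an assumed round value for illustration.

def fraction_protonated(pH, pKa=6.3):
    """Fraction of amine groups carrying a charge at a given pH."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for label, pH in [("blood / cell exterior", 7.4),
                  ("early endosome", 6.0),
                  ("late endosome", 5.0)]:
    print(f"{label:22s} pH {pH}: {fraction_protonated(pH):.0%} of amines protonated")
```

At pH 7.4 the block is largely uncharged and hydrophobic, holding the membrane together; a drop of one or two pH units flips most of the amines to their charged, water-soluble state, which is the switch the delivery mechanism relies on.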

The way these polymersomes work to introduce DNA into cells is sketched in the diagram below. On encountering a cell, the polymersome triggers the process of endocytosis, whereby the cell engulfs the polymersome in a little piece of cell membrane that is pinched off inside the cell. It turns out that the solution inside these endosomes is significantly more acidic than the surroundings, and this triggers the polymersome to fall apart, releasing its DNA. This, in turn, generates an osmotic pressure sufficient to burst open the endosome, releasing the DNA into the cell interior, where it is free to make its way to the nucleus.

The test of the approach is to see whether one can introduce a section of DNA into a cell and then demonstrate how effectively the corresponding gene is expressed. The DNA used in these experiments was the gene that codes for a protein that fluoresces – the famous green fluorescent protein, GFP, originally obtained from certain jellyfish – making it easy to detect whether the protein coded for by the introduced gene has actually been made. In experiments using cultured human skin cells, the fraction of cells in which the new gene was introduced was very high, while few toxic effects were observed. This contrasts with a control experiment using an existing, commercially available gene delivery system, which was both less effective at introducing genes and actually killed a significant fraction of the cells.

Polymersome endocytosis
A switchable polymersome as a vehicle for gene delivery. Beppe Battaglia, University of Sheffield.

Nanotechnology in Korea

One of my engagements in a slightly frantic period last week was to go to a UK-Korea meeting on collaboration in nanotechnology. It included some talks which gave a valuable insight into how the future of nanotechnology is seen in Korea. Nanotechnology is clearly seen as central to the country’s science and technology programme; according to some slightly out-of-date figures I have to hand about government spending on nanotechnology, Korea ranks 5th, after the USA, Japan, Germany and France, and somewhat ahead of the UK. Dr Hanjo Lim, of the Korea Science and Engineering Foundation, gave a particularly useful overview.

He starts out by identifying the different ways in which going small helps. Nanotechnology exploits a confluence of three types of benefits. Nanomaterials exploit surface matter, where the benefits arise from their high surface-to-volume ratio, most obviously in catalysis. They exploit quantum matter – the size-dependent quantum effects that are so important for band-gap engineering and quantum dots. And they can exploit soft matter, which is so important for the bio-nano interface. As far as Korea is concerned, as a small country with a well-developed industrial base, he sees four important areas. Applications in information and communication technology will directly affect the strong position Korea has in the semiconductor and display industries, as well as having an impact on automobiles. Robots and ubiquitous devices play to Korea’s general comparative advantage in manufacturing, but applications in nano-foods and medical science are relatively weak in Korea at the moment. Finally, the environmentally important applications in fuel and solar cells and in air and water treatment will be of growing importance in Korea, as everywhere else.

Korea ranks 4th or 5th in the world in terms of nano-patents; the plan is, up to 2010, to expand existing strength in nanotechnology and industrialise it by developing technology specific to applications. Beyond that, the emphasis will be on systems-level integration and commercialisation of those developments. Clearly, in electronics we are already in the nano era. Korea has a dominant position in flash memory, where Hwang’s law – that memory density doubles every year – represents a more aggressive scaling than Moore’s law. To maintain this will require perhaps carbon nanotubes or silicon nanowires. Lim finds nanotubes very attractive, but given the need to control their chirality and position his prediction is that commercialisation is still more than 10 years away. An area that he thinks will grow in importance is the integration of optical interconnects in electronics. This, in his view, will be driven by the speed and heat problems in CPUs that arise from metal interconnects – he reminds us that a typical CPU contains 10 km of electrical wiring, so it’s no wonder that heat generation is a big problem, and that Google’s data centres come equipped with five-storey cooling towers. Nanophotonics will enable the integration of photonic components within silicon multi-chip CPUs – but the problem that silicon is not good for lasers will have to be overcome; either off-chip lasers will have to be used, or silicon laser diodes developed. His prognosis, recognising that we have box-to-box optical interconnects now and that board-to-board interconnects are coming, is that we will have chip-to-chip interconnects on the 1-10 cm scale by 2010, with intra-chip interconnects by 2010-2015.
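
To see just how much more aggressive that memory scaling is, it’s worth comparing what the two doubling periods compound to over a decade:

```python
# What the two doubling periods mentioned above compound to over a decade:
# Hwang's law (memory density doubles every year) versus Moore's law
# (density doubles roughly every two years).
years = 10
print(f"Hwang's law  (doubling every year):      ~{2 ** years:,} x in {years} years")
print(f"Moore's law  (doubling every two years): ~{2 ** (years // 2):,} x in {years} years")
```

A factor of a thousand rather than a factor of thirty over ten years is the difference between the two, which is why sustaining Hwang’s law is such a demanding target.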

Anyone interested in more general questions of the way the Korean innovation system is developing will find much to interest them in a recent Demos pamphlet: Korea: Mass innovation comes of age. Meanwhile, I’ll be soon reporting on nanotechnology in another part of Asia; I’m writing this from Bangalore/Bengalooru in India, where I will be talking tomorrow at Bangalore Nano 2007.

Less than Moore?

Some years ago, the once-admired BBC science documentary slot Horizon ran a programme on nanotechnology. This was preposterous in many ways, but one sequence stands out in my mind. Michio Kaku appeared in front of scenes of rioting and mayhem, opining that “the end of Moore’s Law is perhaps the single greatest economic threat to modern society, and unless we deal with it we could be facing economic ruin.” Moore’s law, of course, is the observation, or rather the self-fulfilling prophecy, that the number of transistors on an integrated circuit doubles about every two years, with corresponding exponential growth in computing power.

Gordon Moore himself observes, in a presentation linked from the Intel site, that “No Exponential is Forever … but We can Delay Forever” (2 MB PDF). Many people have prematurely written off the semiconductor industry’s ability to maintain, over forty years, a record of delivering a nearly constant year-on-year percentage shrinkage of circuit features and increase in computing power. Nonetheless, there will be limits to how far the current CMOS-based technology can be pushed. These limits could arise from fundamental constraints of physics or materials science, from engineering problems like the difficulty of managing the increasingly problematic heat output of densely packed components, or simply from the economic difficulty of finding business models that can make money in the face of the exponentially increasing cost of plant. The question, then, is not if Moore’s law, for conventional CMOS devices, will run out, but when.

What has underpinned Moore’s law is the International Technology Roadmap for Semiconductors, a document which effectively choreographs the research and development required to deliver the continual incremental improvements on our current technology that are needed to keep Moore’s law on track. It’s a document that outlines the requirements for an increasingly demanding series of linked technological breakthroughs as time marches on; somewhere between 2015 and 2020 a crunch comes, with many problems for which solutions look very elusive.

Beyond this time, then, there are three possible outcomes. It could be that these problems, intractable though they look now, will indeed be solved, and Moore’s law will continue through further incremental developments. The history of the semiconductor industry tells us that this possibility should not be lightly dismissed; Moore’s law has already been written off a number of times, only for the creativity and ingenuity of engineers and scientists to overcome what seemed like insuperable problems. The second possibility is that a fundamentally new architecture, quite different from CMOS, will be developed, giving Moore’s law a new lease of life, or even permitting a new jump in computing power. This, of course, is the motivation for a number of fields of nanotechnology. Perhaps spintronics, quantum computing, molecular electronics, or new carbon-based electronics using graphene or nanotubes will be developed to the point of commercialisation in time to save Moore’s law. For the first time, the most recent version of the semiconductor roadmap did raise this possibility, so it deserves to be taken seriously. There is much interesting physics coming out of laboratories around the world in this area. But none of these developments is very close to making it out of the lab into a process or a product, so we need at least to consider the third possibility: that nothing arrives in time to save Moore’s law. What happens if, for the sake of argument, Moore’s law peters out in about ten years’ time, leaving us with computers perhaps one hundred times more powerful than the ones we have now, which then take more than a few years to become obsolete? Will our economies collapse and our streets fill with rioters?

It seems unlikely. Undoubtedly, innovation is a major driver of economic growth, and the relentless pace of innovation in the semiconductor industry has contributed greatly to the growth we’ve seen in the last twenty years. But it’s a mistake to suppose that innovation is synonymous with invention; new ways of using existing inventions can be as great a source of innovation as new inventions themselves. Nor should we expect that a period of relatively slow innovation in hardware would mean no developments in software; on the contrary, as raw computing power becomes less superabundant we’d expect ingenuity in making the most of the available power to be greatly rewarded. The economics of the industry would change dramatically, of course. As the development cycle lengthened, the time needed to amortise the huge capital cost of plant would stretch out and the business would become increasingly commoditised. Even as the performance of chips plateaued, their cost would drop, possibly quite precipitously; these would be the circumstances in which ubiquitous computing truly would take off.

For an analogy, one might want to look a century earlier. Vaclav Smil has argued, in his two-volume history of technology of the late nineteenth and twentieth centuries (Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact and Transforming the Twentieth Century: Technical Innovations and Their Consequences), that we should view the period 1867 – 1914 as a great technological saltation. Most of the significant inventions that underlay the technological achievements of the twentieth century – for example, electricity, the internal combustion engine, and powered flight – were made in this short period, with the rest of the twentieth century being dominated by the refinement and expansion of these inventions. Perhaps we will, in the future, look back on the period 1967 – 2014 in a similar way, as a huge spurt of invention in information and communication technology, followed by a long period in which the reach of these inventions continued to spread throughout the economy. Of course, this relatively benign scenario depends on our continued access to those things on which our industrial economy is truly existentially dependent – sources of cheap energy. Without that, we truly will see economic ruin.