This entry isn’t really about nanotechnology at all; instead it’s a ramble around some mathematics that I find interesting, that suddenly seems to have become all too relevant in the financial crisis we find ourselves in. I don’t claim great expertise in finance, so my apologies in advance for any inaccuracies.
Brownian motion – the continuous random jiggling of nanoscale objects and structures that’s a manifestation of the random nature of heat energy – is a central feature of the nanoscale world, and much of my writing about nanotechnology revolves around how we should do nanoscale engineering in a way that exploits Brownian motion, in the way biology does. In this weekend’s magazine reading, I was struck to see some of the familiar concepts from the mathematics of Brownian motion showing up, not in Nature, but in an article in The Economist’s special section on the future of finance – In Plato’s Cave – which explains how much of the financial mess we find ourselves in derives from the misapplication of these ideas. Here’s my attempt to explain, as simply as possible, the connection.
The motion of a particle undergoing Brownian motion can be described as a random walk, with a succession of steps in random directions. For every step taken in one direction, there’s an equal probability that the particle will go the same distance in the opposite direction, yet on average a particle doing a random walk does make some progress – the average distance gone grows as the square root of the number of steps. To see this for a simple situation, imagine that the particle is moving on a line, in one dimension, and either takes a step of one unit to the right (+1) or one unit to the left (-1), so we can track its progress just by writing down all the steps and adding them up, like this, for example: (+1 -1 +1 … -1). After N steps, on average the displacement (i.e. the distance gone, including a sign to indicate the direction) will be zero, but the average magnitude of the distance isn’t zero. To see this, we just look at the square root of the average value of the square of the displacement (since squaring the displacement takes away any negative signs). So we need to expand a product that looks something like (+1 -1 +1 … -1) x (+1 -1 +1 … -1). The first term of the first bracket times the first term of the second bracket is always +1 (since we either have +1 x +1 or -1 x -1), and the same is true for all the products of terms in the same position in both brackets. There are N of these, so this part of the product adds up to N. All the other terms in the expansion are one of (+1 x +1), (+1 x -1), (-1 x +1), (-1 x -1), and if the successive steps in the walk really are uncorrelated with each other these occur with equal probability, so that on average adding all these up gives us zero. So we find that the mean squared distance gone in N steps is N. Taking the square root of this to get a measure of the average distance gone in N steps, we find this (root mean squared) distance is the square root of N.
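A few lines of Python make this square-root-of-N law easy to check numerically. This is just my own illustrative simulation of the argument above (the function names are mine), averaging over many independent ±1 walks:

```python
import math
import random

def rms_distance(n_steps, n_walks, rng):
    """Root-mean-squared displacement over many independent +/-1 random walks."""
    total_sq = 0
    for _ in range(n_walks):
        # one walk: add up n_steps steps, each +1 or -1 with equal probability
        x = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total_sq += x * x
    return math.sqrt(total_sq / n_walks)

rng = random.Random(42)
for n in (100, 400, 1600):
    # the rms distance comes out close to sqrt(n): about 10, 20 and 40
    print(n, rms_distance(n, 2000, rng))
```

Quadrupling the number of steps only doubles the distance gone – the signature of diffusive, rather than ballistic, motion.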
The connection of these arguments to financial markets is simple. According to the efficient market hypothesis, at any given time all the information relevant to the price of some asset, like a share, is already implicit in its price. This implies that the movement of the price with time is essentially a random walk. So, if you need to calculate what a fair value is for, say, an option to buy this share in a year’s time, you can do this equipped with statistical arguments about the likely movement of a random walk, of the kind I’ve just outlined. It is a smartened-up version of the theory of random walks that I’ve just explained that is the basis of the Black-Scholes model for pricing options, which is what made the huge expansion of trading of complex financial derivatives possible – as the Economist article puts it: “The Black-Scholes options-pricing model was more than a piece of geeky mathematics. It was a manifesto, part of a revolution that put an end to the anti-intellectualism of American finance and transformed financial markets from bull rings into today’s quantitative powerhouses… The new model showed how to work out an option price from the known price-behaviour of a share and a bond… Confidence in pricing gave buyers and sellers the courage to pile into derivatives. The better that real prices correlate with the unknown option price, the more confidently you can take on any level of risk.”
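To make the connection concrete, here is the standard Black-Scholes closed-form price for a European call option – a minimal sketch in Python, with the share’s volatility playing the role of the random walk’s step size. The numbers in the example at the bottom are made up purely for illustration:

```python
import math

def norm_cdf(x):
    """Cumulative distribution function of a standard Gaussian, via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call option.

    s: current share price, k: strike price, t: years to expiry,
    r: risk-free interest rate, sigma: annualised volatility of the share.
    """
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# illustrative numbers: at-the-money call, one year out, 5% rates, 20% volatility
print(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.2))  # about 10.45
```

The Gaussian assumption is baked in through norm_cdf – which is exactly where the trouble discussed below comes from.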
Surely such a simple model can’t apply to a real market? Of course, we can develop more complex models that lift many of the approximations in the simplest theory, but it turns out that some of the key results of the theory remain. The most important is the basic √N scaling of the expected movement. For example, my simple derivation assumed all steps are the same size – but we know that on some days prices rise or fall a lot, and on others not so much. So what happens if we have a random walk with step sizes that are themselves random? It’s easy to convince oneself that the derivation stays the same, but instead of adding up N occurrences of (-1 x -1) or (+1 x +1) we have N occurrences of (a x a), where the probability that the step size has value a is given by p(a). So we end up with the simple modification that the mean squared distance gone is N times the mean of the square of the step size. This is a fairly simple modification which, crucially, doesn’t affect the √N scaling.
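The modified result is just as easy to check as the original. In this sketch (again my own illustration) the steps are drawn from a Gaussian of standard deviation 2, so the mean squared step size is 4, and the mean squared distance comes out close to N times that value:

```python
import random

def mean_sq_distance(n_steps, n_walks, step_draw, rng):
    """Mean squared displacement of walks whose steps come from step_draw."""
    total = 0.0
    for _ in range(n_walks):
        x = sum(step_draw(rng) for _ in range(n_steps))
        total += x * x
    return total / n_walks

# steps drawn from a Gaussian of standard deviation 2, so <a^2> = 4
step = lambda r: r.gauss(0.0, 2.0)
rng = random.Random(7)
# for N = 250 steps this comes out close to 250 * 4 = 1000
print(mean_sq_distance(250, 4000, step, rng))
```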
But, and this is the big but, there’s a potentially troublesome hidden assumption here, which is that the distribution of step sizes actually has a well defined, well behaved mean squared value. We’d probably guess that the distribution of step sizes looks like a bell-shaped curve, centred on zero and getting smaller the further away one gets from the origin. The familiar Gaussian curve fits the bill, and indeed such a curve is characterised by a well defined mean squared value which measures the width of the curve (mathematically, a Gaussian is described by a distribution of step sizes a given by p(a) proportional to exp(-a^2/2s^2), which gives a root mean squared step size of s). Gaussian curves are very common, for reasons described later, so this all looks very straightforward. But one should be aware that not all bell-shaped curves behave so well. Consider a distribution of step sizes a given by p(a) proportional to 1/(a^2+s^2). This curve (known in the trade as a Lorentzian) looks bell-shaped and is characterised by a width s. But when we try to find the average value of the square of the step size, we get an answer that diverges – it’s effectively infinite. The problem is that although the probability of taking a very large step goes to zero as the step size gets larger, it doesn’t go to zero very fast. Rather than the chance of a very large jump becoming exponentially small, as happens for a Gaussian, the chance goes to zero only as the inverse square of the step size. This apparently minor difference is enough to completely change the character of the random walk. One needs entirely new mathematics to describe this sort of random walk (which is known as a Lévy flight) – and in particular one ends up with a different scaling of the distance gone with the number of steps.
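The divergence is dramatic in simulation. In this sketch (my own illustration) steps are drawn from a unit-width Lorentzian by the inverse-CDF method; unlike the Gaussian case, the running average of the squared step size never settles down to a fixed value, because it is dominated by the few largest steps seen so far:

```python
import math
import random

def cauchy_step(rng, s=1.0):
    """Draw a step from a Lorentzian (Cauchy) distribution of width s.

    Uses the inverse-CDF method: the Lorentzian CDF is 1/2 + arctan(a/s)/pi.
    """
    return s * math.tan(math.pi * (rng.random() - 0.5))

def running_mean_square(n, rng):
    """Mean of a^2 over n Lorentzian steps."""
    return sum(cauchy_step(rng) ** 2 for _ in range(n)) / n

rng = random.Random(3)
# for a Gaussian this would converge to a fixed value as n grows;
# here it keeps growing without limit as rarer, bigger jumps turn up
for n in (10 ** 3, 10 ** 5):
    print(n, running_mean_square(n, rng))
```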
In the jargon, this kind of distribution is known as having a “fat tail”, and it was not factoring in the difference between a fat tailed distribution and a Gaussian or normal distribution that led the banks to so miscalculate their “value at risk”. In the words of the Economist article, the mistake the banks made “was to turn a blind eye to what is known as “tail risk”. Think of the banks’ range of possible daily losses and gains as a distribution. Most of the time you gain a little or lose a little. Occasionally you gain or lose a lot. Very rarely you win or lose a fortune. If you plot these daily movements on a graph, you get the familiar bell-shaped curve of a normal distribution (see chart 4). Typically, a VAR calculation cuts the line at, say, 98% or 99%, and takes that as its measure of extreme losses. However, although the normal distribution closely matches the real world in the middle of the curve, where most of the gains or losses lie, it does not work well at the extreme edges, or “tails”. In markets extreme events are surprisingly common—their tails are “fat”. Benoît Mandelbrot, the mathematician who invented fractal theory, calculated that if the Dow Jones Industrial Average followed a normal distribution, it should have moved by more than 3.4% on 58 days between 1916 and 2003; in fact it did so 1,001 times. It should have moved by more than 4.5% on six days; it did so on 366. It should have moved by more than 7% only once in every 300,000 years; in the 20th century it did so 48 times.”
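To put numbers on how different the two tails are, here is a small comparison of my own. Both curves are given the same unit width parameter, so this is illustrative rather than a calibration to real market data like Mandelbrot’s:

```python
import math

def normal_tail(k):
    """P(|x| > k) for a unit-width Gaussian distribution."""
    return math.erfc(k / math.sqrt(2.0))

def cauchy_tail(k):
    """P(|x| > k) for a unit-width Lorentzian (Cauchy) distribution."""
    return 1.0 - (2.0 / math.pi) * math.atan(k)

# probability of an excursion bigger than k widths, for each distribution
for k in (1.0, 3.4, 4.5, 7.0):
    print(k, normal_tail(k), cauchy_tail(k))
```

At one width out the two are comparable; seven widths out, the Gaussian tail is so small that the fat-tailed curve makes such an event billions of times more likely.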
But why should the experts in the banks have made what seems such an obvious mistake? One possibility goes back to the very reason why the Gaussian, or normal, distribution is so important and seems so ubiquitous. This comes from a wonderful piece of mathematics called the central limit theorem. This says that if some random variable is made up from the combination of many independent variables, then even if those variables aren’t themselves taken from a Gaussian distribution, their sum will be Gaussian-distributed in the limit of many variables. So, given that market movements are the sum of the effects of lots of different events, the central limit theorem would tell us to expect the size of the total market movement to be distributed according to a Gaussian, even if the individual events were described by a quite different distribution. The central limit theorem has a few escape clauses, though. One is that the component variables must have a finite variance – which, as we’ve just seen, a Lorentzian does not. Perhaps the most important, though, arises from the way one approaches the limit of large numbers. Roughly speaking, the distribution converges to a Gaussian in the middle first. So it’s very common to find empirical distributions that look Gaussian enough in the middle but still have fat tails, and this is exactly the point Mandelbrot is quoted as making about the Dow Jones.
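This “middle first” convergence shows up in a small simulation (my own sketch, using sums of just ten exponentially distributed variables as a stand-in for a day’s worth of news): one standard deviation out, the Gaussian prediction is close; four standard deviations out, it underestimates the frequency of extreme events many times over.

```python
import math
import random

n, trials = 10, 100_000
rng = random.Random(11)
sums = []
for _ in range(trials):
    # sum of n exponential variables (mean 1, variance 1 each),
    # standardised to zero mean and unit variance
    s = sum(rng.expovariate(1.0) for _ in range(n))
    sums.append((s - n) / math.sqrt(n))

def frac_beyond(k):
    """Fraction of the simulated sums lying more than k sigma above the mean."""
    return sum(1 for z in sums if z > k) / trials

# in the middle, the Gaussian works well: a unit Gaussian predicts 0.159
print(frac_beyond(1.0))
# in the tail it fails badly: a unit Gaussian predicts 0.000032
print(frac_beyond(4.0))
```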
The Economist article still leaves me puzzled, though, as everything I’ve been describing has been well known for many years. But maybe well known isn’t the same as widely understood. Just like a lottery, the banks were trading the certainty of many regular small payments against a small probability of making a big payout. But, unlike the lottery, they didn’t get the price right, because they underestimated the probability of making a big loss. And now their loss becomes the loss of the world’s taxpayers.
8 thoughts on “Brownian motion and how to run a lottery (or a bank)”
I am afraid that the truth is more scary…
Nobody really believed these models were anything but heuristics, but ideology got in the way and deregulation was undertaken, primarily under the Clinton administration.
This deregulation allowed the now defunct investment banking sector to be both advisors and investors, creating colossal conflicts of interest!
The point being that banks would buy derivatives from the investment banks, and when the regulators asked if provisions were in place, they would go back to the same investment banks for advice!
There were so called “independent rating agencies” but these agencies were being paid by the investment banking sector!
This tragedy is all too human!
Let’s be clear about this tragedy. Ninety-five percent of all loans are being repaid! Default on only five percent cannot be blamed on the equations. Banks should have had provisions for at least 10% to 15% of all outstanding loans.
Feeling the pain
If derivatives are based on Black-Scholes model (I blame neocon finance policy), they crowded out their own model in years of oscillation by capitalizing a decade’s worth of global GDP. Financial contracts need to stay simple enough for portfolio managers to understand.
The random walk sounds like the concept of “shape-space” sketched out in J.Barbour’s: The End of Time.
I wonder what the credit crunch is doing to university research budgets and startup financing in the UK? Is enrolment down? Has London’s banking deregulation made the problem worse? The UK still has relatively low national debt.
Zelah and Phillip, my apologies, I should have replied to you ages ago.
Zelah, you say that “no-one believed these models were anything but heuristics”, which may have some truth in it. But if one looks at the comments that the Economist quotes from David Viniar, chief financial officer of Goldman Sachs, the surprise he expresses that events turned out differently from the way the models predicted speaks, to me, of someone who had pretty much internalised them.
Phillip, research budgets in the UK are OK at the moment as money is allocated on a 3-year budget cycle. Venture capital hasn’t entirely dried up (at least, so one of the most prominent UK tech VCs told me earlier this week) but the lack of exits is making things very difficult for them. But things will undoubtedly be difficult. As you say, the UK has some headroom as it has relatively low total public debt, but it entered the crisis with too big a structural deficit and too much dependence on the financial sector. Everyone anticipates a difficult time for research in a year or two. The science minister is reported to have made a speech in which he said “that the government would continue its support for basic research but asked the audience whether the time had come to make choices about the balance of investment in science that are compatible with industrial and economic priorities.”.
Nice article. However, I am not grasping the logic behind the statement:
“According to the efficient market hypothesis, at any given time all the information relevant to the price of some asset, like a share, is already implicit in its price. This implies that the movement of the price with time is essentially a random walk.”
Specifically, I do not see how the movement of a price with time will necessarily result in a random walk if the efficient market hypothesis is satisfied. Any elaboration would be most helpful.
I’m not an economist, but as I understand it the efficient market hypothesis states that at any moment all relevant information about a security is already taken into account in its price. In this sense the price at any moment is always ‘correct’, so the next price movement is entirely unpredictable – hence the random walk.
On this subject, I very much like the joke about the economist and his friend who are walking along a street. The friend says, “Look down there on the pavement – there’s a five pound note someone’s dropped – shall I pick it up?”, to which the economist replies, “don’t be ridiculous, there’s no note there. If there was a five pound note on the pavement, somebody would have picked it up already.”
You say that “The Economist article still leaves me puzzled” and the Economist finished by asking “There is a big role for judgment and intuition, things that managers are supposed to provide. Why have they failed?”
I think the answer is that the banks and their employees do not seek truth and accuracy. They are paid to make a profit – to find ways of extracting cash from the flows of money that passes through the banks. If they can extract the cash and leave with it, they have done their job.
The regulators and lawmakers are the ones who are paid to ensure a fair and low-risk system. It was their responsibility to discover flaws in the models.
The economist in your joke worked for the government, not for a bank. The banker would just pick up the £5.
I’ve just discovered that my point is supported at high levels – Bank of England Deputy Governor John Gieve is quoted in the Wall Street Journal: “We cannot leave risk management to the banks,” he said. “Not only may they get it wrong but their risk systems, like their marketing, are directed at their competitive advantage and they are not motivated or in a position to look after the system as a whole.”
The bankers have a legal, fiduciary duty to make money, not for themselves, but for the owners of their businesses – the shareholders. Since the shareholders’ value has been wiped out or much reduced, the bankers have failed, and they alone are ultimately responsible for that.