On Benoit Mandelbrot’s “New Methods in Statistical Economics” [Part 1]

I’ve been reading Mandelbrot’s 1963 paper “New Methods in Statistical Economics.” It’s one of his many papers from the 1960s where he argues that people should pay more attention to non-Normal distributions (see my post). This is my attempt to explain the first two sections of his piece.

(My main purpose here is to clarify things for myself, so if you have questions or can point out problems with my exposition I would VERY MUCH appreciate hearing from you!)

I come to the paper with two interests. First, Mandelbrot loudly argued that the price fluctuations of many commodities and securities are best thought of as non-Gaussian. This is a paper where he makes that argument. That’s cool because finance is cool and important and interesting.

But the other reason is that just a few years after this, Mandelbrot became Mr. Fractal and published “How Long Is the Coast of Britain?” This was the piece that popularized the notion of fractal dimension (something Mandelbrot called a Trojan Horse, because the coastline was a safe, neutral topic that served as a vector for his ideas about dimension).

But Mandelbrot also said that parts of this paper — which really seems to have nothing to do with fractals — represent the seeds of his geometric ideas. So, for example, he says this in the appendix of the reprinted edition of this 1963 paper:

The many footnotes in the original, except one, were easily integrated in the text. But Footnote 4 did not fit, and it cried out to be emphasized, because it was an early allusion to the theme of self-similarity that came to dominate my life and led to fractals. This footnote 4 read as follows:

“The various criteria of invariance used by physicists are somewhat different in principle from those I propose in economics. For example, the principle of relativity was not introduced to explain a complicated empirical relation, such as scaling. I am indebted to Harrison White for suggesting that I should stress the nuances between my methods and those of physics.”

So what does that have to do with fractals? It’s a subtle thing! Let’s dive into the paper to see where he’s coming from. I’ll quote (sometimes at length) and then comment below the text.

Mandelbrot:

The approach I use to study the scaling distribution arose from physics. It occurred to me that, before attempting to explain an empirical regularity, it would be a good idea to make sure that this empirical identity is “robust” enough to be actually observed. In other words, one must first examine carefully the conditions under which empirical observation is actually practiced. The scholar observes in order to describe but the entrepreneur observes in order to act. Both know that most economic quantities can hardly ever be observed directly and are usually altered by manipulations.

Here is a thought experiment involving biology, not physics, but I think it illustrates the point. Suppose that you have a theory about trees: that the distribution of tree heights follows a nice bell curve. 

You are, let’s say, a lazy scientist who doesn’t want to measure anything. Anyway, the world is large and you didn’t get funding, so you can’t travel. Fortunately, a lot of other people have already done the measuring! You discover this by googling.

But suddenly you run into a problem. Sure, some people have directly measured the heights of individual trees. But other people measured the total height of whole forests. That stinks! Sure, you can divide each forest’s total height by its number of trees to get an average tree height to fold into your tree data. But that’s going to be messy.

So it’s a challenge to deal with data coming from many different sources. And if the data needed to test your theory could only come from measuring every tree (and you really can’t do that), then you should probably work on a different problem.

In most practical problems, very little can be done about this difficulty, and one must be content with whatever approximation of the desired data is available. But the analytical formulas that express economic relationships cannot generally be expected to remain unaffected when the data are distorted by the transformations to which we shall turn momentarily. As a result, a relationship will be discovered more rapidly, and established with greater precision, if it “happens” to be invariant with respect to certain observational transformations. A relationship that is noninvariant will be discovered later and remain less firmly established. Three transformations are fundamental to varying extents.

So in natural science, OK, it’s a problem. But Mandelbrot is saying that in economics it is a MAJOR problem.

Suppose you think that stock prices move around randomly, with the changes plucked from a log-normal distribution, which looks like this:

[Figure: density of a log-normal distribution with median 3 and standard deviation 2]

OK, so you start looking for data. Some of your data comes from daily prices, some from weekly prices, others from yearly price variations. But there’s a problem: there is no simple way to describe the relationship between the daily and the weekly data. You might want to simply add up batches of the daily data to compare them with the weekly data (or to take the weekly data and divide it by 5).

Well, you can’t. The sum of a bunch of log-normal random variables is not another log-normal random variable. So if your theory is true and the log-normal distribution is what governs the stock market’s random price changes, things get weird. If I understand correctly (not at all sure that I do), there are two reasons why this is weird. First, it means that your attempt to compare different sources of data is likely to be a mess, since there is no easy way to combine them. Second… shouldn’t the daily and weekly price changes show the same kind of distribution? Wouldn’t it be weird if daily and weekly prices were governed by entirely different distributions?
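If you want to convince yourself that the sum of log-normals really isn’t log-normal, here’s a small check in Python. Nothing here is from Mandelbrot’s paper: the five-day “week” and the daily parameters are arbitrary illustrative choices. The idea is that if the weekly sum were log-normal, the log-normal matching its mean and variance would also have to match its skewness, and it doesn’t.

```python
# Sketch: the sum of i.i.d. log-normal "daily changes" cannot itself be
# log-normal, because no log-normal can match its mean, variance, AND
# skewness at the same time. (Parameters are arbitrary illustrations.)
import numpy as np

def lognormal_skew(sigma2):
    """Skewness of a log-normal whose underlying normal has variance sigma2."""
    return (np.exp(sigma2) + 2.0) * np.sqrt(np.exp(sigma2) - 1.0)

n_days, sigma2 = 5, 1.0  # five "days" per "week"; daily log-variance of 1

# Skewness of the weekly sum: for an i.i.d. sum, the third cumulant grows
# like n and the standard deviation like sqrt(n), so skewness falls by sqrt(n).
skew_of_weekly_sum = lognormal_skew(sigma2) / np.sqrt(n_days)

# Skewness of the log-normal whose mean and variance match the weekly sum
# (its log-variance comes from matching Var/Mean^2 = (e**sigma2 - 1)/n).
sigma2_matched = np.log(1.0 + (np.exp(sigma2) - 1.0) / n_days)
skew_of_matched_lognormal = lognormal_skew(sigma2_matched)

print(skew_of_weekly_sum, skew_of_matched_lognormal)  # ~2.77 vs ~1.96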

There is something exceptionally nice about the idea that small-scale variation is reproduced at higher scales. People come in all shapes and sizes; the deepest levels of physical reality are governed by molecules randomly humming and bumping into each other. The idea that these scales are connected — that what we see is the cumulative result of the way the smallest things are — is highly attractive in both nature and economics. I think that this is a potential connection to the idea of fractals and self-similarity.

So, what are the ways that data might need to be stable when it’s transformed? Mandelbrot names three:

  1. Stable after being aggregated
  2. Stable after being mixed
  3. Stable when you only pay attention to the extremes
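Mandelbrot doesn’t write these out symbolically at this point, but it may help to see what each one means. Jumping ahead to the “scaling” distributions he favors, i.e. ones with a Pareto-like tail P(X > x) ≈ C x^(−α), all three transformations leave the tail exponent α alone (this formalization is mine, not the paper’s):

```latex
% For independent X_1, X_2 with Pareto-like tails P(X_i > x) ~ C_i x^{-alpha}
% (same exponent alpha), as x goes to infinity:
\begin{align*}
  P(X_1 + X_2 > x) &\sim (C_1 + C_2)\,x^{-\alpha}
      && \text{(aggregation: the sum keeps the exponent)} \\
  p\,P(X_1 > x) + (1 - p)\,P(X_2 > x) &\sim \bigl(p\,C_1 + (1 - p)\,C_2\bigr)\,x^{-\alpha}
      && \text{(mixture with weight } p \text{)} \\
  P\bigl(\max(X_1, X_2) > x\bigr) &\sim (C_1 + C_2)\,x^{-\alpha}
      && \text{(keeping only the extremes)}
\end{align*}
```

The exponent α survives all three operations. The Gaussian, as the quote below notes, survives only the first.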

The first source of stability is the most important. Here’s an example Mandelbrot gives:

The distributions of aggregate incomes are better known than the distributions of each kind of income taken separately.

So if we are interested in the total distribution of income, we might be looking at a collection of different categories of income. Honestly, I can only guess what these categories might be. People sometimes talk about three forms of income (active, portfolio, passive), so maybe he means that? Or maybe he means that you have an income distribution for each of the 50 US states, and you want to aggregate those into a national income distribution?

Anyway:

There is actually nothing new in my emphasis on invariance under aggregations. It is indeed well known that the sum of two independent Gaussian variables is itself Gaussian, which helps use Gaussian “error terms” in linear models. However, the common belief that only the Gaussian is invariant under aggregation is correct only if random variables with infinite population moments are excluded, which I shall not do (see Section V). Moreover, the Gaussian distribution is not invariant under our next two observational transformations.

This is indeed a very nice thing about the Gaussian distribution! Add two of them together and you get another one. Lots of little measurement mistakes add up to an (approximately) Gaussian distribution of final measurements. (Lots of little differences at the cellular level add up to the big differences between people.)
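For concreteness, here is the standard statement of that closure property (the daily/weekly reading of it is my gloss, not Mandelbrot’s):

```latex
% Sums of independent Gaussians stay Gaussian:
X \sim \mathcal{N}(\mu_1, \sigma_1^2)
\quad\text{and}\quad
Y \sim \mathcal{N}(\mu_2, \sigma_2^2)
\ \text{independent}
\;\Longrightarrow\;
X + Y \sim \mathcal{N}\!\left(\mu_1 + \mu_2,\ \sigma_1^2 + \sigma_2^2\right)
```

So five i.i.d. daily changes drawn from N(μ, σ²) add up to a weekly change drawn from N(5μ, 5σ²): the same family, just with rescaled parameters, which is exactly what the log-normal failed to deliver earlier.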

Mandelbrot is saying, you’d want this to remain true in your economic models. It would make the problems tractable to research; the prices and things really ought to add in this way too. And the vast majority of distributions do NOT have this property, which is why the “analytical formulas” Mandelbrot mentions above can’t be expected to survive these transformations. But Mandelbrot is going to argue for a favorite family of distributions that does have this additive property, along with invariance under the other two kinds of transformations.

These are the distributions called “stable distributions” (sometimes “Lévy stable”). The Gaussian is one member, and the non-Gaussian members have Pareto-like power-law tails, which is the connection to the Pareto distribution.

[Figure: probability density functions of Lévy stable distributions]
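If you want to poke at the stable family numerically, SciPy ships it as scipy.stats.levy_stable. Here is a rough sketch (my choice of α = 1.5, β = 0 is arbitrary, just some symmetric, infinite-variance member) of the defining property: the sum of two independent draws is distributed like a single draw rescaled by 2^(1/α).

```python
# Sketch: check the stability-under-addition property of a Lévy stable law.
# alpha=1.5, beta=0 is an arbitrary symmetric member with infinite variance;
# alpha=2 would recover the Gaussian.
import numpy as np
from scipy import stats

alpha = 1.5
rng = np.random.default_rng(0)

# Two independent samples from the same centered, symmetric stable law.
x1 = stats.levy_stable.rvs(alpha, 0.0, size=100_000, random_state=rng)
x2 = stats.levy_stable.rvs(alpha, 0.0, size=100_000, random_state=rng)

# Stability: x1 + x2 should look like one draw rescaled by 2**(1/alpha).
rescaled_sums = (x1 + x2) / 2 ** (1 / alpha)
fresh = stats.levy_stable.rvs(alpha, 0.0, size=100_000, random_state=rng)

# Two-sample Kolmogorov-Smirnov test: a small statistic (large p-value)
# is consistent with both samples coming from the same distribution.
print(stats.ks_2samp(rescaled_sums, fresh))
```

Setting alpha = 2 in the same script gives back the Gaussian; the members with alpha < 2 are the ones with the power-law tails that connect this family to Pareto.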

More on the math of transforming various statistical distributions in Part 2.