Review: The Weil Conjectures by Karen Olsson

I. 

I liked that this was a book full of questions, and I also liked the questions. Here are some of the questions she asks in the book:

  • Why do people like math?
  • Why did I (the author, Karen Olsson) like math when I studied it in college, even though I was an aspiring novelist?
  • Why was Simone Weil — philosopher, writer, mystic — attracted to mathematics?
  • What’s the deal with Simone’s relationship with her brother (famed mathematician) Andre?
  • Where do mathematical ideas come from?
  • What do I (the author, Karen Olsson) get out of abstract math now that I’m no longer swimming in it?
  • Do analogies for abstract mathematical ideas do a person any good if the math itself isn’t accessible to them?

It wouldn’t be fair to Olsson or the book to reduce it to a neat set of answers to those questions. The book is structured so as to provide an experience that is a lot like the experience of learning abstract math. The Weil Conjectures suppose a connection between two mathematical domains — topology and number theory — and Olsson wants (as far as I understand) to create a literary experience that is analogous to the search for such domain-bridging mathematical connections. So she lays out the Weil biography, her own memoir, mathematics and writing about mathematics for the reader. And I think she really succeeds — as the book goes on it feels a bit like learning some deep bit of theory.

So it’s not fair to reduce the book to a neat set of answers because the book is primarily about the experience of reading it. Some books are like that, and that’s fine. But she does answer some of those questions in interesting ways, and memory is necessarily structured, so it’s worth trying to say a bit about what Olsson says about mathematics itself.

II.

Q: What do people love about abstract mathematics?

A: Attraction to the unknown itself.

That, I think, is as close as we can get to Olsson’s answer in brief. The mathematician is someone who desires to create unknowns and to put obstacles in the way of their knowing so that they can search for answers. So for instance we hear about a Kafka story (“The Top”) about a philosopher who seeks enlightenment by hoping to catch spinning tops in mid-spin (whatever). On this story Anne Carson says “he has become a philosopher (that is one whose profession is to delight in understanding) in order to furnish himself with pretexts for running after tops.” That’s what a mathematician is — they love the chase.

Q: Is this so different from what a writer does?

A: No.

And so there is a connection between Olsson’s 2.5 years studying mathematics in college and her life as a writer.

I love this take on mathematics — that it’s about this love of being in the dark and searching for light. So it’s more about finding light than the light itself, if that makes sense.

This should be seen (I think) in contrast with writers who make much of the beauty of mathematics, or the search for beauty. Olsson is good on this. Twenty years after finishing her degree she decides to go back and watch some online lectures for an Abstract Algebra class. She finds it remote and foreign, but also:

And still, it was beautiful. I’m ambivalent about expressing it that way — “beauty” in math and science is something people tend to honor rather vaguely and pompously — instead maybe I should say that still, it was very cool. (This is something the course’s professor, Benedict Gross, might say himself, upon completing a proof: “Cool? Very cool.”) A quality of both good literature and good mathematics is that they may lead you to a result that is wholly surprising yet seems inevitable once you’ve been shown the way, so that — aha! — you become newly aware of connections you didn’t see before.

Still, the mathematician’s next move is to plunge themselves into darkness. This comes from a desire towards something that cannot be grasped.

This part was hard for me to understand. A key for Olsson seems to be Anne Carson’s Eros the Bittersweet, but the theory isn’t entirely clicking for me. “A mood of knowledge is emitted by the spark that leaps in the lover’s soul,” she writes, but I don’t quite get it. Olsson’s take: “It’s not the knowledge itself, not consummation but the mood, the excitement when you are on the verge of grasping.”

What I understand Olsson to be saying is that the main fun of math isn’t the understanding but the feeling that understanding might be near. And that explains the pleasure we non-experts get out of mathematical analogies. There’s nothing unusual about the idea that analogies give us the thrill of desire — what’s more novel is saying this isn’t so different than the usual state of a mathematician.

For this point she goes to the act of mathematical creation itself, and how unpredictable it is:

What does mathematical creation consist of? asks Poincare, who blazed his way through a large territory of mathematics and physics by relying on his remarkable geometric intuition. It requires not only the combining of existing facts but the avoiding of useless combinations: making the right choices. The facts worthy of study are those that reveal unsuspected relationships between other facts. Moreover, much of this combining and discarding and retrieving goes on without the mathematician’s full awareness, occurring instead behind the scrim of consciousness.

Since the mathematician is dependent on their unconscious associations, mathematical discovery is not entirely in their control. In fact, many suspected relationships don’t work out. And so the mathematician spends most of their time afflicted with that same desire for the unreachable that afflicts us, the non-technical lay audience, who only get analogies.

Analogy becomes a version of eros, a glimpse that sparks desire. “Intuition makes much of it; I mean by this the faculty of seeing a connection between things that in appearance are completely different; it does not fail to lead us astray quite often.” This of course, describes more than mathematics; it expresses an aspect of thinking itself–how creative thought rests on the making of unlikely connections. The flash of insight, how often it leads us off course, and still we chase after it.

It’s a neat picture, I think!

I’ve left out all the connections to mysticism, the biographical details of Simone and Andre, and (nearly) all the connections to writing, but that’s in there too. Again, very neat stuff.

III.

I also learned from this book that Brouwer retired early and practiced nudism, that Flannery O’Connor didn’t particularly care for Simone Weil’s writing, and of Hadamard’s fascinating book “The Psychology of Invention in the Mathematical Field.”

One last good quote: “Honestly I think I understand anyone else’s dislike of math better than I understand whatever hold math has had on me.”

Equations and Equivalence in 3rd Grade

So I was stupidly mouthing off online to some incredibly serious researchers about equivalence and the equals sign and how it’s not that hard of a topic to teach when — OOPS! — my actual teaching got in the way.

I had done the right thing. In my 3rd Grade class I wanted to introduce “?” as a symbol for an unknown so I put up some equations on the board:

15 = ? x 5

3 + ? = 10

10 + 3 = 11 + ?

And I was neither shocked, nor did I blink, when a kid told me that the last equation didn’t make any sense. Ah, I thought, time to nip this in the bud.

I listened to the child and said I understood, but that I would like to share how it does make sense. I asked whether anyone knew what the equals sign meant, and one kid said “makes” and the next said “the same as.” Wonderful, I said, because that last equation is just saying the left side equals the same as the right side. So what number would make them the same? 2? Fantastic, let’s move on.

Then, the next day, I put a problem on the board:

5 + 10 = ___ + 5

And you know what comes next, right? Consensus around the room is that the blank is 15. “But didn’t we say yesterday that the equals sign means ‘the same as’?” I asked. A kid raised her hand and explained that it did mean that, but the answer should still be 15. Here’s how she wanted us to read the equation, as a run-on:

(5 + 10 = 15) + 5

Two things were now clear to me. First, that my pride in having clearly and decisively taken care of this issue was misguided. I needed to do more and dig into this more deeply.

The second thing: isn’t this interesting? You can have an entirely correct understanding of the equals sign and still make the same “classic” mistakes interpreting an actual equation.

I think this helps clear up some things that I was muddling in my head. When people talk about the need for kids to have a strong understanding of equivalence they really are talking about quite a few different things. Here are the two that came up above:

  • The particular meaning of the equals sign (and this is supposed to entail that an equation can be written left-to-right or right-to-left, i.e. it’s symmetric)
  • The conventional ways of writing equations (e.g. no run ons, can include multiple operations and terms on each side)

But then this is just the beginning, because frequently people talk about a bunch of other things when talking about ‘equivalence.’ Here are just a few:

  • You can do the same operations to each side (famously useful for solving equations)
  • You can manipulate like terms on one side of an equation to create a true equation (10 + 5 can be turned into 9 + 6 can be turned into 8 + 7; 8 x 7 can be turned into 4 x 14; 3(x + 4) can be turned into 3x + 12, etc.)

When a kid can’t solve 5 + 10 = ___ + 9 correctly or easily using “relational understanding,” this is frequently blamed on a kid’s understanding of the equals sign, equivalence or the particular ways of relating 5 + 10 to __ + 9. But now I’m seeing clearly that these are separate things, and some tend to be easier for kids than others.

So, this brings us to the follow-up lesson with my 3rd Graders.

I started as I usually do in this situation, by avoiding the equals sign. I find that a double arrow serves this purpose well, so I put up an arrow relationship on the board:

2 x 6 <–> 8 + 4

I pointed out that 2 x 6 makes 12 and so does 8 + 4. Could the kids come up with other things like this, I asked?

They did. I didn’t grab a picture, but I was grateful that all sorts of things came up. Kids were mixing operations nicely, like 12 – 2 <–> 5 + 5. In general this was not hard; kids knew exactly what I meant and could generate lots of ideas.

My next move was to pause and introduce the equals sign into this conversation. Would anyone mind if I replaced that double arrow with an equals sign? This is just what the equal sign means, anyway. No problem, that went fine also.

Kids were even introducing great examples like 1 x 2 = 2 x 1, or 12 = 12. Wonderful.

Then, I introduced the task of the day, in the style of Open Middle (R) (TM) (C):

[photo: the handwritten Open Middle-style task]

Yeah, I quickly handwrote it with a sharpie. It was that sort of day.

I carefully explained the constraints. 10 – 2 = 7 + 1 was a true equation, but wouldn’t work for this puzzle. Neither would 15 – 5 = 6 + 4. And then I gave the kids time to search for solutions, as many as they could find.

Bla bla, most kids were successful, others had trouble getting started but everyone eventually had some success. Here are some pictures of students who make me look good:

[photos: two students’ successful solutions]

Here is a picture of a student who struggled, but eventually found a solution:

[photo: the student’s eventual solution]

Here is a picture of the student from the class I was most concerned with. You can see the marks along his page as he handles things like 12 – 9, trying out different numbers to subtract from 12. I think there might have been some multiplying happening on the right side, not sure why. Anyway:

[photo: that student’s page, with his marks along the side]

The thing is that just the day before, this last student had almost broken down in frustration over his inability to make sense of these “unconventional” equations. So this makes me look kind of great — I did it! I taught him equivalence, in roughly a day. Tada.

But I don’t think that this is what’s going on. The notion of two different things being equal, that was not hard for him. In fact I don’t think that notion is difficult for very many students at all — kids know that different additions equal 10. And it was not especially difficult for this kid to merge that notion of equivalence with the equals sign. Like, no, he did not think that this was what the equals sign meant, but whatever, that was just on the basis of what he had thought before. It’s just a convention. I told him the equals sign meant something else, OK, sure. Not so bad either.

The part that was very difficult for this student, however, was subtracting stuff from 12.

Now this is what I think people are talking about when they talk about “relational understanding.” It’s true — I really wish this student knew that 10 + 2 <–> 9 + 3, and so when he saw 12 he could associate that with 10 + 2 and therefore quickly move to 9 + 3 and realize that 12 – 3 = 9. I mean, that’s what a lot of my 3rd Graders do, in not so many words. That is very useful.

So to wrap things up here are some questions and some provisional answers:

Q: Is it hard to teach or learn the concept of equivalence?

A: No.

Q: Is it hard to teach the equals sign and its meaning?

A: It’s harder, but this is all convention. If you introduce a new symbol like “<–>” I don’t think kids trip up as much. With the equals sign, they sometimes have to unlearn what they’ve inferred from prior experiences that were too limited (i.e. always putting the result on the right side). So you’re not doing kids any favors by restricting equations to that one form; it’s good to put equations in a lot of different forms, pretty much as soon as kids see equations for the first time in K or 1st Grade. I mean, why not?

Q: If kids don’t learn how equations conventionally work will that trip them up later in algebra?

A: Yes. But all of my kids find adding and subtracting itself to be more difficult than understanding these conventions. My sense is that you don’t need years to get used to how equations work. You need, like, an hour or two to introduce it.

Q: Does this stuff need to be taught early? Is algebra too late to learn how equations work?

A: I think kids should learn it early, but it’s not too late AT ALL if they don’t.

I have taught algebra classes in 8th and 9th Grade where students have been confused about how equations work. My memories are that this was annoying because I realized too late what was going on and had to backtrack. But based on teaching this to younger kids, I can’t imagine that it’s too late to teach it to older students.

I guess it could be possible that over the years it gets harder to shake students out of their more limited understanding of equations because they reinforce their theory about equations and the equals symbol. I don’t know.

I see no reason not to teach this early, but I think it’s important to keep in mind that in middle school we tell kids that sometimes subtracting a number makes it bigger and that negative exponents exist. Kids can learn new things in later years too.

Q: So what makes it so hard for young kids to handle equations like 5 + 10 = 6 + __?

A: It’s definitely true that kids who don’t understand how to read this sort of equation will be unable to engage at all. But the relational thinking itself is the hardest part to teach and learn, it seems to me.

Here is a thought experiment. What if you had a school or curriculum that only used equal signs and equations in the boring, limited way of “5 + 10 = ?” and “6 x ? = 12” throughout school, but at the same time taught relational thinking using <–> and other terminology in a deep and effective way? And then in 8th Grade they have a few lessons teaching the “new” way of making sense of the equals sign? Would that be a big deal? I don’t know, I don’t think so.

Q: There is evidence suggesting that learning the various things above helps kids succeed in later algebra. Your thoughts?

A: I don’t know! It seems to me that if something makes a difference for later algebra, it has to be either the concept of equivalence, the conventions of equations, or relational thinking.

I think the concept of equivalence is something every kid knows. The conventions of equations aren’t that hard to learn, I think, but they really only do make sense if you connect equations to the concept of equivalence. The concept of equivalence explains why equations have certain conventions. So I get why those two go together. But could that be enough to help students with later algebra experiences? Maybe. Is it because algebra teachers aren’t teaching the conventions of equations in their classes? Would there still be an advantage from early equation experience if algebra teachers taught it?

In the end, it doesn’t matter much, because young kids can learn it, so why not teach it to them? Can’t hurt; it only costs you an hour or two.

But the other big thing is relational thinking. Now, there’s no reason, I think, why relational thinking has to take place in the context of equations. You COULD use other symbols like double arrows or whatever. But math already has this symbol for equivalence, so you might as well teach relational thinking about addition/subtraction/multiplication/division in the context of equations. And that’s some really tricky, really important mathematics to learn. A kid being able to understand that 2 x 14 is equal to 4 x 7 is important stuff.

It’s important for so many reasons, for practically every reason that arithmetic is the foundation of algebra. I can’t list them now — but it goes beyond equations, is my point. Relational thinking (e.g. how various additions relate to each other) is huge and hugely important.

Would understanding the conventions of the equals sign and equations make a difference in the absence of experiences that help kids gain relational understanding? Do some kids start making connections on their own when they learn ways of writing equations? Does relational understanding instruction simply fail because kids don’t understand what the equations their teachers are using mean?

I don’t know.

Introducing Stable Distributions

The story so far:

  • There are lots of ways to put two functions together and get a new one out of the process.
  • Addition is one of these ways. Multiplication is another.
  • If you add two Gaussian functions together, you don’t get another Gaussian function.
  • If you multiply two Gaussian functions together, guess what? You get another Gaussian function.
  • Convolution is another way of combining two functions.
  • If you convolve two Gaussian functions together, guess what? You get another Gaussian function.

In the last post I tried to explain why convolving two probability distributions produces the distribution of the sum of those variables. And that sum is guaranteed to also be a bell curve, as long as the distributions it’s made of are normal as well.

It’s worth stewing for a moment on what that means, because a lot of things that we care about can be thought of as sums of random variables.

One example is height. Take the height of a forest. I know nothing about the biology of forest height, but I did find a few figures online. (Yes, random searching.)

Here they are.

[figure: distributions of forest heights, from a paper found online]

Some of these plots look normal. Then again, some of them don’t. I don’t really care! Let’s pretend that forest height truly is normally distributed. Why would that be?

The thing is that there are a lot of factors that go into how tall a forest is. There is underlying genetic variation in the families of trees that make up a forest. There is underlying randomness in the environmental conditions where a forest grows, and “environmental conditions” is itself not a single factor but another collection of random variables — rain, soil, etc.

If we were going to make a list of factors that go into forest height how long do you think it would be before we had an exhaustive list? Hundreds of factors? Thousands?

Given all this, “forest height” really isn’t a single random variable. It’s a collection of random variables, not the way we think of a single coin flip.

(AND IS A COIN FLIP EVEN REALLY A SINGLE RANDOM EVENT??? OOOOOOOH DID I BLOW YOUR MIND?)

This all feels very complicated. How is it that a forest’s height has such a lovely normal distribution?

Given the math in the previous post we have a very tidy answer: forest height can be thought of as a sum of random variables. Even if there are hundreds or thousands of underlying random variables that sum up to forest height, as long as all (most?) of them are themselves Gaussian, their sum will be Gaussian too, represented by the convolution of all those thousands of distributions.

(Note: we did not prove that convolution preserves the Gaussian nature of a distribution for an arbitrary number of distributions. But take two of those distributions and they make a new Gaussian; pair that with another distribution, and then another, and so on. It’s a pretty direct inductive argument that you can convolve as many of these as you’d like and still get a Gaussian at the end of things.)
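Here’s a quick numerical version of that inductive check, a sketch assuming numpy (the grid and the tolerance are arbitrary choices of mine):

```python
import numpy as np

# Grid for the densities; an odd point count keeps x = 0 at the center bin.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# Convolve a standard Gaussian with itself three times (four "summands").
f = gauss(x, 0, 1)
g = f.copy()
for _ in range(3):
    g = np.convolve(g, f, mode="same") * dx  # dx turns the sum into an integral

# Variances add under convolution: four N(0,1)'s should give N(0, 2^2).
print(np.allclose(g, gauss(x, 0, 2), atol=1e-3))  # True
```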

So convolving is nice because it gives us a tidy way to think of why these incredibly complex things like “forest height” could have simple distributions.

But there’s only one problem: who says that you’re starting with a Gaussian distribution?

Convolution does not always preserve the nature of a function. To pick an example solely based on how easy it was for me to calculate, start with a very simple function:

f(x) = 2x

Define this only on the region [0,1] so that it really can be thought of as a probability distribution. Then, convolve it with itself.

[plot: the first half of the self-convolution of f]

(Hey, why is this only the first half of the convolution? Because for the life of me I can’t figure out how to visualize the second half of this integral while limiting the domain of the original function. Please check my work; I’m actually pretty sure that I’m exposing an error in my thinking in this post, but hopefully someone will help me out!)
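In the spirit of checking work: here’s a numerical version of the same convolution, a sketch assuming numpy. Doing the integral by hand on [0, 1] gives (2/3)t³, and the discrete version agrees. That is definitely not linear:

```python
import numpy as np

dx = 0.001
x = np.arange(0, 1, dx)
f = 2 * x  # the density f(x) = 2x on [0, 1]

# Discrete convolution, scaled by dx, approximates the convolution integral.
conv = np.convolve(f, f) * dx
t = np.arange(len(conv)) * dx  # the self-convolution lives on [0, 2]

# By hand, for 0 <= t <= 1: integral of 2x * 2(t - x) from 0 to t = (2/3) t^3.
mask = t <= 1
print(np.allclose(conv[mask], (2 / 3) * t[mask] ** 3, atol=1e-2))  # True
```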

In any event, what you get as a result of this convolution is certainly not linear. So linearity of a distribution is not preserved by convolution.

What this means is that whether convolution can explain the distribution of complicated random variables that are the sum of simpler random things really depends on what the distribution we’re dealing with is. We are lucky if the distribution is Gaussian, because then our tidy convolution explanation works. But if the distribution of the complicated random variable were linear (whatever that would even look like), this would present a mystery to us.

But is it just Gaussian distributions that preserve their nature when summed? If so, this would be pretty limiting, because certainly not every distribution that naturally occurs is a normal distribution!

The answer is that it’s not just Gaussian distributions. There is a family of distributions that is closed under convolution, and they are called the stable distributions.
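The Cauchy distribution is the easiest non-Gaussian member of the family to play with. Here is a sampling sketch (assuming numpy) of the fact that the sum of two standard Cauchys is just a rescaled Cauchy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.standard_cauchy(n)
y = rng.standard_cauchy(n)

# Stability: X + Y is Cauchy with twice the scale, so (X + Y)/2 should look
# just like X. A Cauchy has no mean or variance, so compare quantiles instead.
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile((x + y) / 2, qs).round(2))
print(np.quantile(x, qs).round(2))
```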

***

I started trying to understand all of this back when I tried to read a paper of Mandelbrot’s in the spring. Getting closer!

This post doesn’t follow any particular presentation of the ideas, but a few days ago I read the second chapter of Probability Tales and enjoyed it tremendously. There are some parts I didn’t understand but I’m excited to read more.

Another way to smoosh two Bell Curves together

Previously, we looked at one way of combining two Bell Curves (i.e. Gaussian distributions) to make a third — multiplication.

There are other ways to do this, though. The best known (and as far as I can tell, the most important) is convolution. So, here are two Gaussian distributions, and what you get when you convolve(?) them:

[plots: two Gaussian distributions, and their convolution]

Formally, this process takes two functions — f(x), g(x) — and then produces a new distribution defined in the following way:

(f * g)(t) = \int_{-\infty}^{+\infty}f(x)g(t-x)dx

I have been struggling with this idea for several months, but just a few days ago I made some progress.

Convolution is often described as a blurring process — it blurs one distribution according to another one. That’s how Terry Tao describes it in this Math Overflow post:

If one thinks of functions as fuzzy versions of points, then convolution is the fuzzy version of addition (or sometimes multiplication, depending on the context). The probabilistic interpretation is one example of this (where the fuzz is a probability distribution), but one can also have signed, complex-valued, or vector-valued fuzz, of course.

I have a hard time seeing the “blurring” in the images above. To really see it, I have to change the initial functions. For example, consider the convolution of a Gaussian and a linear function (with restricted domain). Before convolution…

[plot: a Gaussian and a linear function, before convolution]

…and after.

[plot: the result, after convolution]

Just playing around with the calculator a bit more, here is another before/after pair.

[plots: another before/after pair]

I first learned about convolution a few months ago, and it was explained to me in terms of blurring. The thing about this “intuition building” metaphor is that I’ve been sitting with it since then, and it hasn’t helped me feel comfortable with convolution at all. It was only last week, when I came across the far more prosaic meaning of convolution, that things started to click for me. Because besides whatever blurring convolution represents, it also represents the sum of two independent random things.

(What follows is lifted from this excellent text.)

Suppose you have two dice, both six-sided, both fair. There is an equal chance of rolling 1, 2, 3, 4, 5 or 6 with each die — a uniform probability distribution. P(x) = \frac{1}{6} for x = 1, 2, 3, 4, 5, 6.

What would the distribution of the sums of the rolls look like?

The calculations start relatively simply, but check out the structure. For example, this is the calculation we have to do to find the chances of rolling a 3:

P(1)P(2) + P(2)P(1)

The chances of rolling a 4:

P(1)P(3) + P(2)P(2) + P(3)P(1)

The chances of rolling a 5:

P(1)P(5-1) + P(2)P(5-2) + P(3)P(5-3) + P(4)P(5-4)

The chances of rolling n:

P(1)P(n - 1) + P(2)P(n - 2) + ... + P(n-1)P(n - (n- 1))

So! Using the language of summation, we can summarize this process like so:

P(\text{sum} = n) = \sum_k P(k) \cdot P(n - k)

We might as well take the last step of calling this process “convolution,” because it’s just the discrete version of the integral from above:

(f * g)(t) = \int_{-\infty}^{+\infty}f(x)g(t-x)dx
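If you want to watch the discrete version happen, np.convolve computes exactly this sum. A sketch, assuming numpy:

```python
import numpy as np

die = np.ones(6) / 6  # P(1) through P(6) for one fair die

# np.convolve computes sum_k P(k) P(n - k): the distribution of the sum.
two_dice = np.convolve(die, die)

for total, p in enumerate(two_dice, start=2):
    print(f"P(sum = {total}) = {p:.4f}")  # peaks at 6/36 for a sum of 7
```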

***

Mathematically, there is a lot of interesting stuff to continue exploring with convolutions. Not all convolutions are defined, there’s a connection to Fourier transforms (another thing I don’t understand yet), there are discrete problems to solve in the text (what about different dice?), and so on and so on.

Briefly, though: why didn’t the blurring metaphor help me? I don’t think it’s such a mystery. It’s because while blurring is easy to understand, that image was totally disconnected from the underlying calculation. Why should that complicated integral be related to the process of blurring?

Now, I have a better understanding. (Blurring X and Y sort of is like finding the probability of X + Y.)

In school math, a lot of teachers bemoan their students’ lack of conceptual understanding, and it’s generally felt that procedural understanding is more attainable. In my life as a learner of mathematics I usually feel it’s the other way around. When I read articles in Quanta Magazine or books about mathematics I haven’t yet studied I can usually follow the exposition but am left feeling a bit empty. Yes, I can follow the metaphors (“Imagine a number as a little bird; those birds fly together in flocks; but what happens when a bird has children? do they rejoin the flock? where? etc.”) but what have I learned?

Popular exposition of mathematics is maybe more difficult than exposition of other subjects simply because of the necessity of some kind of metaphor that brings the abstract to life. What results is a kind of understanding, but something far from the whole thing, and the best mathematical exposition also leaves me feeling jealous of those who can reach past the metaphors and grasp the thing itself.

That’s not to say that math exposition for popular audiences isn’t valuable — it is! Most people aren’t ever going to reach that deeper, unified understanding. I certainly won’t, most of the time! But for convolution, I feel a step closer.

Gaussians are making Gaussians

Let f and g be Gaussian distributions.

[plot: two Gaussian distributions, f and g]

Go ahead, add them. You don’t get another Gaussian distribution.

[plot: the sum f + g, which is not a Gaussian]

Well, of course not. They don’t have the same mean. So set the means equal.

[plot: f and g with their means set equal]

That’s no better. The sum of f and g is still very much not Gaussian.

[plot: the sum f + g, still not a Gaussian]

So, that’s no good. But of course it failed — just look at those visuals!

What about multiplication? Here’s what the product of two Gaussian distributions with equal means looks like.

[plot: the product of two Gaussian distributions with equal means]

That looks much better!

In fact this is true: the product of two Gaussian distributions is another Gaussian function. The only proofs I know of dive into some algebra — I like this one — but the core idea is that when you multiply exponentials, the exponents add. That’s what keeps it all in the Gaussian family.

So consider two Gaussian functions, one (f) with mean 0 and the other (g) with mean \mu (keeping one mean at 0 for a touch of simplicity):

f(x) = \frac{1}{\sqrt{2\pi}\sigma_f} e^{-\frac{x^2}{2\sigma^2_f}}

g(x) = \frac{1}{\sqrt{2\pi}\sigma_g} e^{-\frac{(x -\mu)^2}{2\sigma^2_g}}

Their product will look like this:

f(x)g(x) = \frac{1}{2\pi\sigma_f\sigma_g} e^{-\left(\frac{x^2}{2\sigma^2_f}+\frac{(x-\mu)^2}{2\sigma^2_g}\right)}

Making common denominators and adding through:

f(x)g(x) = \frac{1}{2\pi\sigma_f\sigma_g} e^{-\frac{\sigma^2_g x^2 +\sigma^2_f(x -\mu)^2}{2\sigma^2_f\sigma^2_g}}

Might as well expand that exponent a bit and summarize:

f(x)g(x) = \frac{1}{2\pi\sigma_f\sigma_g} e^{-\frac{(\sigma^2_f +\sigma^2_g) x^2 -2\sigma^2_f x \mu + \sigma^2_f \mu^2}{2\sigma^2_f\sigma^2_g}}

And then you can divide the numerator and denominator of that exponent by (\sigma^2_f +\sigma^2_g), and you’ll end up with a quadratic trinomial in x. Completing the square lets you express that trinomial as (x - M)^2 plus a constant, and the constant just gets absorbed into the scale factor out front.

(Brief but important nit-picky note: this makes the product of two Gaussian functions a Gaussian function, but the scale factor out front is off, so it’s not a Gaussian distribution. You’d have to rescale the product of two Gaussian distributions in order to get another Gaussian distribution.)
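To convince myself the algebra works out, here’s a numerical sketch (assuming numpy). The parameters M and s below come from completing the square, as above:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

sigma_f, sigma_g, mu = 1.0, 2.0, 3.0
x = np.linspace(-10, 10, 2001)

product = gauss(x, 0, sigma_f) * gauss(x, mu, sigma_g)

# Completing the square predicts a Gaussian with these parameters:
M = mu * sigma_f**2 / (sigma_f**2 + sigma_g**2)
s = np.sqrt(sigma_f**2 * sigma_g**2 / (sigma_f**2 + sigma_g**2))

# The product should be a constant multiple of that Gaussian pdf;
# a constant ratio means the shape is right and only the scale is off.
ratio = product / gauss(x, M, s)
print(np.allclose(ratio, ratio[0]))  # True
```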

Is this useful? Is this significant, in some way? I don’t know. Apparently it’s useful in applying Bayes’ Theorem, but I know nothing about that.

One thing I do know is that it makes for some fun visuals.

[plots: more products of Gaussian distributions]

For terrific math puzzles check out Erich Friedman’s website

In the last few days of camp this summer, a big folder of puzzles got posted in the hall. In the folder was a collection of Erich Friedman’s Hamiltonian Mazes puzzles.

[image: one of Erich Friedman’s Hamiltonian Mazes puzzles]

These puzzles are terrific (hard!), and they’re just one of the many, many different types of puzzles and problems on Erich’s site.

The whole site is a terrific snapshot of the old internet, with its generosity and quirk. I love the little personal nuggets Erich includes on his homepage (I’m nostalgic for homepages!):

I am a Libertarian and an Atheist. I consider myself a Feminist, and I’m a member of the ACLU. I have memorized the first 50 digits of Pi. I am an INTJ and I Juggle. I build card houses, and I’m interested in my Family Tree. I own the largest Puzzle Collection in Florida.

For the record: I am a Capitalist and a Religious Jew. I consider myself a Feminist, and I’m a patron of NYPL. I recently memorized the Largest Known Prime Number. I sometimes get Moody and Sad but I don’t Juggle. I’m interested in my Family.

Really, lots of wonderful stuff on his site. In the past I’ve been more interested in theory-laden areas of math than puzzles and problems (I like my math how I like my philosophy) but I always have fun when I do find time for these sorts of things.

Like these square tilings. They’re gorgeous!

[image: one of the square tilings]

Anyway, a wonderful website. Enjoy!

The opening chapter of the novel RED PLENTY is all about mathematical abstraction

RED PLENTY by Francis Spufford was so good. A great deal of the novel is about the frustrated attempt by Soviet economists and mathematicians to reform the Russian economy.

The book opens on Leonid Vitalevich, about to discover linear programming:

Today he had a request from the Plywood Trust of Leningrad. “Would the comrade professor, etc. etc., grateful for any insight, etc. etc., assurance of cordial greetings, etc. etc.” It was a work-assignment problem. The Plywood Trust produced umpteen different types of plywood using umpteen different machines, and they wanted to know how to direct their limited stock of raw materials to the different machines so as to get the best use out of it. Leonid Vitalevich had never been to the plywood factory, but he could picture it. It would be like all the other enterprises which had sprung up around the city over the last few years, multiplying like mushrooms after rain, putting chimneys at the end of streets, filling the air with smuts and the river with eddies of chemical dye…

To be honest, he couldn’t quite see what the machines were doing. He had only a vague idea of how plywood was actually manufactured. It somehow involved glue and sawdust, that was all he knew. It didn’t matter: for his purposes, he only needed to think of the machines as abstract propositions, each one effectively an equation in solid form, and immediately on reading the letter he understood that the Plywood Trust, in its mathematical innocence, had sent him a classic example of a system of equations that was impossible to solve. There was a reason why factories around the world, capitalist or socialist, didn’t have a handy formula for these situations. It wasn’t just an oversight, something people hadn’t got around to yet. The quick way to deal with the Plywood Trust’s enquiry would have been to write a polite note explaining that the management had just requested the mathematical equivalent of a flying carpet or a genie in a bottle.

But he hadn’t written that note. Instead, casually at first, and then with sudden excitement, with the certainty that the hard light of genesis was shining in his head, brief, inexplicable, not to be resisted or questioned while it lasted, he had started to think. He had thought about ways to distinguish between better answers and worse answers to questions which had no right answer. He had seen a method which could do what the detective work of conventional algebra could not, in situations like the one the Plywood Trust described, and would trick impossibility into disclosing useful knowledge. The method depended on measuring each machine’s output of one plywood in terms of all the other plywoods it could have made. But again, he had no sense of plywood as a scratchy concrete stuff. That had faded into nothing, leaving only the pure pattern of the situation, of all situations in which you had to choose one action over another action. Time passed. The genesis light blinked off. It seemed to be night outside his office window. The grey blur of the winter daylight had vanished. The family would be worrying about him, starting to wonder if he had vanished too. He should go home. But he groped for his pen and began to write, fixing in extended, patient form – as patient as he could manage – what’d come to him first unseparated into stages, still fused into one intricate understanding, as if all its necessary component pieces were faces and angles of one complex polyhedron he’d been permitted to gaze at, while the light lasted; the amazing, ungentle light. He got down the basics, surprised to find as he drove the blue ink onward how rough and incomplete they seemed to be, spelt out, and what a lot of work remained.
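(The method being born in this scene is linear programming. Just to make the Plywood Trust’s question concrete, here is a toy version with completely invented numbers, solved with scipy’s linprog; it is nothing like Kantorovich’s actual formulation:)

```python
import numpy as np
from scipy.optimize import linprog

# Invented yields: sheets per machine-hour; rows = 2 machines, cols = 3 plywoods.
yields_ = np.array([[4.0, 3.0, 2.0],
                    [2.0, 5.0, 3.0]])

# Variables: hours[i, j] = hours machine i spends on plywood j, flattened.
# Maximize total sheets; linprog minimizes, so negate the objective.
c = -yields_.flatten()

A_ub = np.array([
    [1, 1, 1, 0, 0, 0],    # machine 1 has at most 40 hours
    [0, 0, 0, 1, 1, 1],    # machine 2 has at most 40 hours
    [0, 0, -2, 0, 0, -3],  # at least 60 sheets of plywood type 3
])
b_ub = [40, 40, -60]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.x.reshape(2, 3))  # the best allocation of machine-hours
print(-res.fun)             # total sheets produced
```

The solver quietly does what the passage describes: it weighs each machine’s output of one plywood against the other plywoods those hours could have made.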

It’s the optimism generated by ideas like these that is the true subject of the book, which tells the story of the rise and fall of this optimism. The book points out that in a society governed by engineers it was mathematicians and abstract theoreticians who were the main sources of cultural idealism. (In contrast to a place like the US, he says, where lawyers rule the land and writers and artists are the main source of social idealism.)

If you know what happened to the Soviet economy you know the end of this story. The entire book presents itself as a kind of mathematical tragedy, the destruction of the idea of utopian abundance in a planned economy.

Straightedge and Compass

[image: a set of 16th-century compasses]

John Donne’s A Valediction Forbidding Mourning ends with two lovers compared to the arms of a geometric compass over several stanzas:

Our two souls therefore, which are one,
   Though I must go, endure not yet
A breach, but an expansion,
   Like gold to airy thinness beat.

If they be two, they are two so
   As stiff twin compasses are two;
Thy soul, the fixed foot, makes no show
   To move, but doth, if the other do.

And though it in the center sit,
   Yet when the other far doth roam,
It leans and hearkens after it,
   And grows erect, as that comes home.

Such wilt thou be to me, who must,
   Like th' other foot, obliquely run;
Thy firmness makes my circle just,
   And makes me end where I begun.

I came across this in Stephanie Burt’s book Don’t Read Poetry. She writes:

Each lover “leans and hearkens” after the other, as if Donne and his intimate friend, lover, or wife heard each other across the sea. The balanced eight-syllable lines, with their alternating rhymes, depend on each other too. Their closure seems “just” both mathematically and morally; in their mutual response, one or both of the lovers stands up, or becomes “erect” (yes it’s a penis joke).
If you yourself have ever felt unique or confused or confusing to others, especially in matters of the heart; if you have ever felt that your connection to somebody else — whether or not it is romantic, or exclusive, or recognized by the law — requires some explanation or deserves a passionate defense; if you have friends in a stubborn long-distance relationship; if you have been in any such situation, you might see Donne’s elaborate, challenging metaphors not as barriers to sincerity but as ways to achieve it, ways that take advantage of the tools — metaphor, indirection, complex syntax, rhythm — that we can find in poems. You might even, at least if you are looking for them, see in Donne’s great love poems, this one among them, defenses of what we now call queer relationships, relationships not sanctioned by custom or law, relationships most people in your own society can’t quite understand.

That image at the top, by the way, is a set of compasses held by the British Library from Donne’s time, the 16th century.

Mathematics that makes itself

Can something be true, just because you say it?

One example might be a promise. If you promise somebody that you’ll feed their cats… well, all of a sudden there is a promise. The act of promising creates a promise. All of a sudden, there it is. It makes itself.

Anyway, maybe mathematics can sometimes pull off a trick like that. In 2003, MacKenzie and Millo argued that this is precisely what happened in financial markets with the Black-Scholes formula, a highly successful mathematical model used to find “correct” prices for a stock option:

Option pricing theory—a “crown jewel” of neoclassical economics—succeeded empirically not because it discovered preexisting price patterns but because markets changed in ways that made its assumptions more accurate and because the theory was used in arbitrage.

In other words, the use of the formula itself made the formula more reliable. It was a self-fulfilling mathematical model, a piece of mathematics that reshaped the world to conform to its assumptions. Wow.
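(For reference, the formula at the center of the story prices a European call option. A minimal sketch, assuming scipy for the normal CDF; S is the stock price, K the strike, T the years to expiry, r the risk-free rate, and sigma the volatility:)

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# A $100 stock, $110 strike, one year out, 2% rates, 30% volatility:
print(black_scholes_call(S=100, K=110, T=1.0, r=0.02, sigma=0.3))
```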

(I found this interesting blog post that dives a bit deeper into the logic of a self-fulfilling equilibrium.)

If this feels eerie, it’s only because we’re forgetting how strange and self-referential the notion of predicting the markets really is: markets are hard to predict because they are predictions. This is a way that finance and economics are fundamentally unlike the natural sciences. In finance there is always the possibility that the scientist will influence the subject.

Black, Scholes, and Merton’s model did not describe an already existing world: when first formulated, its assumptions were quite unrealistic, and empirical prices differed systematically from the model. Gradually, though, the financial markets changed in a way that fitted the model. In part, this was the result of technological improvements to price dissemination and transaction processing. In part, it was the general liberalizing effect of free market economics. In part, however, it was the effect of option pricing theory itself. Pricing models came to shape the very way participants thought and talked about options, in particular via the key, entirely model‐dependent, notion of “implied volatility.” The use of the BSM model in arbitrage—particularly in “spreading”—had the effect of reducing discrepancies between empirical prices and the model, especially in the econometrically crucial matter of the flat‐line relationship between implied volatility and strike price.

To be clear, Ed Thorp used option pricing to make a killing before the markets were influenced by Black-Scholes. So it’s not like the formula created its own reality entirely. The claim can only be one of degrees — that the model became more reliable, that the markets grew more like what the model predicted. I am unable to evaluate the evidence on my own and haven’t dived deeper into any of this literature, but, huh, it makes you think, doesn’t it?

It reminds me of Ben Blum-Smith’s excellent post about voting theory, where he suggests that mathematicians have at times gotten lost in their models and believed in them too strongly, more because of their mathematical properties than for any of their use in application. But what if — only at times, and only by degrees — your mathematical model could be its own fulfillment by changing the world to more closely accord to its predictions? Wouldn’t that be something.

On Benoit Mandelbrot’s “New Methods in Statistical Economics” [Part 1]

I’ve been reading Mandelbrot’s 1963 paper “New Methods in Statistical Economics.” It’s one of his many papers from the 1960s where he argues that people should pay more attention to non-Normal distributions (see my post). This is my attempt to explain the first two sections of his piece.

(My main purpose is to clarify things for myself here, so if you have questions or can explain issues with my exposition I would VERY VERY MUCH appreciate hearing from you!)

I come to the paper with two interests. First, Mandelbrot loudly argued that the price fluctuations of many commodities and securities are best thought of as non-Gaussian. This is a paper where he makes that argument. That’s cool because finance is cool and important and interesting.

But the other reason is that just a few years after this, Mandelbrot became Mr. Fractal and published “How Long is the Coast of Britain?” This was the piece that popularized the notion of fractal dimension (something Mandelbrot calls a Trojan Horse, for the way it represented a safe, neutral topic that served as a vector for his dimensional ideas).

But Mandelbrot also said that parts of this paper — which really seems to have nothing to do with fractals — represent the seeds of his geometric ideas. So, for example, he says this in the appendix of the reprinted edition of this 1963 paper:

The many footnotes in the original, except one, were easily integrated in the text. But Footnote 4 did not fit, and it cried out to be emphasized, because it was an early allusion to the theme of self-similarity that came to dominate my life and led to fractals. This footnote 4 read as follows:

“The various criteria of invariance used by physicists are somewhat different in principle from those I propose in economics. For example, the principle of relativity was not introduced to explain a complicated empirical relation, such as scaling. I am indebted to Harrison White for suggesting that I should stress the nuances between my methods and those of physics.”

So what does that have to do with fractals? It’s a subtle thing! Let’s dive in to the paper to see where he’s coming from. I’ll quote (sometimes at length) and then comment below the text.

Mandelbrot:

The approach I use to study the scaling distribution arose from physics. It occurred to me that, before attempting to explain an empirical regularity, it would be a good idea to make sure that this empirical identity is “robust” enough to be actually observed. In other words, one must first examine carefully the conditions under which empirical observation is actually practiced. The scholar observes in order to describe but the entrepreneur observes in order to act. Both know that most economic quantities can hardly ever be observed directly and are usually altered by manipulations.

Here is a thought experiment involving biology, not physics, but I think it illustrates the point. Suppose that you have a theory about trees: that the distribution of tree heights follows a nice bell curve. 

You are a lazy scientist that doesn’t want to measure anything, let’s say. Anyway, the world is large and you didn’t get funding so you can’t travel. Fortunately, a lot of other people have already done measuring! You discover this by googling.

But suddenly you run into a problem. Sure, some people have directly measured the heights of trees. But other people measured the total height of forests. That stinks! Sure, you can divide a forest’s total height by its number of trees to get an average tree height to put into your tree data. But that’s going to be messy.

So it’s a challenge to deal with data coming from many different sources. And if the data needed to explore your theory only could come from measuring every tree (and if you really can’t do that) then you probably should work on a different problem.

In most practical problems, very little can be done about this difficulty, and one must be content with whatever approximation of the desired data is available. But the analytical formulas that express economic relationships cannot generally be expected to remain unaffected when the data are distorted by the transformations to which we shall turn momentarily. As a result, a relationship will be discovered more rapidly, and established with greater precision, if it “happens” to be invariant with respect to certain observational transformations. A relationship that is noninvariant will be discovered later and remain less firmly established. Three transformations are fundamental to varying extents.

So in natural science, OK, it’s a problem. But in economic problems Mandelbrot is saying it is a MAJOR problem.

Suppose you think that stock prices tend to move along randomly, with the changes plucked from a Log-normal distribution, which looks like this:

[plot: a log-normal distribution with median 3 and standard deviation 2]

OK, so you start looking for data. And some of your data comes from daily prices, some from weekly prices, others from yearly price variations. But there’s a problem: there’s no simple way to describe the relationship between the daily and the weekly data. You might want to simply add up a bunch of the daily data to compare it with the weekly data (or to take the weekly data and divide it by 5).

Well, you can’t. The sum of a bunch of log-normal distributions is not another log-normal distribution. So if your theory is true and the log-normal distribution is what guides the stock market’s random price changes, things are weird. If I understand correctly (not at all sure that I do), there are two reasons why this is weird. First, it means that your attempt to compare different sources of data is likely to be a mess, as there is no easy way to compare and combine the different sources. Second… shouldn’t the daily and weekly prices show the same distribution? Wouldn’t it be weird if daily and weekly prices were governed by different distributions?

There is something exceptionally nice about the idea that small-scale variation is reproduced at higher scales. People come in all shapes and sizes; the deepest levels of physical reality are governed by molecules randomly humming and bumping into each other. The idea that these scales are connected — that what we see is the cumulative result of the way the smallest things are — is highly attractive in both nature and economics. I think that this is a potential connection to the idea of fractals and self-similarity.

So, what are the ways that data might need to be stable when it’s transformed? Mandelbrot names three:

  1. Stable after being aggregated
  2. Stable after being mixed
  3. Stable when you only pay attention to the extremes

The first source of stability is the most important. Here’s an example Mandelbrot gives:

The distributions of aggregate incomes are better known than the distributions of each kind of income taken separately.

So if we are interested in the total distribution of income, we might be looking at a collection of different categories of income. Honestly, I can only guess what these categories might be. People sometimes talk about three forms of income (active, portfolio, passive) so maybe he means that? Or maybe he means that you have income distributions for each of the 50 US states, and you want to aggregate that into a national income distribution?

Anyway:

There is actually nothing new in my emphasis on invariance under aggregations. It is indeed well known that the sum of two independent Gaussian variables is itself Gaussian, which helps use Gaussian “error terms” in linear models. However, the common belief that only the Gaussian is invariant under aggregation is correct only if random variables with infinite population moments are excluded, which I shall not do (see Section V). Moreover, the Gaussian distribution is not invariant under our next two observational transformations.

This is indeed a very nice thing about the Gaussian distribution! You add them together, you get another one. Lots of little measurement mistakes add up to a Gaussian distribution of final measurements. (Lots of little differences at the cellular level lead to big differences in people.)
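That invariance is easy to poke at with a simulation. A quick sketch, assuming numpy (the means and deviations are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(0, 3, n)  # N(0, 3^2)
y = rng.normal(1, 4, n)  # N(1, 4^2)

# Sums of independent Gaussians are Gaussian: means add, variances add,
# so x + y should behave like N(1, 5^2).
s = x + y
print(s.mean().round(3), s.std().round(3))  # close to 1 and 5
```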

Mandelbrot is saying, you’d want this to remain true in your economic models. It would make the problems tractable to research; the prices and things really ought to add in this way too. And the vast majority of distributions (those “analytical formulas” cited above) do NOT have this property. But Mandelbrot is going to argue in favor of a favorite family of distributions that do have this additive property — along with invariance under the other two kinds of transformations.

These are the distributions called “stable distributions”: the family includes the Gaussian along with the heavy-tailed distributions Mandelbrot liked to call “stable Paretian.”

[plot: the Lévy distribution, one of the stable distributions]

More on the math of transforming various statistical distributions in Part 2.