For terrific math puzzles check out Erich Friedman’s website

In the last few days of camp this summer, a big folder of puzzles got posted in the hall. In the folder was a collection of Erich Friedman’s Hamiltonian Mazes puzzles.

[Screenshot: Erich Friedman’s Hamiltonian Mazes puzzles]

These puzzles are terrific (hard!), and they’re just one of the many, many different types of puzzles and problems on Erich’s site.

The whole site is a terrific snapshot of the old internet, with its generosity and quirk. I love the little personal nuggets Erich includes on his homepage (I’m nostalgic for homepages!):

I am a Libertarian and an Atheist. I consider myself a Feminist, and I’m a member of the ACLU. I have memorized the first 50 digits of Pi. I am an INTJ and I Juggle. I build card houses, and I’m interested in my Family Tree.

For the record: I am a Capitalist and a Religious Jew. I consider myself a Feminist, and I’m a patron of NYPL. I recently memorized the Largest Known Prime Number. I sometimes get Moody and Sad but I don’t Juggle. I’m interested in my Family.

I own the largest Puzzle Collection in Florida.

Really, lots of wonderful stuff on his site. In the past I’ve been more interested in theory-laden areas of math than puzzles and problems (I like my math how I like my philosophy) but I always have fun when I do find time for these sorts of things.

Like these square tilings. They’re gorgeous!

[Image: one of the square tilings]

Anyway, a wonderful website. Enjoy!

The opening chapter of the novel RED PLENTY is all about mathematical abstraction

RED PLENTY by Francis Spufford was so good. A great deal of the novel is about the frustrated attempt by Soviet economists and mathematicians to reform the Russian economy.

The book opens on Leonid Vitalevich, about to discover linear programming:

Today he had a request from the Plywood Trust of Leningrad. “Would the comrade professor, etc. etc. grateful for any insight, etc. etc., assurance of cordial greetings, etc. etc.” It was a work-assignment problem. The Plywood Trust produced umpteen different types of plywood using umpteen different machines, and they wanted to know how to direct their limited stock of raw materials to the different machines so as to get the best use out of it. Leonid Vitalevich had never been to the plywood factory, but he could picture it. It would be like all the other enterprises which had sprung up around the city over the last few years, multiplying like mushrooms after rain, putting chimneys at the end of streets, filling the air with smuts and the river with eddies of chemical dye…

To be honest, he couldn’t quite see what the machines were doing. He had only a vague idea of how plywood was actually manufactured. It somehow involved glue and sawdust, that was all he knew. It didn’t matter: for his purposes, he only needed to think of the machines as abstract propositions, each one effectively an equation in solid form, and immediately he read the letter he understood that the Plywood Trust, in its mathematical innocence, had sent him a classic example of a system of equations that was impossible to solve. There was a reason why factories around the world, capitalist or socialist, didn’t have a handy formula for these situations. It wasn’t just an oversight, something people hadn’t got around to yet. The quick way to deal with the Plywood Trust’s enquiry would have been to write a polite note explaining that the management had just requested the mathematical equivalent of a flying carpet or a genie in a bottle.

But he hadn’t written that note. Instead, casually at first, and then with sudden excitement, with the certainty that the hard light of genesis was shining in his head, brief, inexplicable, not to be resisted or questioned while it lasted, he had started to think. He had thought about ways to distinguish between better answers and worse answers to questions which had no right answer. He had seen a method which could do what the detective work of conventional algebra could not, in situations like the one the Plywood Trust described, and would trick impossibility into disclosing useful knowledge. The method depended on measuring each machine’s output of one plywood in terms of all the other plywoods it could have made. But again, he had no sense of plywood as a scratchy concrete stuff. That had faded into nothing, leaving only the pure pattern of the situation, of all situations in which you had to choose one action over another action. Time passed. The genesis light blinked off. It seemed to be night outside his office window. The grey blur of the winter daylight had vanished. The family would be worrying about him, starting to wonder if he had vanished too. He should go home. But he groped for his pen and began to write, fixing in extended, patient form – as patient as he could manage – what’d come to him first unseparated into stages, still fused into one intricate understanding, as if all its necessary component pieces were faces and angles of one complex polyhedron he’d been permitted to gaze at, while the light lasted; the amazing, ungentle light. He got down the basics, surprised to find as he drove the blue ink onward how rough and incomplete they seemed to be, spelt out, and what a lot of work remained.
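The problem Leonid Vitalevich is staring at — send limited raw material to different machines so as to get the most plywood out of it — is exactly what we now call a linear program. Here’s a minimal sketch of a toy version using scipy; the machines, output rates, and capacities are all invented for illustration, not taken from the novel.

```python
# A toy "Plywood Trust" allocation problem, solved as a linear program.
# All numbers are invented for illustration.
import numpy as np
from scipy.optimize import linprog

output_per_ton = np.array([3.0, 5.0, 4.0])   # plywood produced per ton of raw material, by machine
capacity = np.array([40.0, 25.0, 30.0])      # max tons of raw material each machine can process
total_material = 70.0                        # tons of raw material available

# linprog minimizes, so negate the objective to maximize total output.
c = -output_per_ton
A_ub = np.ones((1, 3))          # x1 + x2 + x3 <= total_material
b_ub = [total_material]
bounds = [(0, cap) for cap in capacity]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("tons sent to each machine:", result.x)
print("total plywood produced:", -result.fun)
```

The solver fills the most productive machines up to their capacities and sends the leftover material to the next best option — a “better answer” to a question that conventional algebra, looking for an exact solution, can’t even pose.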

It’s the optimism generated by ideas like these that are the true subject of the book, which is the story of the rise and fall of this optimism. The book points out that in a society governed by engineers it was mathematicians and abstract theoreticians that were the main sources of cultural idealism. (In contrast to a place like the US, he says, where lawyers rule the land and writers and artists are the main source of social idealism.)

If you know what happened to the Soviet economy you know the end of this story. The entire book presents itself as a kind of mathematical tragedy, the destruction of the idea of utopian abundance in a planned economy.

Straightedge and Compass

[Image: a set of 16th-century compasses]

John Donne’s A Valediction: Forbidding Mourning ends with several stanzas comparing two lovers to the arms of a geometric compass:

Our two souls therefore, which are one,
   Though I must go, endure not yet
A breach, but an expansion,
   Like gold to airy thinness beat.

If they be two, they are two so
   As stiff twin compasses are two;
Thy soul, the fixed foot, makes no show
   To move, but doth, if the other do.

And though it in the center sit,
   Yet when the other far doth roam,
It leans and hearkens after it,
   And grows erect, as that comes home.

Such wilt thou be to me, who must,
   Like th' other foot, obliquely run;
Thy firmness makes my circle just,
   And makes me end where I begun.

I came across this in Stephanie Burt’s book Don’t Read Poetry. She writes:

Each lover “leans and hearkens” after the other, as if Donne and his intimate friend, lover, or wife heard each other across the sea. The balanced eight-syllable lines, with their alternating rhymes, depend on each other too. Their closure seems “just” both mathematically and morally; in their mutual response, one or both of the lovers stands up, or becomes “erect” (yes it’s a penis joke).
If you yourself have ever felt unique or confused or confusing to others, especially in matters of the heart; if you have ever felt that your connection to somebody else–whether or not it is romantic, or exclusive, or recognized by the law–requires some explanation or deserves a passionate defense; if you have friends in a stubborn long-distance relationship; if you have been in any such situation, you might see Donne’s elaborate, challenging metaphors not as barriers to sincerity but as ways to achieve it, ways that take advantage of the tools–metaphor, indirection, complex syntax, rhythm–that we can find in poems. You might even, at least if you are looking for them, see in Donne’s great love poems, this one among them, defenses of what we now call queer relationships, relationships not sanctioned by custom or law, relationships most people in your own society can’t quite understand.

That image at the top, by the way, is a set of compasses held by the British Library from Donne’s time, the 16th century.

Mathematics that makes itself

Can something be true, just because you say it?

One example might be a promise. If you promise somebody that you’ll feed their cats…well, all of a sudden there is a promise. The act of promising creates the promise. There it is. It makes itself.

Anyway, maybe mathematics can sometimes pull off a trick like that. In 2003, MacKenzie and Millo argued that this is precisely what happened in financial markets with the Black-Scholes formula, a highly successful mathematical model used to find “correct” prices for a stock option:

Option pricing theory—a “crown jewel” of neoclassical economics—succeeded empirically not because it discovered preexisting price patterns but because markets changed in ways that made its assumptions more accurate and because the theory was used in arbitrage.

In other words, the use of the formula itself made the formula more reliable. It was a self-fulfilling mathematical model, a piece of mathematics that reshaped the world to conform to its assumptions. Wow.

(I found this interesting blog post that dives a bit deeper into the logic of a self-fulfilling equilibrium.)

If this feels eerie, it’s only because we’re forgetting how strange and self-referential the notion of predicting the markets really is: markets are hard to predict because they are predictions. This is a way that finance and economics are fundamentally unlike the natural sciences. In finance there is always the possibility that the scientist will influence the subject.

Black, Scholes, and Merton’s model did not describe an already existing world: when first formulated, its assumptions were quite unrealistic, and empirical prices differed systematically from the model. Gradually, though, the financial markets changed in a way that fitted the model. In part, this was the result of technological improvements to price dissemination and transaction processing. In part, it was the general liberalizing effect of free market economics. In part, however, it was the effect of option pricing theory itself. Pricing models came to shape the very way participants thought and talked about options, in particular via the key, entirely model‐dependent, notion of “implied volatility.” The use of the BSM model in arbitrage—particularly in “spreading”—had the effect of reducing discrepancies between empirical prices and the model, especially in the econometrically crucial matter of the flat‐line relationship between implied volatility and strike price.

To be clear, Ed Thorp used option pricing to make a killing before the markets were influenced by Black-Scholes. So it’s not like the formula created its own reality entirely. The claim can only be one of degrees — that the model became more reliable, that the markets grew more like what the model predicted. I can’t evaluate the evidence on my own and haven’t dived deeper into any of this literature, but, huh, it makes you think, doesn’t it?
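To make the “entirely model-dependent” notion of implied volatility concrete: the Black-Scholes formula maps a volatility number to an option price, and implied volatility is just that map run backwards from a market price. Here’s a minimal sketch; the particular numbers (spot, strike, rate, market price) are made up for illustration.

```python
# Black-Scholes call price, and the "implied volatility" backed out of a market price.
# Parameter values are arbitrary, chosen only for illustration.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    """Find the sigma that makes the model price match the observed market price."""
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - market_price, 1e-6, 5.0)

S, K, T, r = 100.0, 105.0, 0.5, 0.02    # spot, strike, time to expiry (years), risk-free rate
print(bs_call(S, K, T, r, sigma=0.2))   # model price at 20% volatility
print(implied_vol(6.50, S, K, T, r))    # volatility "implied" by a market price of 6.50
```

The market price doesn’t “contain” a volatility; the number the second call returns only exists relative to the model, which is part of how the model could come to organize how traders talked and acted.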

It reminds me of Ben Blum-Smith’s excellent post about voting theory, where he suggests that mathematicians have at times gotten lost in their models and believed in them too strongly, more because of their mathematical properties than for any of their use in application. But what if — only at times, and only by degrees — your mathematical model could be its own fulfillment by changing the world to more closely accord to its predictions? Wouldn’t that be something.

On Benoit Mandelbrot’s “New Methods in Statistical Economics” [Part 1]

I’ve been reading Mandelbrot’s 1963 paper “New Methods in Statistical Economics.” It’s one of his many papers from the 1960s where he argues that people should pay more attention to non-Normal distributions (see my post). This is my attempt to explain the first two sections of his piece.

(My main purpose is to clarify things for myself here, so if you have questions or can explain issues with my exposition I would VERY MUCH appreciate hearing from you!)

I come to the paper with two interests. First, Mandelbrot loudly argued that the price fluctuations of many commodities and securities are best thought of as non-Gaussian. This is a paper where he makes that argument. That’s cool because finance is cool and important and interesting.

But the other reason is because just a few years after this, Mandelbrot became Mr. Fractal and published “How Long is the Coast of Britain?” This was the piece that popularized the notion of fractal dimension (something Mandelbrot calls a Trojan Horse for the way it represented a safe, neutral topic that served as a vector for his dimensional ideas).

But Mandelbrot also said that parts of this paper — which really seems to have nothing to do with fractals — represent the seeds of his geometric ideas. So, for example, he says this in the appendix of the reprinted edition of this 1963 paper:

The many footnotes in the original, except one, were easily integrated in the text. But Footnote 4 did not fit, and it cried out to be emphasized, because it was an early allusion to the theme of self-similarity that came to dominate my life and led to fractals. This footnote 4 read as follows:

“The various criteria of invariance used by physicists are somewhat different in principle from those I propose in economics. For example, the principle of relativity was not introduced to explain a complicated empirical relation, such as scaling. I am indebted to Harrison White for suggesting that I should stress the nuances between my methods and those of physics.”

So what does that have to do with fractals? It’s a subtle thing! Let’s dive in to the paper to see where he’s coming from. I’ll quote (sometimes at length) and then comment below the text.

Mandelbrot:

The approach I use to study the scaling distribution arose from physics. It occurred to me that, before attempting to explain an empirical regularity, it would be a good idea to make sure that this empirical identity is “robust” enough to be actually observed. In other words, one must first examine carefully the conditions under which empirical observation is actually practiced. The scholar observes in order to describe but the entrepreneur observes in order to act. Both know that most economic quantities can hardly ever be observed directly and are usually altered by manipulations.

Here is a thought experiment involving biology, not physics, but I think it illustrates the point. Suppose that you have a theory about trees: that the distribution of tree heights follows a nice bell curve. 

Let’s say you are a lazy scientist who doesn’t want to measure anything. Anyway, the world is large and you didn’t get funding, so you can’t travel. Fortunately, a lot of other people have already done the measuring! You discover this by googling.

But suddenly you run into a problem. Sure, some people have directly measured the heights of trees. But other people measured the total height of forests. That stinks! Sure, you can divide each forest’s total height by its number of trees to get an average tree height to put into your tree data. But that’s going to be messy.

So it’s a challenge to deal with data coming from many different sources. And if the data needed to explore your theory only could come from measuring every tree (and if you really can’t do that) then you probably should work on a different problem.
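As a toy version of that messiness: individual trees have some spread, but per-forest averages have far less spread, so pooling forest-average numbers with directly measured trees misstates the very distribution you set out to study. A minimal simulation, with all numbers invented:

```python
# Why mixing per-tree data with per-forest averages is messy: averages of many
# trees have much less spread than individual trees. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
true_heights = rng.normal(20, 4, size=100_000)         # "true" tree heights, in meters

direct = rng.choice(true_heights, size=500)            # trees you measured directly
forest_avgs = true_heights[:50_000].reshape(500, 100).mean(axis=1)  # 500 forests of 100 trees, averaged

print("std of directly measured trees:", direct.std())   # roughly 4
print("std of forest-average 'trees':", forest_avgs.std())  # roughly 0.4
# Pooling the two as if they were the same kind of measurement badly
# misrepresents the spread of tree heights.
```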

In most practical problems, very little can be done about this difficulty, and one must be content with whatever approximation of the desired data is available. But the analytical formulas that express economic relationships cannot generally be expected to remain unaffected when the data are distorted by the transformations to which we shall turn momentarily. As a result, a relationship will be discovered more rapidly, and established with greater precision, if it “happens” to be invariant with respect to certain observational transformations. A relationship that is noninvariant will be discovered later and remain less firmly established. Three transformations are fundamental to varying extents.

So in natural science, OK, it’s a problem. But in economic problems Mandelbrot is saying it is a MAJOR problem.

Suppose you think that stock prices tend to move along randomly, with the changes plucked from a Log-normal distribution, which looks like this:

[Figure: a log-normal distribution with median 3 and standard deviation 2]

OK, so you start looking for data. And some of your data comes from daily prices, some from weekly prices, others from yearly price variations. But there’s a problem: there’s no simple way to describe the relationship between the daily and the weekly data. You might want to simply add up a bunch of the daily data to compare it with the weekly data (or to take the weekly data and divide it by 5).

Well, you can’t. The sum of a bunch of log-normal distributions is not another log-normal distribution. So if your theory is true and the log-normal distribution is what guides the stock market’s random price changes, things are weird. If I understand correctly (not at all sure that I do) then there are two reasons why this is weird. First, it means that your attempt to compare different sources of data is likely to be a mess, as there is no easy way to compare and combine the different sources. Second…shouldn’t the daily and weekly prices show the same distribution? Wouldn’t it be weird if daily and weekly prices were governed by different distributions?
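A quick way to see the contrast is to simulate it: sum five “daily” draws into a “week” and check whether the shape of the distribution survives. Skewness is a convenient check, since it doesn’t change when you merely shift or rescale a distribution. A sketch, with arbitrary parameters:

```python
# Aggregation: Gaussian daily changes stay Gaussian when summed into weeks,
# but log-normal daily changes do not keep their shape. A quick check via
# skewness; parameter choices are arbitrary.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n_weeks, days_per_week = 200_000, 5

gauss_daily = rng.normal(0, 1, size=(n_weeks, days_per_week))
logn_daily = rng.lognormal(mean=0, sigma=1, size=(n_weeks, days_per_week))

print("Gaussian   daily skew:", skew(gauss_daily.ravel()), " weekly skew:", skew(gauss_daily.sum(axis=1)))
print("Log-normal daily skew:", skew(logn_daily.ravel()), " weekly skew:", skew(logn_daily.sum(axis=1)))
# The Gaussian skewness stays near 0 at both scales (same family, just rescaled);
# the log-normal skewness drops sharply when aggregated, so the weekly sums are
# not a shifted-and-rescaled copy of the daily distribution.
```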

There is something exceptionally nice about the idea that small-scale variation is reproduced at higher scales. People come in all shapes and sizes; the deepest levels of physical reality are governed by molecules randomly humming and bumping into each other. The idea that these scales are connected — that what we see is the cumulative result of the way the smallest things are — is highly attractive in both nature and economics. I think that this is a potential connection to the idea of fractals and self-similarity.

So, what are the ways that data might need to be stable when it’s transformed? Mandelbrot names three:

  1. Stable after being aggregated
  2. Stable after being mixed
  3. Stable when you only pay attention to the extremes

The first source of stability is the most important. Here’s an example Mandelbrot gives:

The distributions of aggregate incomes are better known than the distributions of each kind of income taken separately.

So if we are interested in the total distribution of income, we might be looking at a collection of different categories of income. Honestly, I can only guess what these categories might be. People sometimes talk about three forms of income (active, portfolio, passive) so maybe he means that? Or maybe he means that you have income distributions for each of the 50 US states, and you want to aggregate that into a national income distribution?

Anyway:

There is actually nothing new in my emphasis on invariance under aggregations. It is indeed well known that the sum of two independent Gaussian variables is itself Gaussian, which helps use Gaussian “error terms” in linear models. However, the common belief that only the Gaussian is invariant under aggregation is correct only if random variables with infinite population moments are excluded, which I shall not do (see Section V). Moreover, the Gaussian distribution is not invariant under our next two observational transformations.

This is indeed a very nice thing about the Gaussian distribution! You add them together, you get another one. Lots of little measurement mistakes add up to a Gaussian distribution of final measurements. (Lots of little differences at the cellular level lead to big differences in people.)

Mandelbrot is saying, you’d want this to remain true in your economic models. It would make the problems tractable to research; the prices and things really ought to add in this way too. And the vast majority of distributions (those analytical formulas cited above) do NOT have this property. But Mandelbrot is going to argue in favor of a favorite family of distributions that does have this additive property — along with invariance under the other two kinds of transformations.

These are the distributions called “stable distributions” — Mandelbrot’s “stable Paretian” distributions — a family that includes the Gaussian as a member and whose non-Gaussian members have fat, Pareto-like tails.
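Here’s a small sketch of what “stable” buys you, using scipy’s levy_stable (the values of α and n are arbitrary choices): sum n independent draws, and the result is distributed like a single draw scaled by n^(1/α).

```python
# Stability under aggregation: if X1..Xn are iid symmetric alpha-stable
# (beta=0, loc=0), their sum is distributed like n**(1/alpha) * X1.
# Quick quantile check; alpha and n are arbitrary choices.
import numpy as np
from scipy.stats import levy_stable

alpha, n = 1.5, 5
rng = np.random.default_rng(2)

x = levy_stable.rvs(alpha, 0, size=(50_000, n), random_state=rng)
sums = x.sum(axis=1)                     # sums of n stable draws
rescaled = n ** (1 / alpha) * x[:, 0]    # a single draw, rescaled

qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(sums, qs))
print(np.quantile(rescaled, qs))         # the two sets of quantiles roughly agree
# With alpha = 2 this is just the Gaussian; with alpha < 2 the tails are fat,
# but the family is still closed under addition.
```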

[Figure: probability density functions of Lévy distributions]

More on the math of transforming various statistical distributions in Part 2.

The Efficient Market Hypothesis can deal with fat tails

Perhaps it’s just me that has been getting tangled up in my reading, but it seems to me that there’s confusion out there between two different assumptions underlying financial models.

The first assumption is that when prices of a stock (or whatever) change, they go up and down randomly, with the size and direction of the change plucked from a Gaussian distribution, i.e. a bell curve.

The second assumption is the Efficient Market Hypothesis — that there really isn’t any way to “beat” the market with expert knowledge or deep understanding of industry or the economy or whatever, because everybody knows all that stuff and it’s already incorporated into the price of the stock (or whatever).

Part of what’s confusing is that both of these positions are critiqued by many of the same people. Benoit Mandelbrot, for one. He spent the 1960s taking on the first assumption in a series of papers where he built the case that scientists were too much in love with the Gaussian distribution. He insisted that many things instead looked like their changes were plucked from Pareto distributions. These are fat-tailed compared to the bell curve — the things at the extreme are more likely to happen than the Gaussian model would predict.

[Figure: probability density functions of the Pareto distribution]

This means that if you’ve built your theory on how to manage risk on a Gaussian model, you’re going to systematically underestimate the true risk. In particular, you won’t plan carefully enough for true disasters if you really start believing in your model.
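To put rough numbers on “underestimate the true risk,” here’s a quick comparison of tail probabilities under a Gaussian and under a fat-tailed stand-in (a Student-t with 3 degrees of freedom, rescaled to the same standard deviation — not Mandelbrot’s model, just a convenient illustration):

```python
# How much a Gaussian model understates extreme moves: probability of a move
# beyond k standard deviations under a normal vs a fat-tailed Student-t
# (a common stand-in for fat tails; not Mandelbrot's exact model).
import numpy as np
from scipy.stats import norm, t

df = 3                                  # degrees of freedom; heavier tails than the normal
scale = 1 / np.sqrt(df / (df - 2))      # rescale so the t also has standard deviation 1

for k in [3, 4, 6]:
    p_norm = 2 * norm.sf(k)             # two-sided tail probability, Gaussian
    p_fat = 2 * t.sf(k / scale, df)     # same threshold under the fat-tailed model
    print(f"{k}-sigma move:  normal {p_norm:.2e},  fat-tailed {p_fat:.2e}")
# Under the Gaussian a 6-sigma move is essentially "never"; under the
# fat-tailed model it is rare but very much on the table.
```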

(Of course, you might keep the Gaussian assumption for your metric while also keeping in mind your model’s limitation. Somehow, that doesn’t always seem to happen when these models are deployed in organizations. I mean I don’t have first hand knowledge of that previous sentence, but it sure seems that way from what I’ve read.)

But if the Gaussian models are wrong, does that implicate the Efficient Market Hypothesis? Some people write that way, but I don’t think it does. And the reason why is because the guy who came up with EMH — Eugene Fama — was a big hype man for Mandelbrot.

The Efficient Market Hypothesis appears in a big paper by Fama — “The Behavior of Stock-Market Prices.”  Here’s a note from the first page:

Many of the ideas in this paper arose out of the work of Benoit Mandelbrot of the IBM Watson Research Center. I have profited not only from the written work of Dr. Mandelbrot but also from many invaluable discussion sessions.

He’s not kidding. A huge cornerstone of the paper is basically a retread of Mandelbrot’s work developing the theory of Pareto distributions, contra the Gaussian assumption. “The Gaussian hypothesis was not seriously questioned until recently when the work of Benoit Mandelbrot first began to appear,” he writes.

If I understand things correctly, the value of Mandelbrot’s work is that it allows Fama to claim that stock-market prices truly are a random walk. Whereas there are discrepancies if you assume this random walk is plucking change from a Gaussian distribution, Fama bolsters the “random walk” hypothesis using Mandelbrot’s work.

(By the way, a random walk using one of the distributions Mandelbrot supported is called a Levy Flight. Paul Levy was Mandelbrot’s advisor. A great site to learn more about Levy Flights is this one — thanks to Mike Lawler for telling me about it. I tried to replicate a Levy Flight in P5 here, if you want to tinker with the code.)
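For readers who don’t want to open P5, here’s a minimal Python version of the same idea — steps in uniformly random directions with heavy-tailed (Pareto) step lengths; all parameters are arbitrary:

```python
# A minimal Levy flight: random directions, step lengths drawn from a
# heavy-tailed Pareto distribution. Parameters are arbitrary.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n_steps = 2000

angles = rng.uniform(0, 2 * np.pi, n_steps)
lengths = rng.pareto(1.5, n_steps) + 1          # heavy-tailed step lengths, minimum 1
steps = lengths[:, None] * np.column_stack([np.cos(angles), np.sin(angles)])
path = np.cumsum(steps, axis=0)                 # the walker's position over time

plt.plot(path[:, 0], path[:, 1], linewidth=0.5)
plt.title("Levy flight: mostly small steps, occasionally a huge jump")
plt.show()
```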

So one of Mandelbrot’s main intellectual influences was on the originator of the Efficient Market Hypothesis. Clearly a belief that stocks are a random walk can coexist with a belief that the distribution guiding that walk is non-Gaussian.

Just as clearly, though, Mandelbrot would not agree with Fama, who represents a kind of establishment, conventional-wisdom position in the financial world. There is a story here that I’m missing, one that I would very much like to know. Mandelbrot has a reputation for being surly, with an enormous ego. (One joke goes that a ‘mandelbrot’ can be used as a unit of ego.) So when Mandelbrot critiques EMH and conventional finance models (like Markowitz’s theory of risk management) … there must be a story of a falling out, of tension with Fama. Right?

In any event, there is no necessary connection between those two assumptions, and in discussing the sort of thinking that results in financial catastrophe they really should be treated as two very separate things.

Q&A with Eugene Fama on the Bell Curve in finance, posted without commentary

It would be very enlightening if you would comment on the Nassim Nicholas Taleb (“The Black Swan”) attack on the use of Gaussian (normal bell curve) mathematics as the foundation of finance. As you may know, Taleb is a fan of Mandelbrot, whose mathematics account for fat tails. He argues that the bell curve doesn’t reflect reality. He is also quite critical of academics who teach modern portfolio theory because it is based on the assumption that returns are normally distributed. Doesn’t all this imply that academics should start doing reality-based research?

EFF: Half of my 1964 Ph.D. thesis is tests of market efficiency, and the other half is a detailed examination of the distribution of stock returns. Mandelbrot is right. The distribution is fat-tailed relative to the normal distribution. In other words, extreme returns occur much more often than would be expected if returns were normal.

There was lots of interest in this issue for about ten years. Then academics lost interest. The reason is that most of what we do in terms of portfolio theory and models of risk and expected return works for Mandelbrot’s stable distribution class, as well as for the normal distribution (which is in fact a member of the stable class). For passive investors, none of this matters, beyond being aware that outlier returns are more common than would be expected if return distributions were normal.

For other applications, however, the difference can be critical. Risk management by financial institutions is a good example. For example, portfolio insurance, which was the rage in the early 1980s, bombed in the crash of October 1987, because this was an event that was inconceivable in their normality based return model. The normality assumption is also likely to be a serious problem in various kinds of derivatives, where lots of the price is due to the probability of extreme events. For example, news story accounts suggest that AIG blew up because its risk model for credit default swaps did not properly account for outlier events.

KRF: I agree with Gene, but want to make another point that he is appropriately reluctant to make. Taleb is generally correct about the importance of outliers, but he gets carried away in his criticism of academic research. There are lots of academics who are well aware of this issue and consider it seriously when doing empirical research. Those of us who used Gene’s textbook in our first finance course have been concerned with this fat-tail problem our whole careers. Most of the empirical studies in finance use simple and robust techniques that do not make precise distributional assumptions, and Gene can take much of the credit for this as well, whether through his feedback in seminars, suggestions on written work, comments in referee reports, or the advice he has given his many Ph.D. students over the years.

The possibility of extreme outcomes is certainly important for things like risk management, option pricing, and many complicated “arbitrage” strategies. Investors should also recognize the potential effect of outliers when assessing the distribution of future returns on their portfolios. None of this implies, however, that the existence of outliers undermines modern portfolio theory or asset pricing theory. And the central implications of modern portfolio theory and asset pricing—the benefits of diversification and the trade-off between risk and return—remain valid under any reasonable distribution of returns.

Source

No, I don’t think the job market should decide whether or not we teach math.

I understand where the confusion is coming from, but I don’t think school should just reflect job market needs.

I also don’t think that science is a more meaningful context for math.

I think pure math can be meaningful, and it’s easier to apply the math to new situations if it’s mastered in a more abstract, contextless form. (Of course, that abstraction needs to be meaningful to students.)

What I do believe is that high school math goes beyond the mathematics that is meaningful and broadly useful for everybody to learn. With the exception of something like exponential functions (which in the US appears in Algebra 1), I think we’re requiring too much. And there are two — well, three — reasons that you’d require more math than everybody needs.

First, because the content you’re requiring is awesome and deep and wonderful.

Second, because it keeps open career pathways for more students who would not elect to take the courses if they were optional.

While math is wonderful, I don’t think high school students are having awesome, deep, and wonderful experiences in their coursework. I don’t actually think this is intrinsic to the content of algebra — I’m a high school math teacher, after all! I think it’s more that we’re requiring too much learning to happen, with all previous learning as a prerequisite; if you’re behind by 9th grade, it usually gets worse.

So let’s cut the math! Let’s change the curriculum! Yes, but then will students still have every opportunity open at the end of high school?

Really, the idea that you can keep every option open to every student the way we currently school our children is a myth. But it’s one that is widely held and makes it hard to make any change in the curriculum — learning about rearranging rational expressions is a universal right! You hear these things.

That was when I said to myself, OK, let’s keep those options open for kids. Instead of requiring math, let’s require the subjects that we are supposedly preparing them for careers in. Let those courses teach the math needed for their subjects as units, or even as the first half of the year if that’s what’s best. True, science teachers these days don’t teach math very well…but they also aren’t charged with it. I’m sure they could figure it out as well as high school math teachers can.

Why would this be a better experience for students? Well, maybe we could pare down the curriculum while simultaneously keeping those pathways open. And while math is beautiful and wonderful, so is science. And while students certainly deserve the opportunity to experience the kind of thinking that makes mathematics unique…come on, they’ve already got 8 or 9 years of school to study pure math. At some point you have to stop requiring it!

All these speculative ideas are tough to judge because they’re all fantasy. Which fantasy is more realistic? I don’t know. Maybe it’s better to fantasize that we’ll replace Algebra 2 with Graph Theory, or that we’ll drastically cut the algebra requirements while beefing up support systems so that every student can succeed in the algebra sequence. Maybe we should require coding or statistics courses.

(Honestly, requiring statistics or coding as math classes might be a fantastic compromise in our current system, a little bit less fantastical. For me the key question is how dependent success in a course is on success in previous coursework. Students, especially in high school, would benefit from more courses that represent something like a fresh start. Otherwise the failures just compound.)

But, probably, none of this will happen, and the reason is because the real role that math is playing in high school isn’t about the importance of the subject as a humanity or its value on the job market — it’s a third reason: sorting students and signalling their academic potential to universities. And that should be a major concern for mathematicians and math educators. While I do love math and would love to share more of what I love with students, if cutting math meant I could lower the stakes for math students in high school in a significant way, I would do it with no hesitation.

Not that the plan I’m articulating in a few vague sentences is anything like a solution. Other people have other ideas. But I hope we can get on the same page about what the problems are.

The Mathematical Modelers’ Hippocratic Oath

The Financial Modelers’ Manifesto was a proposal for more responsibility in risk management and quantitative finance written by financial engineers Emanuel Derman and Paul Wilmott. The manifesto includes a Modelers’ Hippocratic Oath.

The Modelers’ Hippocratic Oath

I will remember that I didn’t make the world, and it doesn’t satisfy my equations.

Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.

I will never sacrifice reality for elegance without explaining why I have done so.

Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.

I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.