Equations and Equivalence in 3rd Grade

So I was stupidly mouthing off online to some incredibly serious researchers about equivalence and the equals sign and how it’s not that hard of a topic to teach when — OOPS! — my actual teaching got in the way.

I had done the right thing. In my 3rd Grade class I wanted to introduce “?” as a symbol for an unknown so I put up some equations on the board:

15 = ? x 5

3 + ? = 10

10 + 3 = 11 + ?

And I was neither shocked, nor did I blink, when a kid told me that the last equation didn’t make any sense. Ah, I thought, time to nip this in the bud.

I listened to the child and said I understood, but that I would like to share how it does make sense. I asked whether anyone knew what the equals sign meant, and one kid said “makes” and the next said “the same as.” Wonderful, I said, because that last equation is just saying the left side equals the same as the right side. So what number would make them the same? 2? Fantastic, let’s move on.

Then, the next day, I put a problem on the board:

5 + 10 = ___ + 5

And you know what comes next, right? Consensus around the room is that the blank is 15. “But didn’t we say yesterday that the equals sign means ‘the same as’?” I asked. A kid raised her hand and explained that it did mean that, but the answer should still be 15. Here’s how she wanted us to read the equation, as a run-on:

(5 + 10 = 15) + 5

Two things were now clear to me. First, that my pride in having clearly and decisively taken care of this issue was misguided. I needed to do more and dig into this more deeply.

The second thing: isn’t this interesting? You can have an entirely correct understanding of the equals sign and still make the same “classic” mistakes when interpreting an actual equation.

I think this helps clear up some things that I was muddling in my head. When people talk about the need for kids to have a strong understanding of equivalence they really are talking about quite a few different things. Here are the two that came up above:

  • The particular meaning of the equals sign (and this is supposed to entail that an equation can be written left-to-right or right-to-left, i.e. it’s symmetric)
  • The conventional ways of writing equations (e.g. no run-ons, can include multiple operations and terms on each side)

But then this is just the beginning, because frequently people talk about a bunch of other things when talking about ‘equivalence.’ Here are just a few:

  • You can do the same operations to each side (famously useful for solving equations)
  • You can manipulate like terms on one side of an equation to create a true equation (10 + 5 can be turned into 9 + 6 can be turned into 8 + 7; 8 x 7 can be turned into 4 x 14; 3(x + 4) can be turned into 3x + 12, etc.)

When a kid can’t solve 5 + 10 = ___ + 9 correctly or easily using “relational understanding,” this is frequently blamed on a kid’s understanding of the equals sign, equivalence or the particular ways of relating 5 + 10 to __ + 9. But now I’m seeing clearly that these are separate things, and some tend to be easier for kids than others.

So, this brings us to the follow-up lesson with my 3rd Graders.

I started as I usually do in this situation, by avoiding the equals sign. I find that a double arrow serves this purpose well, so I put up an arrow relationship on the board:

2 x 6 <–> 8 + 4

I pointed out that 2 x 6 makes 12 and so does 8 + 4. Could the kids come up with other things like this, I asked?

They did. I didn’t grab a picture, but I was grateful that all sorts of things came up. Kids were mixing operations nicely, like 12 – 2 <–> 5 + 5. In general it felt like this was not hard; kids knew exactly what I meant and could generate lots of ideas.

My next move was to pause and introduce the equals sign into this conversation. Would anyone mind if I replaced that double arrow with an equals sign? This is just what the equals sign means, anyway. No problem, that went fine also.

Kids were even introducing great examples like 1 x 2 = 2 x 1, or 12 = 12. Wonderful.

Then, I introduced the task of the day, in the style of Open Middle (R) (TM) (C):

[photo: the handwritten task]

Yeah, I quickly handwrote it with a sharpie. It was that sort of day.

I carefully explained the constraints. 10 – 2 = 7 + 1 was a true equation, but wouldn’t work for this puzzle. Neither would 15 – 5 = 6 + 4. And then I gave the kids time to search for solutions, as many as they could find.

Bla bla, most kids were successful, others had trouble getting started but everyone eventually had some success. Here are some pictures of students who make me look good:

[photos of student work]

Here is a picture of a student who struggled, but eventually found a solution:

[photo of student work]

Here is a picture of the student from the class I was most concerned with. You can see the marks along his page as he works through things like 12 – 9, trying out different numbers to subtract from 12. I think there might have been some multiplying happening on the right side, not sure why. Anyway:

[photo of student work]

The thing is that just the day before, this last student had almost broken down in frustration over his inability to make sense of these “unconventional” equations. So this makes me look kind of great — I did it! I taught him equivalence, in roughly a day. Tada.

But I don’t think that this is what’s going on. The notion of two different things being equal, that was not hard for him. In fact I don’t think that notion is difficult for very many students at all — kids know that different additions equal 10. And it was not especially difficult for this kid to merge that notion of equivalence with the equals sign. Like, no, he did not think that this was what the equals sign meant, but whatever, that was just on the basis of what he had thought before. It’s just a convention. I told him the equals sign meant something else, OK, sure. Not so bad either.

The part that was very difficult for this student, however, was subtracting stuff from 12.

Now this is what I think people are talking about when they talk about “relational understanding.” It’s true — I really wish this student knew that 10 + 2 <–> 9 + 3, and so when he saw 12 he could associate that with 10 + 2 and therefore quickly move to 9 + 3 and realize that 12 – 3 = 9. I mean, that’s what a lot of my 3rd Graders do, in not so many words. That is very useful.

So to wrap things up here are some questions and some provisional answers:

Q: Is it hard to teach or learn the concept of equivalence?

A: No.

Q: Is it hard to teach the equals sign and its meaning?

A: It’s harder, but this is all conventional. If you introduce a new symbol like “<–>” I don’t think kids trip up as much. They sometimes have to unlearn what they’ve inferred from prior experiences that were too limited (i.e. always putting the result on the right side). So you’re not doing kids any favors by limiting them that way; it’s good to put equations in a lot of different forms, pretty much as soon as kids see equations for the first time in K or 1st Grade. I mean, why not?

Q: If kids don’t learn how equations conventionally work will that trip them up later in algebra?

A: Yes. But all of my kids find adding and subtracting itself to be more difficult than understanding these conventions. My sense is that you don’t need years to get used to how equations work. You need, like, an hour or two to introduce it.

Q: Does this stuff need to be taught early? Is algebra too late to learn how equations work?

A: I think kids should learn it early, but it’s not too late AT ALL if they don’t.

I have taught algebra classes in 8th and 9th Grade where students have been confused about how equations work. My memories are that this was annoying because I realized too late what was going on and had to backtrack. But based on teaching this to younger kids, I can’t imagine that it’s too late to teach it to older students.

I guess it could be possible that over the years it gets harder to shake students out of their more limited understanding of equations because they reinforce their theory about equations and the equals symbol. I don’t know.

I see no reason not to teach this early, but I think it’s important to keep in mind that in middle school we tell kids that sometimes subtracting a number makes it bigger and that negative exponents exist. Kids can learn new things in later years too.

Q: So what makes it so hard for young kids to handle equations like 5 + 10 = 6 + __?

A: It’s definitely true that kids who don’t understand how to read this sort of equation will be unable to engage at all. But the relational thinking itself is the hardest part to teach and learn, it seems to me.

Here is a thought experiment. What if you had a school or curriculum that only used equal signs and equations in the boring, limited way of “5 + 10 = ?” and “6 x ? = 12” throughout school, but at the same time taught relational thinking using <–> and other terminology in a deep and effective way? And then in 8th Grade they have a few lessons teaching the “new” way of making sense of the equals sign? Would that be a big deal? I don’t know, I don’t think so.

Q: There is evidence that suggests learning several of the things above helps kids succeed more in later algebra. Your thoughts?

A: I don’t know! It seems to me that if something makes a difference for later algebra, it has to be either the concept of equivalence, the conventions of equations, or relational thinking.

I think the concept of equivalence is something every kid knows. The conventions of equations aren’t that hard to learn, I think, but they really only make sense if you connect equations to the concept of equivalence. The concept of equivalence explains why equations have certain conventions. So I get why those two go together. But could that be enough to help students with later algebra experiences? Maybe. Is it because algebra teachers aren’t teaching the conventions of equations in their classes? Would there still be an advantage from early equation experience if algebra teachers taught it?

In the end, it doesn’t matter much, because young kids can learn it, so why not teach it to them? Can’t hurt, only costs you an hour or two.

But the big other thing is relational thinking. Now, there is no reason, I think, why relational thinking has to take place in the context of equations. You COULD use other symbols like double arrows or whatever. But math already has this symbol for equivalence, so you might as well teach relational thinking about addition/subtraction/multiplication/division in the context of equations. And that’s some really tricky, really important mathematics to learn. A kid being able to understand that 2 x 14 is equal to 4 x 7 is important stuff.

It’s important for so many reasons, for practically every reason that makes arithmetic the foundation of algebra. I can’t list them now — but it goes beyond equations, is my point. Relational thinking (e.g. how various additions relate to each other) is huge and hugely important.

Would understanding the conventions of the equals sign and equations make a difference in the absence of experiences that help kids gain relational understanding? Do some kids start making connections on their own when they learn ways of writing equations? Does relational understanding instruction simply fail because kids don’t understand what the equations their teachers are using mean?

I don’t know.

High, Holy Days: A Playlist

A lot of you have been asking where my Elul/Rosh HaShana/Aseret Yemei Teshuva/Yom Kippur playlist is. “Is it ready yet?” people ask. “You promised.”

Well, it’s not quite done. I’m still tinkering with it. But it’s as ready as it’s ever going to be. Here it is, on Spotify.

[screenshots of the playlist]

You want me to what? Explain it? That defeats the whole point of a playlist. It would be reductive to go song by song and explain its presence and purpose. I mean, seriously.

Still, there is what to say.

We open as the month of Elul does, with the arrival of the Infanta heralded by the shofar. The Queen is in the field, Elul is in the sky, and the question is what you’re going to do about it. Mad Men, indeed.

It’s time to start asking the big questions. Turn off your mind, relax — but not too much. You need to rethink things, to pay attention. It is not dying, but it’s not not dying either. Because there are certain things to keep in mind when you hear the shofar. Everybody here is a cloud. Don’t forget. If I’m alive, next year.

Sinnerman, Troubleman, Man, it doesn’t matter what you’re called. It matters what you are.

Here’s the deal about “Who By Fire?”: I don’t like any of the versions on Spotify. This is one of those times when a song has a single correct version, and it’s the version with the saxophone.

The whole thing doesn’t work with that Mediterranean guitar intro/accompaniment nearly as well. I think it’s just a fundamental difference between guitar and sax. Sax is a horn, guitar is a string instrument. Your guitar can do a lot of things (e.g. it can weep) but it’s not powered by breath and it never will be. To put it another way, guitar is your siddur but sax is your shofar.

This is the version that should be on the playlist. If it were a mixtape I’d have ripped it, etc.

After this you get to go down to the river for three songs. This is tashlich, but it can also be the mikveh before Yom Kippur if you want to keep things moving roughly chronologically. (I originally tried to make this thing match the chronology more closely. It was a mess.)

From “Get By”:

This morning, I woke up
Feeling brand new and I jumped up
Feeling my highs, and my lows
In my soul, and my goals
Just to stop smokin’, and stop drinkin’
And I’ve been thinkin’ – I’ve got my reasons
Just to get by

And you get one last wordless prayer. Then, the whole thing ends, and we’re on to a new year. Will there be feasting and dancing in Jerusalem this year?

But pat yourself on the back for a second — you have made it to a new year, this year.

May you be written in the book of life. Ketivah v’chatimah tovah.

Dear Aunt Sally: The exit tickets are all over the place!

Dear Aunt Sally,

My students had a hard time with my lesson on solving equations. How do I know? I gave a short “exit ticket” and the results were…mixed. There were three questions:

  • 3x + 5x = 56
  • 3x + 7 – 5x = 45
  • 6x – 5 + 11x + 17 = 63.

The main issue is combining like terms. (I’ve included some pictures of student work for you, Aunt Sally.)

[photos of student work]

That said, some students in each class definitely did understand the material.

[photo of student work]

One more issue. In most of my classes, students didn’t struggle with the first problem. But in each class some students did, and in one class very few students got that first one correct.

What do I do to help the students who didn’t understand the material, and what do I do with the kids who did?

Ryan in Florida

***

Dear Ryan,

You have described one of the perpetual struggles of teaching. How do I balance the needs of the many versus those of the few? I suppose back in Neanderthal times the fellow charged with teaching youth to spear a mammoth came home to the cave feeling similarly. “Some of those kids get it,” he’d say. “But what about the one who used the flat end of the spear? He’s going to starve to death. He would really benefit from getting back to the basics. Maybe I’ll split the group in half?”

Thank goodness we aren’t Neanderthals. We are modern teachers! This means we have access to the wonders of modern technology. I of course mean the blackboard and copy machine.

My favorite way to follow up on these sorts of quizzes is with examples, so I put together three such activities for you:

[screenshots of the three example activities]

And then some mixed-up practice, to take another step in the right direction.

[screenshot of the mixed practice problems]

A good example activity, in my experience, can seem so simple as to hide its design. And in fact the actual design of the student work was mostly straightforward. The choices to make are mostly ones I made long ago — to prefer simplicity, to subtly use arrows and lines, and to make the work as much like a student’s as possible while keeping it as clear as possible to read.

(The mistake, in particular, closely follows the design of an Algebra by Example mistake.)

Most of the work here is in narrowing in on a specific type of problem that is worth including in the examples. (The simplicity of the format works well with the more complex task of narrowing in.)

Here is a rule I tell myself while looking at a student’s mistake: Every mistake points to a family of situations very similar to the mistake that the student doesn’t yet know how to handle. Find that family, and teach it!

In this case I don’t assume the mistakes on the second problem (3x + 7 – 5x = 45) point to issues “combining like terms” in general. I assume instead that the mistake points to issues where the like terms are separated visually in the equation and where subtraction is involved. Those things are distinctive and make the equation more difficult to solve correctly — that is the family that I focus in on for the example.
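To make that concrete, here is one way a worked solution for that family might read, using the second exit-ticket problem, with the separated like terms gathered before anything else happens:

3x + 7 - 5x = 45
3x - 5x + 7 = 45
-2x + 7 = 45
-2x = 38
x = -19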

How the examples are used is your decision, Ryan, but here is how I would do it. Begin class by showing just the example, and ask students to silently study it. (They can let you know when they’ve read it all with their thumbs.) When finished, you can ask them to answer the explanation questions on their own or with a partner. If you want to interject with an explanation, by all means, but often students are ready to dive straight into the practice problems.

There are three examples/non-examples I’ve made. Use as many as your class would benefit from.

Then, there are four practice problems. Those are important because they are mixed practice. If we see each of these examples as pointing to a micro-skill, a small family of problems within the broad category of “two-step equations,” then these problems are interleaved in the practice set. That can be useful! Your students will have to think back and remember what they did with the examples.

Teachers of equation-solving know that there are always more mistakes that need to be addressed. The difference between good and bad teaching of this topic, as I see it, is whether the teacher can get specific and point to precisely what is hard for their students. Subtracting, different visual formats, handling a variable with coefficient 1 (x as 1x), and so many other little things — there will always be more of these. Some teachers just repeat and repeat and repeat without getting any more specific. Instruction should get more specific as practice gets more general.

So, keep at it! Pretty soon your young charges will be scattering the landscape with Woolly Mammoth carcasses, so to speak.

-Aunt Sally

A Syntax of Geometrical Figures in “De Aetatibus Mundi Imagines”

[image: geometric figure from De Aetatibus Mundi Imagines]

Holanda represents the intelligible reality of the Holy Trinity through a “hypothetical” syntax of geometrical figures:

“Starting from a perfect circle, three triangles merge in the abyss, provoking a strange sensation as much of movement as of immobility. Alpha and Omega are inscribed on the first equilateral triangle, perfectly inscribed in the circle.”

Spectacle and Topophilia by David R. Castillo

Bonus images:

[two bonus images]

Introducing Stable Distributions

The story so far:

  • There are lots of ways to put two functions together and get a new one out of the process.
  • Addition is one of these ways. Multiplication is another.
  • If you add two Gaussian functions together, you don’t get another Gaussian function.
  • If you multiply two Gaussian functions together, guess what? You get another Gaussian function.
  • Convolution is another way of combining two functions.
  • If you convolve two Gaussian functions together, guess what? You get another Gaussian function.

In the last post I tried to explain why convolving two probability distributions produces the distribution of the sum of those variables. And that sum is guaranteed to also be a bell curve, as long as the distributions it’s made of are themselves normal.

It’s worth stewing for a moment on what that means, because a lot of things that we care about can be thought of as sums of random variables.

One example is height. Take, for example, the height of a forest. I know nothing about the biology of forest height, but I did find a few figures online. (Yes, random searching.)

Here they are.

[figure: distributions of forest heights, from a paper found online]

Some of these plots look normal. Then again, some of them don’t. I don’t really care! Let’s pretend that forest height truly is normally distributed. Why would that be?

The thing is that there are a lot of factors that go into how tall a forest is. There is underlying genetic variation in the families of trees that make up a forest. There is underlying randomness in the environmental conditions where a forest grows, and “environmental conditions” is itself not a single factor but another collection of random variables — rain, soil, etc.

If we were going to make a list of factors that go into forest height how long do you think it would be before we had an exhaustive list? Hundreds of factors? Thousands?

Given all this, “forest height” really isn’t a single random variable. It’s a collection of random variables, not the way we think of a single coin flip.

(AND IS A COIN FLIP EVEN REALLY A SINGLE RANDOM EVENT??? OOOOOOOH DID I BLOW YOUR MIND?)

This all feels very complicated. How is it that a forest’s height has such a lovely normal distribution?

Given the math in the previous post we have a very tidy answer: forest height can be thought of as a sum of random variables. Even if there are hundreds or thousands of underlying random variables that sum up to forest height, as long as all (most?) of them are themselves Gaussian, so is their sum, represented by the convolution of all those thousands of distributions.

(Note: we did not prove that convolution preserves the Gaussian nature of a distribution for an arbitrary number of distributions. But if you take two of those distributions they make a new Gaussian, then pair that with another distribution, and then another, and so on; it’s a pretty direct inductive argument that you can convolve as many of these as you’d like and still get a Gaussian at the end of things.)
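Here is a rough numerical check of that inductive argument in Python. Everything here (the grid, the number of copies) is just my choice for illustration; np.convolve on a discretized density, scaled by the grid spacing, approximates the convolution integral.

import numpy as np

# Discretize a standard normal density on a grid wide enough to hold the sum.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

base = gaussian(x, 1.0)          # density of one variable
running = base.copy()
n = 1
for _ in range(9):               # fold in nine more copies, ten variables total
    running = np.convolve(running, base, mode="same") * dx
    n += 1

# The sum of n independent standard normals should be Gaussian with sigma = sqrt(n).
predicted = gaussian(x, np.sqrt(n))
print(np.max(np.abs(running - predicted)))   # tiny, up to discretization error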

So convolving is nice because it gives us a tidy way to think of why these incredibly complex things like “forest height” could have simple distributions.

But there’s only one problem: who says that you’re starting with a Gaussian distribution?

Convolution does not always preserve the nature of a function. To pick an example solely based on how easy it was for me to calculate, start with a very simple function:

f(x) = 2x

Define this only on the region [0,1] so that it really can be thought of as a probability distribution. Then, convolve it with itself.

[screenshot: graph of the first piece of the convolution]

(Hey, why is this only the first half of the convolution? Because for the life of me I can’t figure out how to visualize the second half of this integral while limiting the domain of the original function. Please, check my work, I am actually pretty sure that I’m exposing an error in my thinking in this post but hopefully someone will help me out!)
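For what it’s worth, here is how the full piecewise calculation can work out, treating f as zero outside [0,1] so that the integration limits shrink to where both factors are nonzero:

(f*f)(t) = \int_{\max(0, t-1)}^{\min(1, t)} 2x \cdot 2(t-x)\, dx = \begin{cases} \frac{2}{3}t^3 & 0 \le t \le 1 \\ \frac{2}{3}\left(6t - t^3 - 4\right) & 1 \le t \le 2 \end{cases}

The two pieces agree (both equal 2/3) at t = 1, and the whole thing integrates to 1, which is a reassuring sanity check.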

In any event, what you get as a result of this convolution is certainly not linear. So linearity of a distribution is not preserved by convolution.

What this means is that whether convolution can explain the distribution of complicated random variables that are the sum of simpler random things really depends on what the distribution we’re dealing with is. We are lucky if the distribution is Gaussian, because then our tidy convolution explanation works. But if the distribution of the complicated random variable were linear (whatever that would even look like), this would present a mystery to us.

But is it just Gaussian distributions that preserve their nature when summed? If so, this would be pretty limiting, because certainly not every distribution that naturally occurs is a normal distribution!

The answer is that it’s not just Gaussian distributions. There is a family of distributions that is closed under convolution, and they are called the stable distributions.
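For reference, the usual definition, as I understand it: a random variable X (and its distribution) is stable if, for independent copies X_1 and X_2 and any positive constants a and b, there are constants c > 0 and d such that

aX_1 + bX_2 \overset{d}{=} cX + d

The Gaussian is the most famous member of the family; the Cauchy and Lévy distributions are the other classic examples.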

***

I started trying to understand all of this back when I tried to read a paper of Mandelbrot’s in the spring. Getting closer!

This post doesn’t follow any particular presentation of the ideas, but a few days ago I read the second chapter of Probability Tales and enjoyed it tremendously. There are some parts I didn’t understand but I’m excited to read more.

Another way to smoosh two Bell Curves together

Previously, we looked at one way of combining two Bell Curves (i.e. Gaussian distributions) to make a third — multiplication.

There are other ways to do this, though. The best known (and as far as I can tell, the most important) is convolution. So, here are two Gaussian distributions, and what you get when you convolute(?) them:

[screenshots: two Gaussian distributions and the result of convolving them]

Formally, this process takes two functions — f(x), g(x) — and then produces a new distribution defined in the following way:

(f * g)(t) = \int_{-\infty}^{+\infty}f(x)g(t-x)dx

I have been struggling with this idea for several months, but just a few days ago I made some progress.

Convolution is often described as a blurring process — it blurs one distribution according to another one. That’s how Terry Tao describes it in this Math Overflow post:

If one thinks of functions as fuzzy versions of points, then convolution is the fuzzy version of addition (or sometimes multiplication, depending on the context). The probabilistic interpretation is one example of this (where the fuzz is a probability distribution), but one can also have signed, complex-valued, or vector-valued fuzz, of course.

I have a hard time seeing the “blurring” in the images above. To really see it, I have to change the initial functions. For example, consider the convolution of a Gaussian and a linear function (with restricted domain). Before convolution…

[screenshot: the Gaussian and the linear function, before convolution]

…and after.

[screenshot: the result of the convolution]

Just playing around with the calculator a bit more, here is another before/after pair.

[screenshots: another before/after pair]

I first learned about convolution a few months ago, and it was explained to me in terms of blurring. The thing about this “intuition building” metaphor is that I’ve been sitting with it since then, and it hasn’t helped me feel comfortable with convolution at all. It was only last week, when I came across the far more prosaic meaning of convolution, that things started to click for me. Because besides whatever blurring convolution represents, it also represents the sum of two independent random things.

(What follows is lifted from this excellent text.)

Suppose you have two dice, both six-sided, both fair. There is an equal chance of rolling 1, 2, 3, 4, 5 or 6 with each die — a uniform probability distribution. P(x) = \frac{1}{6} for x = 1, 2, 3, 4, 5, 6.

What would the distribution of the sums of the rolls look like?

The calculations start relatively simply, but check out the structure. For example, this is the calculation we have to do to find the chances of rolling a 3:

P(1)P(2) + P(2)P(1)

The chances of rolling a 4:

P(1)P(3) + P(2)P(2) + P(3)P(1)

The chances of rolling a 5:

P(1)P(5-1) + P(2)P(5-2) + P(3)P(5-3) + P(4)P(5-4)

The chances of rolling n:

P(1)P(n - 1) + P(2)P(n - 2) + ... + P(n - 1)P(n - (n - 1))

So! Using the language of summation, we can summarize this process like so:

P(\text{sum} = n) = \sum_k P(k) \cdot P(n - k)

We might as well take the last step of calling this process “convolution,” because it’s just the discrete version of the integral from above:

(f * g)(t) = \int_{-\infty}^{+\infty}f(x)g(t-x)dx
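And here is the dice example run numerically, as a quick sketch; np.convolve is doing exactly the discrete sum above, and the rest is just my labeling:

import numpy as np

# Probability distribution for one fair six-sided die (faces 1 through 6).
die = np.full(6, 1/6)

# Discrete convolution gives the distribution of the sum of two rolls.
sum_dist = np.convolve(die, die)

# sum_dist[i] is P(sum = i + 2), since the smallest possible sum is 2.
for total, p in enumerate(sum_dist, start=2):
    print(f"P(sum = {total}) = {p:.4f}")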

***

Mathematically, there is a lot of interesting stuff to continue exploring with convolutions. Not all convolutions are defined; there’s a connection to Fourier transforms (another thing I don’t understand yet); there are discrete problems to solve in the text (what about different dice?); and so on and so on.

Briefly, though: why didn’t the blurring metaphor help me? I don’t think it’s such a mystery. It’s because while blurring is easy to understand, that image was totally disconnected from the underlying calculation. Why should that complicated integral be related to the process of blurring?

Now, I have a better understanding. (Blurring X and Y sort of is like finding the probability of X + Y.)

In school math, a lot of teachers bemoan their students’ lack of conceptual understanding, and it’s generally felt that procedural understanding is more obtainable. In my life as a learner of mathematics I usually feel it’s the other way around. When I read articles in Quanta Magazine or books about mathematics I haven’t yet studied I can usually follow the exposition but am left feeling a bit empty. Yes, I can follow the metaphors (“Imagine a number as a little bird; those birds fly together in flocks; but what happens when a bird has children? do they rejoin the flock? where? etc.”) but what have I learned?

Popular exposition of mathematics is maybe more difficult than exposition of other subjects simply because of the necessity of some kind of metaphor that brings the abstract to life. What results is a kind of understanding, but something far from the whole thing, and the best mathematical exposition also leaves me feeling jealous of those who can reach past the metaphors and grasp the thing itself.

That’s not to say that math exposition for popular audiences isn’t valuable — it is! Most people aren’t ever going to reach that deeper, unified understanding. I certainly won’t, most of the time! But for convolution, I feel a step closer.

Gaussians are making Gaussians

Let f and g be Gaussian distributions.

[screenshot: the two Gaussian distributions f and g]

Go ahead, add them. You don’t get another Gaussian distribution.

[screenshot: the sum of f and g]

Well, of course not. They don’t have the same mean. So set the means equal.

[screenshot: f and g with equal means]

That’s no better. The sum of f and g is still very much not Gaussian.

[screenshot: the sum of f and g, still not Gaussian]

So, that’s no good. But of course it failed — just look at those visuals!

What about multiplication? Here’s what the product of two Gaussian distributions with equal means looks like.

[screenshot: the product of two Gaussian distributions with equal means]

That looks much better!

In fact this is true: the product of two Gaussian distributions remains a Gaussian function. The only proofs I know of dive into some algebra — I like this one — but the core idea is that multiplying exponentials adds their exponents. That’s what keeps it all in the Gaussian family.

So consider two Gaussian functions, one with a mean \mu and the other with a mean at 0 (for a touch of simplicity):

f(x) = \frac{1}{\sqrt{2\pi}\sigma_f} e^{-\frac{x^2}{2\sigma^2_f}}

g(x) = \frac{1}{\sqrt{2\pi}\sigma_g} e^{-\frac{(x -\mu)^2}{2\sigma^2_g}}

Their product will look like this:

f(x)g(x) = \frac{1}{2\pi\sigma_f\sigma_g} e^{-\left(\frac{x^2}{2\sigma^2_f}+\frac{(x-\mu)^2}{2\sigma^2_g}\right)}

Making common denominators and adding through:

f(x)g(x) = \frac{1}{2\pi\sigma_f\sigma_g} e^{-\frac{\sigma^2_g x^2 +\sigma^2_f(x -\mu)^2}{2\sigma^2_f\sigma^2_g}}

Might as well expand that exponent a bit and summarize:

f(x)g(x) = \frac{1}{2\pi\sigma_f\sigma_g} e^{-\frac{(\sigma^2_f +\sigma^2_g) x^2 -2\sigma^2_f x \mu + \sigma^2_f \mu^2}{2\sigma^2_f\sigma^2_g}}

And then you can divide the numerator and denominator by (\sigma^2_f +\sigma^2_g) and you’ll end up with a monic quadratic trinomial in x. Completing the square lets you express that trinomial as (x - M)^2 plus a constant, which is exactly the shape of a Gaussian exponent.
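For the record, here is where that lands, in my notation (the leftover constant from completing the square gets absorbed into the factor out front):

M = \frac{\sigma^2_f \mu}{\sigma^2_f + \sigma^2_g}, \qquad \sigma^2 = \frac{\sigma^2_f \sigma^2_g}{\sigma^2_f + \sigma^2_g}, \qquad f(x)g(x) \propto e^{-\frac{(x - M)^2}{2\sigma^2}}

So the product is a Gaussian function centered at M with variance \sigma^2, just not correctly normalized, which is exactly the nit-picky point that comes next.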

(Brief but important nit-picky note: that makes the product of two Gaussian distributions a Gaussian function, but the scale factor out front is off, so it’s not itself a Gaussian distribution. You’d have to rescale the product of two Gaussian distributions in order to get another Gaussian distribution.)

Is this useful? Is this significant, in some way? I don’t know. Apparently it’s useful in applying Bayes’ Theorem, but I know nothing about that.

One thing I do know is that it makes for some fun visuals.

[screenshot: visuals of products of Gaussians]

Sure, education is weird. But it’s not THAT weird.

(What follows is a rant. I have moderate confidence about my impressions, but I have made no attempt to test or challenge them. I mostly wrote it because it was fun to write, and I think it’s true.)

There are a lot of people who will tell you that education has a particularly awful culture, but to prove it they’ll point to things that are totally common in other industries. This amounts to a weird kind of exceptionalism about education — a belief that our culture is uniquely troubled.

An example is those godawful edu-celebrities. They get these enormous (and enormously lucrative) platforms and use them to spout a mix of meaningless platitudes and outright lies: Message of the day — Good teachers enter the classroom; Great teachers leave it. (I’m no good at making these up, but @EduCelebrity is fantastic at it.)

The thought goes like this: isn’t this a sign of the particularly awful culture of education? Isn’t the ascent of these blablas only possible because…and now you get to pick whatever it is you feel is uniquely wrong with education. We would never treat (and now pick some other job)(probably doctors or lawyers) like that!

But this misses the point entirely, which is that these bozos are coming from the business world into education.

This would be a particularly good moment for me to pivot to stories from my time working in industry buuuuut my only grownup work experience is in teaching. So let me speak as a studious observer of the “Best Seller” table in bookstores and say that all this leadership and self-improvement stuff is coming to us directly from business and management. Edu leadership-speak is derivative of business-speak.

(If I have had no direct encounters with business culture, I have even less experience with the self-help world. As a NYC Jew let me say that it sure seems like the self-help world is rooted in evangelical church culture. That is an observation worth very little, obviously.)

But what about the garbage tech that suffuses education? I don’t know if you’ve noticed but the rest of the world is not exactly untouched by garbage tech.

OK, OK, but what about our particularly dysfunctional relationship between research and teaching? Can you imagine a group of doctors who routinely rejected medical research? That would be malpractice!

Two responses. First, yes, teachers ignore research. This is a whole thing, and it does seem possible to me that the research/practice gap is particularly wide in teaching. (For a lot of reasons.)

But also, the idea that medicine has a particularly simple relationship between research and practice ignores the gap between medical researchers and medical practitioners. And there is such a gap! I’m not saying that it’s as significant as the research gap in education — it’s not — but it’s there, and for many of the familiar reasons: not all research is relevant to practice, practitioners have values that are sometimes in tension with research, the profit motive pollutes the information environment, institutions have needs that aren’t identical to those of individuals.

Why should you believe me about this? Again, this would be a useful time to point to some particular expertise I have with medicine but, again, no. Although I’m not entirely making things up. First, there are many pieces one can read about the gap between evidence-based medicine and what happens in practice (such as this).

Second, I’ve asked doctor friends about this. In particular, I asked doctor friends how widespread anxiety about the research/practice gap is in medicine. And the answer I got was that it’s not nearly as pervasive as it is in education — not at all — but it’s there. Probably it depends on the specialty.

This weakens my point a bit. There clearly is a difference between education and other fields. But it’s not that different.

I understand that this is the loosest category of argument. I’m making a claim about degree, not kind; but that’s just what I’m doing. Education is not SO special, it’s not THAT different, it’s LESS plagued by a uniquely bad culture than many people inside education think it is.

So, why is it that people frequently lament education’s particularly awful culture?

It would be tempting to play the exact same game I just finished critiquing. In no other field do people say their field’s problems are unique! Education is the only profession that laments how unusually problematic it is! Do DOCTORS spend all day talking about how flawed their profession is?

But you know what? Everybody spends time complaining about their work. It’s your god-given right. Go ahead, exercise it! But in moderation.

Reading Research: A Randomized Controlled Trial of Interleaved Mathematics Practice

I. 

The study is called “A Randomized Controlled Trial of Interleaved Mathematics Practice” and that’s exactly what it is. It’s one of the most readable studies I’ve read in a long time — the writing is crisp and there is a minimum of technical concepts. Not all papers do a great job bringing up potential concerns or counterarguments, but this one definitely does.

The short version of the study is that interleaving practice — basically, every problem is of a different type than its neighbor — was very effective at helping kids do well on a test. This is a trendy thing in teaching, but probably for a good reason. First (and most importantly) it seems to work. Second (and still importantly) it’s entirely uncontroversial; nobody seems particularly committed to the status quo of blocked (i.e. repetitive) practice. Third, it seems pretty easy to pull off.

Of the many things I loved about this paper, I especially loved the very clear definition of interleaved practice and why it might work. Here’s what they say:

  • Interleaved practice is necessarily practice in choosing a strategy, not just practice executing that strategy
  • It’s also necessarily spaced practice; in other words, rather than practicing the same skill three times on Monday, it’s better to space it out over three separate days.
  • It’s also necessarily retrieval practice, which gets tossed around a lot but is used clearly by the authors to refer to practice recalling stuff from memory rather than getting the info some other way (like checking your work on the previous problem).

They also are not nutty; they get that blocked practice has its role. “Interleaved practice might be less effective or too difficult if students do not first receive at least a small amount of blocked practice when they encounter a new skill or concept,” they write. That seems correct.

(This is where the authors slot the literature on replacing practice with example or mistake analysis — alternating between examples and practice is for that initial experience, where a certain amount of focus on the same thing is absolutely necessary. The literature on worked examples and interleaving is sometimes seen in tension but I think this is probably the neatest way to resolve it.)

One question I still have is what this all looks like over the course of a unit, or over several weeks. The study required teachers to use nine worksheets over four months. So that’s about two days a month that are mainly devoted to review in this study. That seems doable. Except that I’m also the sort of teacher that uses a lot of low-stakes quizzes. How does that fit into the interleaved practice scheme? Should I count an interleaved quiz as interleaved practice — or maybe not, because of all the ways in which a quiz doesn’t give kids a chance to get help with problems they don’t remember how to solve?

I’m left with questions about how frequently to do a mixed review day, if I wanted to do things like they did in the study. But I think once every two weeks is sensible and doable, and I’m going to try to actually do that this school year.

II.

I’m reading this paper for kicks, mostly, but also with an eye on practice. I’ve known about the supposed benefits of interleaving practice for a long time but haven’t been entirely successful figuring out how to pull it off in practice. I teach four different courses and have limited bandwidth for making my own materials. I agree with the authors of this paper: “The greatest barrier to the classroom implementation of interleaved mathematics practice is the relative scarcity of interleaved assignments in most textbooks and workbooks.”

I promise I’ll get to the paper in a second, but first a quick note about the sentences that follow up the one I just cited. They continue: “There are some remedies, however. For instance, teachers can create interleaved assignments by simply choosing one problem from each of a dozen assignments from their students’ textbook. Teachers might also search the Internet for worksheets providing ‘mixed review’ or ‘spiral review,’ and they can use practice tests created by organizations that create high-stakes mathematics tests.”

It’s entirely typical of research to never get around to studying those remedies in any systematic way, but shouldn’t they? The work of translating something like interleaved practice into something workable in the classroom requires a lot of creativity. I know that there doesn’t seem to be anything interesting about that first suggestion (“choosing one problem from each of a dozen assignments from their students’ textbook”) but I find myself with questions: would these teachers be rewriting the problems? how do they choose the skills? do they do any blocked practice, and do they have a way of keeping track of which problems they’ve already used? are there clever ways of reducing the workload?

I get it, that’s just not what this study is. I have absolutely no issue with this study (which impressed me). But wouldn’t it be nice to read something as systematic and careful as this about the remedies that make translating this research feasible?

III.

The goal of this study was to test the feasibility of interleaved practice in realistic conditions. So, unlike a lot of research on practice, this was happening in classrooms. Actually, a lot of classrooms: 54 of them, all 7th Grade.

This study was preregistered, and you can tell, because the authors have a rationale for every single decision they made. It’s refreshing and entirely clear.

You’d think this would make the paper a tedious read, but quite the opposite, it felt like you were listening to actual humans explain their actual thinking. Honestly, I found it sort of gripping. I loved all the little touches.

They used statistical software in advance of the study to decide that they needed around 50 schools to trust their results. They ended up recruiting 54 schools, and paying them each $1000 to participate in the study (I wonder where that $1000 went) and then they had to recruit teachers. Each of the teachers got tossed $1000 also, and honestly that feels like a sweet deal. I would very much like to be paid $1000 for some researchers to write worksheets for me. They seemed to have no trouble finding 7th Grade teachers who wanted in on the study.

And now, for my favorite detail from the study:

“We recruited teachers who taught a seventh-grade math course described by the school district as Honors Advanced Grade 7 Mathematics.”

WOAH, huge red flag here. So this whole study was just with the top of the top of math students in the district?! I can’t believe that they would do this…

Oh, wait.

“Although its title suggests that the course is selective, it is the modal course for seventh-grade students at most of the schools in the district.”

So, this is hilarious. Some 7th Graders take Algebra, and those who failed the Florida assessment are in a different course. But the totally normal, typical 7th Grade math course in this district is titled Quantum Honors 7th Grade Category Theory for Gifted Youngsters. Florida sounds amazing.

They recruited only teachers with multiple sections of 7th Grade math. The researchers did the obvious thing of randomly assigning one of the teacher’s sections to the blocked practice condition and the other to the interleaved practice. (If you’re asking yourself how the researchers handled teachers with an odd number of sections, this is the paper for you.)

The worksheets seemed a nice size: 8 problems each. But then again they would be, because the authors did a pilot study with two experienced classroom teachers they have long-term relationships with. (This paper really is charming the pants off of me.)

They did thoughtful things that should help any skeptical readers. My favorite: both conditions had the exact same final worksheet before the exam. That way, no group of students could be said to have not seen the skills in a long time. (Otherwise, because of the nature of blocking, it would have been a couple months since students saw the practice problems for the first skill. Also this closely resembles the status quo practice of interleaved practice before the exam — though the gap of 33 days between review and test would be unusual in pretty much every class.)

Here’s the figure explaining what they did:

[figure from the paper: the worksheet schedule for the interleaved and blocked conditions]

(Notice how the interleaved practice worksheets have a bunch of filler skills that are mostly blocked? And the blocked practice have a bunch of filler worksheets that are mostly interleaved? And that only the colored “core” skills were assessed on the test? That’s because the researchers didn’t tell the teachers what the experiment was about and wanted to make sure they couldn’t figure it out on their own. Clever!)

I complained above about wanting to know the practical details of designing these worksheets, but honestly it’s not that big a deal. It’s true, I’d rather not spend my planning time making new worksheets…but, yeah, I’m frequently making new worksheets during my planning time. The bigger question for me is about keeping track of how many skills there are in the course and all that, which I find logistically sort of complicated in the heat of the school year.

Given the particulars of my teaching situation, maybe the best thing to do would be to formally schedule a practice day every two weeks, and a quiz on the other weeks? Maybe on Fridays, which are breakneck and hectic for me?

IV. 

Whenever I read a paper I always feel like I owe it to myself to try to understand the trickier statistical points. This is inevitably embarrassing because I don’t understand these things well yet. Oh well.

First up:

“In order to determine the necessary number of participating classes, we conducted a priori power analyses with Optimal Design software. Each analysis assumed a two-tailed test with an alpha level of .05 and a two-level, random effects model for a continuous outcome variable. We ran numerous analyses with varying values of effect size and intraclass correlation, all of which were more conservative than the values obtained in the pilot study. In every scenario, power exceeded .95 with 30 classes (15 per condition).”

There are a lot of concepts here and I’m barely fluent in statistics-ese. I will try to translate the above paragraph into plain English-ese:

If we only used 10 schools, this study would have been underpowered. In other words, our study wouldn’t necessarily be large enough to detect the effect of the intervention, even if interleaved practice does help. So we used software to help us decide how many schools would need to be part of this study.

What went in to that calculation? First, we put in the standard “alpha” (which is our threshold for how unlikely it would be to get our effect randomly). As is (for better or worse) standard in many studies, that’s 0.05. (I don’t entirely get this yet, but power calculations are done in reference to the acceptable alpha. More here.)

We also had to consider the fact that even though our results are framed in terms of the test results of individual students, the intervention is taking place at the classroom level. Since students are grouped in classrooms we can’t just think of each additional student in the study as contributing equal amounts of random variation. Fundamentally, what protects us against error is the chance for students to randomly deviate from each other — if their performances are correlated because they have the same teacher, there’s a bigger chance that an apparent effect isn’t real at all.

Instead, we told the software how correlated we thought the results of students in the same classroom would be. With that info we played around with the software to see what we would need to design a strong study. After tinkering with different inputs into the software, we settled on 50 classrooms.

(I found this example useful for helping me understand statistical power more clearly.)
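To make the clustering point concrete for myself, here is a toy simulation in Python. All the numbers (effect size, intraclass correlation, class size) are made up, and it analyzes class means with a plain t-test rather than fitting the multilevel model the authors used; it’s only meant to show how power grows with the number of classes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_power(n_classes_per_arm, students_per_class=25,
                   effect=0.4, icc=0.1, trials=2000, alpha=0.05):
    # Split unit variance into a between-class piece (the ICC) and a within-class piece.
    between_sd = np.sqrt(icc)
    within_sd = np.sqrt(1 - icc)
    hits = 0
    for _ in range(trials):
        def arm_class_means(shift):
            class_effects = rng.normal(shift, between_sd, n_classes_per_arm)
            students = rng.normal(class_effects[:, None], within_sd,
                                  (n_classes_per_arm, students_per_class))
            return students.mean(axis=1)       # one mean score per class
        control = arm_class_means(0.0)
        treatment = arm_class_means(effect)    # treatment shifts every class up
        _, p = stats.ttest_ind(treatment, control)
        hits += p < alpha
    return hits / trials

for n_classes in (5, 15, 25):
    print(n_classes, "classes per arm -> estimated power:", simulate_power(n_classes))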

Next up:

“Because of the cluster design, we further examined test scores by fitting a two-level model (students within classes) with HLM Version 7.03. Using restricted maximum likelihood (REML) we first estimated a fully unconditional model to evaluate the variability in students’ scores within and between classes. To assess the difference between conditions, we used REML to estimate a two-level random-intercept model. Tests of the distributional assumptions about the errors at each level of the model (normality and equal variance) did not reveal any violations.”

My attempt:

The way we’re thinking about the results here is that there is an effect from interleaved practice, but this is an effect on the classroom. Then, within that classroom, there is random variation from the students.

Groups are funny things, statistically speaking. Sometimes one group can outperform the other but there is a tremendous amount of variation within the group. Suppose that we only looked at the classrooms to decide that interleaved practice was more effective — wouldn’t it still be possible that a lot of the students in the interleaved classroom did worse? And wouldn’t we want to know that?

Put it the other way, though: suppose we only looked at the students who received the interleaved treatment as one big group, ignoring the classes they came from. If they outperformed the blocked practice group, shouldn’t we worry that maybe this came from just one of the classes? Maybe a bunch of the classes couldn’t handle the interleaved condition at all, but there was one teacher who pulled it all together?

So what we do is we use two different models. First, we treat everyone like individuals and see how much variation there was with regard to scores on the test. Then, we look at the classrooms for the same thing. Finally, we combine the between-student variability into the classroom correlation.

We used a magic formula called “REML” to do this.
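For what it’s worth, the standard two-level random-intercept model they’re describing looks something like this (my notation, not necessarily the paper’s):

y_{ij} = \beta_0 + \beta_1 \cdot \text{Condition}_j + u_j + e_{ij}, \qquad u_j \sim N(0, \tau^2), \quad e_{ij} \sim N(0, \sigma^2)

where i indexes students and j indexes classes. The “fully unconditional” model is the same thing without the condition term, and the intraclass correlation is \tau^2 / (\tau^2 + \sigma^2): the share of the total variance that sits between classes rather than within them.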

I tried, really, I did. One last effort:

The level-2 class model included a dummy variable for condition and 14 dummy variables for teacher effects. Before examining the main effect of condition, we evaluated the potential interaction between teacher and condition and found no statistically significant interaction effects, p>.05. We then tested a main effects model that evaluated the effect of condition, controlling for teacher effects, and we found a significant effect of interleaving (p < .001).

OK, I can’t do this one. Does this just mean that they checked to make sure there was no significant relationship between who taught the classes and the test scores? So that the interleaved practice results aren’t the result of perhaps a few stronger teachers ending up with more of the interleaved classes randomly?

V. 

I was recently hanging out with some teacher friends, and they playfully told me that I was a bit of an academic troll to Jo Boaler. But I have hidden this criticism at the bottom of the post, so you can forgive me for connecting this paper to another little critique.

I had raised some questions about this paper of Boaler’s, titled “Changing Students Minds and Achievement in Mathematics: The Impact of a Free Online Student Course.” The paper records an experiment, but it has a lot of things that made me nervous. Perhaps most mysterious is one very fishy table.

Why did 200 more students end up in the control group than in the treatment group? Why doesn’t the paper mention any of that, not even a tossed-off explanation? One possibility, though not the only one, is that there was significant attrition from the treatment group because of the difficulty of the treatment. That could potentially impact the reported results. What if it was the most difficult-to-manage classes that dropped out of the treatment, while their other classes stayed in the control group?

I don’t want to go in depth on Boaler’s paper, which is about something else entirely. But I think anyone interested in reading research could have some fun reading this interleaved practice experiment side-by-side with Boaler’s piece, because they make for a really rich compare/contrast pair.

They’re both experiments, but one is carefully, carefully constructed, and it’s convincing because the authors bring up potential issues and have an explanation for every decision they made. The other paper contains lacunae that never get explained.

Research is ideally designed with the skeptic in mind. You’re supposed to be able to read research skeptically. The whole point of research is to be able to withstand that skepticism and leave with your thinking changed, in some way. This is partly a matter of design, but it’s largely a matter of the writing itself which should be clear, generous to the reader, and eager to raise concerns.

So: not all experiments are created equal, and not all papers are either. This interleaved practice paper is a good one on both counts.

For terrific math puzzles check out Erich Friedman’s website

In the last few days of camp this summer, a big folder of puzzles got posted in the hall. In the folder was a collection of Erich Friedman’s Hamiltonian Mazes puzzles.

[screenshot: one of Erich Friedman’s Hamiltonian Mazes]

These puzzles are terrific (hard!) and they’re just one of many, many different types of puzzles and problems on Erich’s site.

The whole site is a terrific snapshot of the old internet, with its generosity and quirk. I love the little personal nuggets Erich includes on his homepage (I’m nostalgic for homepages!):

I am a Libertarian and an Atheist. I consider myself a Feminist, and I’m a member of the ACLU. I have memorized the first 50 digits of Pi. I am an INTJ and I Juggle. I build card houses, and I’m interested in my Family Tree.

For the record: I am a Capitalist and a Religious Jew. I consider myself a Feminist, and I’m a patron of NYPL. I recently memorized the Largest Known Prime Number. I sometimes get Moody and Sad but I don’t Juggle. I’m interested in my Family.

I own the largest Puzzle Collection in Florida.

Really, lots of wonderful stuff on his site. In the past I’ve been more interested in theory-laden areas of math than puzzles and problems (I like my math how I like my philosophy) but I always have fun when I do find time for these sorts of things.

Like these square tilings. They’re gorgeous!

[image: one of the square tilings]

Anyway, a wonderful website. Enjoy!