Is subject specialization in elementary school worth it?

The problem with elementary school, some might say, is that the teachers aren’t able to focus on mastering their subject. It’s not their fault! They teach everything. There’s no time to specialize. There’s no time to do anything. Do you have any idea how few bathroom breaks an elementary teacher gets during the day? Zero to one. So of course they can’t master the mathematics curriculum.

And in fact there is research showing that different teachers are more effective at teaching some subjects than others.

Here’s a simple solution: every child should have access to a math or reading specialist.

This is an idea that a lot of people love. For example, the National Council of Teachers of Mathematics and a bunch of other organizations officially support the use of math specialists (though they note that math specialists can be used in many ways besides managing classroom instruction, e.g. as coaches, meeting with teachers or individual students, running professional development for the district, and so forth):

A new paper has added to the surprising(?) research literature arguing that this simple solution does not work. In fact, teaching gets worse when you specialize. It further suggests that the reason is that you lose the benefits of a stronger student-teacher relationship when you specialize. The paper is titled “Spread Too Thin: The Effects of Teacher Specialization on Student Achievement” and if you are inclined to read such things I encourage you to read it.

But, before you do, it might be helpful to list every reason why the researchers might be wrong. Why might researchers have messed this one up? Let’s list the ways:

  • Even if teachers are slightly worse at teaching when they’re specialists, schools might be making smart moves about who is best in the classroom and who is best teaching a subject, i.e. Michael isn’t our strongest teacher, so we put him in a more targeted role where his knowledge of math can be put to use
  • Every new professional role is hard, so teachers who become specialists will initially struggle but eventually get the hang of it
  • Schools that move to having specialists will uniformly improve but this won’t be captured by a change in any individual teacher’s effectiveness

To summarize, we want to check for a school effect as well as a teacher effect. And we want to make sure that the researchers aren’t just capturing the difficulty of being effective when you’re transitioning to a new role.

Now, on to the paper.

If a policy paper has no experiment then it begins with a dataset. This one comes courtesy of the Indiana Department of Education, which provided researchers with data on students (4th and 5th Graders), their teachers, and whether their teachers were generalists or specialists over a seven-year period. This amounts to 591,311 students and 32,996 teachers. (Co-teachers were excluded, because that’s hard to deal with. That was 10% of the teachers in the sample. Can you think of a reason why excluding co-teachers would mess this up? The researchers couldn’t and neither could I.)

Their primary interest is in the teacher effect. Since they aren’t running an experiment, they look at changes in a teacher’s work status as the thing that could cause a change in learning. In their words: “we identify the effect of specialization by comparing the effectiveness of the same teachers in years when they do and do not specialize.” The key assumption is that these changes are not correlated with anything else, i.e. you don’t become a specialist because you’re suddenly great (or awful) at teaching, and you don’t become a specialist when your students suddenly become good (or bad) at reading.

Then they run a giant regression that looks to see what contribution the teacher’s role has in explaining student performance, over and above a bunch of other things.

(These “other things”: a student’s prior test scores, gender, race/ethnicity, whether they get free or reduced-price lunch, enrollment in special services, ELL status, class size, whether a teacher has a graduate degree, whether a teacher is new to a school, how big the school is, the percent of Black and Hispanic students, the percent of students in the school who get free or reduced-price lunch, how effective this teacher was on average no matter the role, how well the grade scored on these tests on average, and how well the school performed on these tests on average.)
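
The paper’s actual equations aren’t reproduced here, but a rough sketch of this first model, in my own notation rather than theirs, looks something like:

$$\text{Score}_{ijt} = \delta \cdot \text{Specialist}_{jt} + \mu_j + \gamma X_{ijt} + \varepsilon_{ijt}$$

Here student i is taught by teacher j in year t; Specialist indicates whether the teacher is specializing that year; the teacher effect mu_j captures “how effective this teacher was on average no matter the role”; and X is the bundle of controls listed above. The coefficient delta is the within-teacher effect of specializing, which is the thing the paper cares about.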

And then the researchers got nervous that they were only capturing the first year of a teacher’s transition, and maybe they got much better after that first year. So they created a model that paid attention to how many years of experience as a specialist the teacher had — one year, two years, three or more years — and took a look at whether experience mattered.

This is all good, but what about the school effects? Well, they wanted to look at that as well. “Though assigning a teacher to a specialist role may lower an individual teacher’s average effectiveness, students may still be better off if that teacher is better at a particular subject than the other generalists in the school,” they write. So they tested another model, i.e. a long equation:
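
Again in my own sketchy notation, not the paper’s, the shape of that model is something like:

$$\text{Score}_{isgt} = \beta \cdot \text{SpecRate}_{sgt} + \gamma X_{isgt} + \varepsilon_{isgt}$$

where SpecRate is the share of teachers who specialize in school s, grade g, year t.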

I would like to emphasize that if you are familiar with the idea of linear regression, these fancy equations should not intimidate you further. We are interested in the impact of the specialization rate of a given school, in a specific grade, at a specific time, on the scores of kids in that grade at that school at that time. And beta is the slope, so if that slope is high then voila, the specialization rate explains a lot. If it’s low, it doesn’t. Everything else is, in a sense, the control — the researchers will measure the explanatory power of (i.e. control for) all the stuff that I listed in that parenthetical above. What we care about is the slope associated with the specialization rate.

Anyway, the results aren’t good if you like specialization. If you just look at specialization of teachers, there’s a significant reduction of teacher effectiveness, especially for math. If you separate by year, it’s clear that things get better when teachers are more experienced but not that much better.

When you compare schools that specialize more to ones that specialize less, the differences aren’t huge but those numbers all have negative signs in front of them which, if you like specialization, is not what you want to see:

They speculate that this could be explained by the way specialization reduces the strength of the teacher’s relationship with their students, and they have an interesting way to test it: look just at the students who happened to work with the same specialist two years in a row. They find that this reduces the costs of specialization compared to working with a generalist (though not all the way):

Do I believe these results? Yes, I do, for a few reasons. First, because this isn’t the first time researchers have found that specialization backfires in elementary school. There was a particularly interesting experiment in Houston a few years back, with true random assignment, and specialization didn’t work out there either. There was also a big study of schools in North Carolina that had negative results for specialization. Here are those studies described by the authors of this paper:

But the other thing is that I am a 3rd Grade math specialist. I work hard, and I think I have a good understanding of the math and the kids. Yet I frequently come in and realize that I’m walking into a situation I only partly understand. The kids’ main teachers have expectations and routines they work on all day with their students. They have different bathroom policies. Kids are working on things emotionally that I sometimes catch a bit of, things like “trying to deal with frustration productively” or “separating yourself from a tough situation.” And I don’t get to know any of this!

Moreover, these kids worship their teachers. They are the adults who care for these children during the day. Meanwhile I come in and it’s more of a gun-for-hire situation. I’m in, I’m out. It’s a very weird situation compared to the rest of my day.

So, no, I don’t find this surprising, and while maybe someday someone will figure out a way to get the benefits of specialization without the costs, I’ve seen enough for now: specialization in elementary school is not worth it.

Discovery learning vs. not discovery learning

I.

I think at this point, if you’re reading a math blog, you probably have an opinion about the place of discovery (or inquiry or guided inquiry or problem solving or whatever) versus fully guided instruction (or direct instruction or Direct Instruction or explicit instruction or Explicit Direct Instruction).

(By the way, Ed Realist does a nice job trying to clarify the terminological situation here.)

But the thing is that it is difficult to talk about this in a way that is clear and accessible. I was thinking about this while reading Jasmine’s latest post, which lays out what cognitive science researchers say on the matter. Jasmine and I are on the same page, and she is faithful to the researchers, but I felt myself inclined to express these views in a slightly different way. Not necessarily even better; just different.

(By the way, Jasmine is a first-year teacher and new blogger. She’s on the blogroll.)

So here is how I would put it:

Every mathematician and scientist, as far as I can tell, is clear about just how messy their research is. I am very fond of this account from mathematician Andrew Wiles:

Perhaps I could best describe my experience of doing mathematics in terms of entering a dark mansion. You go into the first room and it’s dark, completely dark. You stumble around, bumping into the furniture. Gradually, you learn where each piece of furniture is. And finally, after six months or so, you find the light switch and turn it on. Suddenly, it’s all illuminated and you can see exactly where you were. Then you enter the next dark room…

You think it’s true…then it’s not…then you waste a morning trying to prove something that in fact is not true and not strictly necessary for proving the thing you actually care about. Then you feel despair, so you take a break and do something else. A week later you come back and you feel stupid — the thing is now obviously true, and you know why — and that feels good! But that’s just Part 1. So on to Part 2…

Here’s the question, and it’s a fundamental one: do you think it’s a good idea to put your students in this situation, or not?

If you say “yeah! kids need to learn how to do this sort of thing” then you will be a fan of discovery and inquiry and problem solving and etc. If you say “wait, no, this doesn’t sound like a good way to make kids feel” then you will strongly dislike discovery and inquiry.

I feel as if that’s almost all there is to say. It pretty much comes down to that.

II.

There is of course a bit more, though. It’s probably easiest to present it in terms of a dialogue. Basically, cognitive science has a bunch of counter-arguments to arguments in support of the “yeah!” view above. Here’s how the dialogue goes.

Q: You don’t like discovery/inquiry/asking students to do math the way research mathematicians and scientists do?

A: That is right, I do not.

Q: But how will students learn to do research-y things if you don’t teach them?

A: “Do research-y things” is not really a skill. Neither is “creatively problem solve” or “think mathematically.” We don’t have evidence that any of these things can be taught to students, except alongside particular mathematical or scientific content. The thing you really need in order to do research-y things, and that can be taught, is a tremendous store of flexible, sturdy knowledge. That’s the best thing you can do to give your kids a leg up.

Q: But that’s demotivating! It’s boring to learn a discipline that way, and the genuine ways of learning are more motivating.

A: Bad teaching will always be demotivating, but there are lots of examples of the “boring” approaches being highly motivating. One way you see this is when an intervention measures affect, i.e. how kids feel about a thing.

But honestly if kids aren’t motivated, they won’t learn, and we have evidence that the more explicit approaches help kids learn. Shrug.

Q: So you think that kids never need a chance to apply their knowledge?

A: Yo, I did not say that.

Q: Yeah you did.

A: No, I did not. Here’s what I think. There’s evidence that when a student has less experience with something, they need a lot of explicit instruction about how to do that thing. Worked examples are a really, really sturdy format for people with little experience in a thing. If a student has never learned how to factor quadratic functions, a good way to start can be to show them examples of factoring quadratic functions. Then, ask them to use the example to solve a problem. And then show another example, and then give them some more practice. And then mix up the practice, or ask them to apply what they’ve learned in a new context.
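
(To make that concrete, here is a toy version of that opening move, my example and not anyone’s curriculum: show the worked example x^2 + 5x + 6 = (x + 2)(x + 3), pointing out that 2 x 3 = 6 and 2 + 3 = 5. Then hand over a near-twin to solve while the example is still visible: x^2 + 7x + 10 = ( ___ )( ___ ).)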

And then, the next day, do more stuff like that.

And then on the third day, maybe ask them to solve some problems on their own, and see how that goes.

And if it’s going well, who knows! In a week or so, maybe they’ll be ready to apply these skills to a challenging problem in class. Or maybe it’ll take a few weeks. The point is that as kids get more experienced with a set of skills, they are more ready to take on challenges.

Q: Thank you for saying “challenges,” I like challenges.

A: No problem. The point is where you start. And that’s genuinely controversial! But we believe (see: evidence) that starting with fully explicit instructions like worked examples gives newbies the help that they need. Starting a unit with a vague activity that students aren’t sure how to handle isn’t giving them the help they need.

Q: Does this take into account motivation?

A: No and yes.

No, it doesn’t take into account motivation. Do you have some amazing, super-motivating activity that kids love and that will super-charge a unit of study? Do you start a unit on quadratics with this amazing activity that helps the whole thing get started on a great note? No, the evidence does not take this into consideration. It just notes that it’s hard to find clear evidence of a learning benefit of this sort of thing.

But, yes, this does take into account motivation, because in the long-run there isn’t really any evidence that motivation is easily separable from achievement. So ultimately something like a worked example does a lot of good for motivation, because it helps struggling students understand the material and participate as an equal in your classroom.

Michael Pershan: Can I step in here for a second?

Q: Sure.

MP: I would only add that though there is no evidence for this, I do think a certain amount of variety is healthy in a classroom. Like, kids do get bored if you do the same thing day after day. But, two things. First, if you’re starting with something like fully worked examples and moving to interesting, challenging practice, your kids are getting variety. Second, go ahead, take a day and do something interesting and different. Variety is good! There are lots of interesting practice formats, though we don’t talk enough about that.

That’s all I want to add here.

Q: OK, but here’s the thing. I just want kids to be able to think like mathematicians/scientists in school. That’s the goal I care about. That’s what I think is most valuable. And I don’t even necessarily care if it is helping them do that stuff in the future. You tell me that these skills can’t be taught — OK! You also tell me that there is no real benefit to their skills from these kinds of experiences — that’s OK too! All that I want is for kids to be doing something meaningful in school. Yes, I want to make sure their test scores are OK and they can get into college, but beyond that I want kids to do something they care about. 

There are two answers to this last point, which is what I think this discussion sometimes comes down to.

  1. There comes a point where people just disagree on what they value. It’s hard to know what to say by the time someone gets to this point of clarity about what they care about.
  2. It’s a mistake to assume that regular, “boring” school is not meaningful. And it’s also a mistake to assume that regular, “boring” learning is not meaningful. As I’ve written in the past, mathematicians ask for help all the time, and a lot of cutting edge work is simply focused on understanding things, not on solving a particular problem.

But I guess I’d point out that there are three things going on.

There’s a certain picture of what research mathematicians and scientists do and what their culture is like.

There’s a view about what is most effective for learning and motivation.

And then there is a view about whether it sounds like a good idea to put kids under the conditions of researchers in class.

And cognitive science research is relevant for the second, the efficacy question. And there is a value question, about what you think is worth doing in school.

But for me the decisive point is that the work of learning skills and knowledge is meaningful, and you can see this also in the culture of mathematicians and scientists. It’s just not right that learning skills isn’t meaningful to students.

Equations and Equivalence in 3rd Grade

So I was stupidly mouthing off online to some incredibly serious researchers about equivalence and the equals sign and how it’s not that hard of a topic to teach when — OOPS! — my actual teaching got in the way.

I had done the right thing. In my 3rd Grade class I wanted to introduce “?” as a symbol for an unknown so I put up some equations on the board:

15 = ? x 5

3 + ? = 10

10 + 3 = 11 + ?

And I was neither shocked, nor did I blink, when a kid told me that the last equation didn’t make any sense. Ah, I thought, time to nip this in the bud.

I listened to the child and said I understood, but that I would like to share how it does make sense. I asked whether anyone knew what the equals sign meant, and one kid said “makes” and the next said “the same as.” Wonderful, I said, because that last equation is just saying the left side equals the same as the right side. So what number would make them the same? 2? Fantastic, let’s move on.

Then, the next day, I put a problem on the board:

5 + 10 = ___ + 5

And you know what comes next, right? Consensus around the room is that the blank is 15. “But didn’t we say yesterday that the equals sign means ‘the same as’?” I asked. A kid raised her hand and explained that it did mean that, but the answer should still be 15. Here’s how she wanted us to read the equation, as a run-on:

(5 + 10 = 15) + 5

Two things were now clear to me. First, that my pride in having clearly and decisively taken care of this issue was misguided. I needed to do more and dig into this more deeply.

The second thing: isn’t this interesting? You can have an entirely correct understanding of the equals sign and still make the same “classic” mistakes interpreting an actual equation.

I think this helps clear up some things that I was muddling in my head. When people talk about the need for kids to have a strong understanding of equivalence they really are talking about quite a few different things. Here are the two that came up above:

  • The particular meaning of the equals sign (and this is supposed to entail that an equation can be written left-to-right or right-to-left, i.e. it’s symmetric)
  • The conventional ways of writing equations (e.g. no run ons, can include multiple operations and terms on each side)

But then this is just the beginning, because frequently people talk about a bunch of other things when talking about ‘equivalence.’ Here are just a few:

  • You can do the same operations to each side (famously useful for solving equations)
  • You can manipulate like terms on one side of an equation to create a true equation (10 + 5 can be turned into 9 + 6 can be turned into 8 + 7; 8 x 7 can be turned into 4 x 14; 3(x + 4) can be turned into 3x + 12, etc.)

When a kid can’t solve 5 + 10 = ___ + 9 correctly or easily using “relational understanding” (e.g. 9 is one less than 10, so the blank must be one more than 5), this is frequently blamed on the kid’s understanding of the equals sign, of equivalence, or of the particular ways of relating 5 + 10 to ___ + 9. But now I’m seeing clearly that these are separate things, and some tend to be easier for kids than others.

So, this brings us to the follow-up lesson with my 3rd Graders.

I started as I usually do in this situation, by avoiding the equals sign. I find that a double arrow serves this purpose well, so I put up an arrow relationship on the board:

2 x 6 <–> 8 + 4

I pointed out that 2 x 6 makes 12 and so does 8 + 4. Could the kids come up with other things like this, I asked?

They did. I didn’t grab a picture, but I was grateful that all sorts of things came up. Kids were mixing operations nicely, like 12 – 2 <–> 5 + 5. In general it felt like this was not hard; kids knew exactly what I meant and could generate lots of ideas.

My next move was to pause and introduce the equals sign into this conversation. Would anyone mind if I replaced that double arrow with an equals sign? This is just what the equal sign means, anyway. No problem, that went fine also.

Kids were even introducing great examples like 1 x 2 = 2 x 1, or 12 = 12. Wonderful.

Then, I introduced the task of the day, in the style of Open Middle (R) (TM) (C):

[photo: the handwritten task]

Yeah, I quickly handwrote it with a sharpie. It was that sort of day.

I carefully explained the constraints. 10 – 2 = 7 + 1 was a true equation, but wouldn’t work for this puzzle. Neither would 15 – 5 = 6 + 4. And then I gave the kids time to search for solutions, as many as they could find.

Bla bla, most kids were successful, others had trouble getting started but everyone eventually had some success. Here are some pictures of students who make me look good:

[photos of student work]

Here is a picture of a student who struggled, but eventually found a solution:

[photo of student work]

Here is a picture of the student from the class I was most concerned with. You can see the marks along his page as he tries to handle things like 12 – 9 as he tries subtracting different numbers from 12. I think there might have been some multiplying happening on the right side, not sure why. Anyway:

[photo of student work]

The thing is that just the day before, this last student had almost broken down in frustration over his inability to make sense of these “unconventional” equations. So this makes me look kind of great — I did it! I taught him equivalence, in roughly a day. Tada.

But I don’t think that this is what’s going on. The notion of two different things being equal, that was not hard for him. In fact I don’t think that notion is difficult for very many students at all — kids know that different additions equal 10. And it was not especially difficult for this kid to merge that notion of equivalence with the equals sign. Like, no, he did not previously think that this was what the equals sign meant, but that was just based on what he had seen before. It’s just a convention. I told him the equals sign meant something else; OK, sure. Not so bad either.

The part that was very difficult for this student, however, was subtracting stuff from 12.

Now this is what I think people are talking about when they talk about “relational understanding.” It’s true — I really wish this student knew that 10 + 2 <–> 9 + 3, and so when he saw 12 he could associate that with 10 + 2 and therefore quickly move to 9 + 3 and realize that 12 – 3 = 9. I mean, that’s what a lot of my 3rd Graders do, in not so many words. That is very useful.

So to wrap things up here are some questions and some provisional answers:

Q: Is it hard to teach or learn the concept of equivalence?

A: No.

Q: Is it hard to teach the equals sign and its meaning?

A: It’s harder, but this is all conventional. If you introduce a new symbol like “<–>” I don’t think kids trip up as much. With the equals sign, they sometimes have to unlearn what they’ve inferred from prior experiences that were too limited (i.e. always putting the result on the right side). So you’re not doing kids any favors by sticking to that one form; it’s good to put equations in a lot of different forms, pretty much as soon as kids see equations for the first time in K or 1st Grade. I mean, why not?

Q: If kids don’t learn how equations conventionally work will that trip them up later in algebra?

A: Yes. But all of my kids find adding and subtracting itself to be more difficult than understanding these conventions. My sense is that you don’t need years to get used to how equations work. You need, like, an hour or two to introduce it.

Q: Does this stuff need to be taught early? Is algebra too late to learn how equations work?

A: I think kids should learn it early, but it’s not too late AT ALL if they don’t.

I have taught algebra classes in 8th and 9th Grade where students have been confused about how equations work. My memories are that this was annoying because I realized too late what was going on and had to backtrack. But based on teaching this to younger kids, I can’t imagine that it’s too late to teach it to older students.

I guess it could be possible that over the years it gets harder to shake students out of their more limited understanding of equations because they reinforce their theory about equations and the equals symbol. I don’t know.

I see no reason not to teach this early, but I think it’s important to keep in mind that in middle school we tell kids that sometimes subtracting a number makes it bigger and that negative exponents exist. Kids can learn new things in later years too.

Q: So what makes it so hard for young kids to handle equations like 5 + 10 = 6 + __?

A: It’s definitely true that kids who don’t understand how to read this sort of equation will be unable to engage at all. But the relational thinking itself is the hardest part to teach and learn, it seems to me.

Here is a thought experiment. What if you had a school or curriculum that only used equal signs and equations in the boring, limited way of “5 + 10 = ?” and “6 x ? = 12” throughout school, but at the same time taught relational thinking using <–> and other terminology in a deep and effective way? And then in 8th Grade they have a few lessons teaching the “new” way of making sense of the equals sign? Would that be a big deal? I don’t know, I don’t think so.

Q: There is evidence suggesting that learning several of the things above helps kids succeed more in later algebra. Your thoughts?

A: I don’t know! It seems to me that if something makes a difference for later algebra, it has to be either the concept of equivalence, the conventions of equations, or relational thinking.

I think the concept of equivalence is something every kid knows. The conventions of equations aren’t that hard to learn, I think, but they really only do make sense if you connect equations to the concept of equivalence. The concept of equivalence explains why equations have certain conventions. So I get why those two go together. But could that be enough to help students with later algebra experiences? Maybe. Is it because algebra teachers aren’t teaching the conventions of equations in their classes? Would there still be an advantage from early equation experience if algebra teachers taught it?

In the end, it doesn’t matter much because young kids can learn it and so why bother not teaching it to them? Can’t hurt, only costs you an hour or two.

But the big other thing is relational thinking. Now there is no reason I think why relational thinking has to take place in the context of equations. You COULD use other symbols like double arrows or whatever. But math already has this symbol for equivalence, so you might as well teach relational thinking about addition/subtraction/multiplication/division in the context of equations. And that’s some really tricky, really important mathematics to learn. A kid being able to understand that 2 x 14 is equal to 4 x 7 is important stuff.

It’s important for so many reasons, for practically every reason that arithmetic is the foundation of algebra. I can’t list them now — but it goes beyond equations, is my point. Relational thinking (e.g. how various additions relate to each other) is huge and hugely important.

Would understanding the conventions of the equals sign and equations make a difference in the absence of experiences that help kids gain relational understanding? Do some kids start making connections on their own when they learn ways of writing equations? Does relational understanding instruction simply fail because kids don’t understand what the equations their teachers are using mean?

I don’t know.

Reading Research: A Randomized Controlled Trial of Interleaved Mathematics Practice

I. 

The study is called “A Randomized Controlled Trial of Interleaved Mathematics Practice” and that’s exactly what it is. It’s one of the most readable studies I’ve read in a long time — the writing is crisp and there is a minimum of technical concepts. Not all papers do a great job bringing up potential concerns or counterarguments, but this one definitely does.

The short version of the study is that interleaving practice — basically, every problem is of a different type than its neighbor — was very effective at helping kids do well on a test. This is a trendy thing in teaching, but probably for a good reason. First (and most importantly) it seems to work. Second (and still importantly) it’s entirely uncontroversial; nobody seems particularly committed to the status quo of blocked (i.e. repetitive) practice. Third, it seems pretty easy to pull off.
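
To make the contrast concrete, here is a toy pair of assignments (mine, not from the study). A blocked set: solve 2x + 3 = 11, solve 5x – 4 = 21, solve 3x + 7 = 25, solve 4x – 1 = 15. You pick a strategy once and repeat it. An interleaved set: solve 2x + 3 = 11, find 20% of 45, find the area of a triangle with base 6 and height 5, solve 1/4 = x/12. Every problem first makes you figure out what kind of problem it is.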

Of the many things I loved about this paper, I especially loved the very clear definition of interleaved practice and why it might work. Here’s what they say:

  • Interleaved practice is necessarily practice in choosing a strategy, not just practice executing that strategy
  • It’s also necessarily spaced practice: rather than practicing the same skill three times on Monday, you space it out over three separate days.
  • It’s also necessarily retrieval practice, which gets tossed around a lot but is used clearly by the authors to refer to practice recalling stuff from memory rather than getting the info some other way (like checking your work on the previous problem).

They also are not nutty; they get that blocked practice has its role. “Interleaved practice might be less effective or too difficult if students do not first receive at least a small amount of blocked practice when they encounter a new skill or concept,” they write. That seems correct.

(This is where the authors slot the literature on replacing practice with example or mistake analysis — alternating between examples and practice is for that initial experience, where a certain amount of focus on the same thing is absolutely necessary. The literatures on worked examples and interleaving are sometimes seen as being in tension, but I think this is probably the neatest way to resolve that tension.)

One question I still have is what this all looks like over the course of a unit, or over several weeks. The study required teachers to use nine worksheets over four months. So that’s about two days a month that are mainly devoted to review in this study. That seems doable. Except that I’m also the sort of teacher that uses a lot of low-stakes quizzes. How does that fit into the interleaved practice scheme? Should I count an interleaved quiz as interleaved practice — or maybe not, because of all the ways in which a quiz doesn’t give kids a chance to get help with problems they don’t remember how to solve?

I’m left with questions about how frequently to do a mixed review day, if I wanted to do things like they did in the study. But I think once every two weeks is sensible and doable, and I’m going to try to actually do that this school year.

II.

I’m reading this paper for kicks, mostly, but also with an eye on practice. I’ve known about the supposed benefits of interleaving practice for a long time but haven’t been entirely successful figuring out how to pull it off in practice. I teach four different courses and have limited bandwidth for making my own materials. I agree with the authors of this paper: “The greatest barrier to the classroom implementation of interleaved mathematics practice is the relative scarcity of interleaved assignments in most textbooks and workbooks.”

I promise I’ll get to the paper in a second, but first a quick note about the sentences that follow up the one I just cited. They continue: “There are some remedies, however. For instance, teachers can create interleaved assignments by simply choosing one problem from each of a dozen assignments from their students’ textbook. Teachers might also search the Internet for worksheets providing ‘mixed review’ or ‘spiral review,’ and they can use practice tests created by organizations that create high-stakes mathematics tests.”

It’s entirely typical of research to never get around to studying those remedies in any systematic way, but shouldn’t somebody? The work of translating something like interleaved practice into something workable in the classroom requires a lot of creativity. I know that there doesn’t seem to be anything interesting about that first suggestion (“choosing one problem from each of a dozen assignments from their students’ textbook”) but I find myself with questions: would these teachers be rewriting the problems? How do they choose the skills? Do they do any blocked practice, and do they have a way of keeping track of which problems they’ve already used? Are there clever ways of reducing the workload?

I get it, that’s just not what this study is. I have absolutely no issue with this study (which impressed me). But wouldn’t it be nice to read something as systematic and careful as this about the remedies that make translating this research feasible?

III.

The goal of this study was to test the feasibility of interleaved practice in realistic conditions. So, unlike a lot of research on practice, this was happening in classrooms. Actually, a lot of classrooms: 54 of them, all 7th Grade.

This study was preregistered, and you can tell, because the authors have a rationale for every single decision they made. It’s refreshing and entirely clear.

You’d think this would make the paper a tedious read, but quite the opposite: it felt like listening to actual humans explain their actual thinking. Honestly, I found it sort of gripping. I loved all the little touches.

They used statistical software in advance of the study to decide that they needed around 50 classes to trust their results. They ended up with 54 classes, paying each participating school $1000 (I wonder where that $1000 went), and then they had to recruit teachers. Each of the teachers got tossed $1000 also, and honestly that feels like a sweet deal. I would very much like to be paid $1000 for some researchers to write worksheets for me. They seemed to have no trouble finding 7th Grade teachers who wanted in on the study.

And now, for my favorite detail from the study:

“We recruited teachers who taught a seventh-grade math course described by the school district as Honors Advanced Grade 7 Mathematics.”

WOAH, huge red flag here. So this whole study was just with the top of the top of math students in the district?! I can’t believe that they would do this…

Oh, wait.

“Although its title suggests that the course is selective, it is the modal course for seventh-grade students at most of the schools in the district.”

So, this is hilarious. Some 7th Graders take Algebra, and those who failed the Florida assessment are in a different course. But the totally normal, typical 7th Grade math course in this district is titled Quantum Honors 7th Grade Category Theory for Gifted Youngsters. Florida sounds amazing.

They recruited only teachers with multiple sections of 7th Grade math. The researchers did the obvious thing of randomly assigning one of the teacher’s sections to the blocked practice condition and the other to the interleaved practice. (If you’re asking yourself how the researchers handled teachers with an odd number of sections, this is the paper for you.)

The worksheets seemed a nice size: 8 problems each. But then again they would be, because the authors did a pilot study with two experienced classroom teachers they have long-term relationships with. (This paper really is charming the pants off of me.)

They did thoughtful things that should help any skeptical readers. My favorite: both conditions had the exact same final worksheet before the exam. That way, no group of students could be said to have not seen the skills in a long time. (Otherwise, because of the nature of blocking, it would have been a couple months since students saw the practice problems for the first skill. Also this closely resembles the status quo practice of interleaved practice before the exam — though the gap of 33 days between review and test would be unusual in pretty much every class.)

Here’s the figure explaining what they did:

[figure: the practice schedule for the interleaved and blocked conditions]

(Notice how the interleaved practice worksheets have a bunch of filler skills that are mostly blocked? And the blocked practice worksheets have a bunch of filler skills that are mostly interleaved? And that only the colored “core” skills were assessed on the test? That’s because the researchers didn’t tell the teachers what the experiment was about and wanted to make sure they couldn’t figure it out on their own. Clever!)

I complained above about wanting to know the practical details of designing these worksheets, but honestly it’s not that big a deal. It’s true, I’d rather not spend my planning time making new worksheets…but, yeah, I’m frequently making new worksheets during my planning time. The bigger question for me is about keeping track of how many skills there are in the course and all that, which I find logistically sort of complicated in the heat of the school year.

Given the particulars of my teaching situation, maybe the best thing to do would be to formally schedule a practice day every two weeks, and a quiz on the other weeks? Maybe on Fridays, which are breakneck and hectic for me?

IV. 

Whenever I read a paper I always feel like I owe it to myself to try to understand the trickier statistical points. This is inevitably embarrassing because I don’t understand these things well yet. Oh well.

First up:

“In order to determine the necessary number of participating classes, we conducted a priori power analyses with Optimal Design software. Each analysis assumed a two-tailed test with an alpha level of .05 and a two-level, random effects model for a continuous outcome variable. We ran numerous analyses with varying values of effect size and intraclass correlation, all of which were more conservative than the values obtained in the pilot study. In every scenario, power exceeded .95 with 30 classes (15 per condition).”

There are a lot of concepts here and I’m barely fluent in statistics-ese. I will try to translate the above paragraph into plain English-ese:

If we only used 10 classes, this study would have been underpowered. In other words, our study wouldn’t necessarily be large enough to detect the effect of the intervention, even if interleaved practice does help. So we used software to help us decide how many classes would need to be part of this study.

What went in to that calculation? First, we put in the standard “alpha” (which is our threshold for how unlikely it would be to get our effect randomly). As is (for better or worse) standard in many studies, that’s 0.05. (I don’t entirely get this yet, but power calculations are done in reference to the acceptable alpha. More here.)

We also had to consider the fact that even though our results are framed in terms of the test results of individual students, the intervention is taking place at the classroom level. Since students are grouped in classrooms, we can’t treat each additional student in the study as an independent piece of information. Fundamentally, what protects us against error is the chance for students to randomly deviate from each other — if their performances are correlated because they share a classroom, each student tells us less, and there’s a bigger chance that an apparent effect isn’t real.

Instead, we told the software how correlated we thought the results of students in the same classroom would be. With that info we played around with the software to see what we would need to design a strong study. After tinkering with different inputs into the software, we settled on 50 classrooms.

(I found this example useful for helping me understand statistical power more clearly.)
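
If you want a feel for what that software is doing, here is a rough simulation sketch of a cluster-randomized power analysis. To be clear: this is my toy version, not Optimal Design, and the class size, effect size, and intraclass correlation are numbers I made up for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_simulated_study(n_classes=15, n_students=25, icc=0.2, effect=0.8):
    # Simulate both conditions, then analyze conservatively at the class level.
    means = {}
    for condition, shift in (("blocked", 0.0), ("interleaved", effect)):
        class_bump = rng.normal(0, np.sqrt(icc), n_classes)        # shared within a class
        noise = rng.normal(0, np.sqrt(1 - icc), (n_classes, n_students))
        scores = shift + class_bump[:, None] + noise
        means[condition] = scores.mean(axis=1)                     # one mean per class
    _, p = stats.ttest_ind(means["interleaved"], means["blocked"])
    return p < 0.05

# Power = how often a study this size detects the (assumed) true effect.
power = np.mean([one_simulated_study() for _ in range(2000)])
print(f"estimated power with 15 classes per condition: {power:.2f}")

The “a priori” part just means running this kind of calculation before collecting any data, and picking the smallest design where power comfortably clears your threshold.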

Next up:

“Because of the cluster design, we further examined test scores by fitting a two-level model (students within classes) with HLM Version 7.03. Using restricted maximum likelihood (REML) we first estimated a fully unconditional model to evaluate the variability in students’ scores within and between classes. To assess the difference between conditions, we used REML to estimate a two-level random-intercept model. Tests of the distributional assumptions about the errors at each level of the model (normality and equal variance) did not reveal any violations.”

My attempt:

The way we’re thinking about the results here is that there is an effect from interleaved practice, but this is an effect on the classroom. Then, within that classroom, there is random variation from the students.

Groups are funny things, statistically speaking. Sometimes one group can outperform the other but there is a tremendous amount of variation within the group. Suppose that we only looked at the classrooms to decide that interleaved practice was more effective — wouldn’t it still be possible that a lot of the students in the interleaved classroom did worse? And wouldn’t we want to know that?

Put it the other way, though: suppose we only looked at the students who received the interleaved treatment as one big group, ignoring the classes they came from. If they outperformed the blocked practice group, shouldn’t we worry that maybe this came from just one of the classes? Maybe a bunch of the classes couldn’t handle the interleaved condition at all, but there was one teacher who pulled it all together?

So what we do is use two models. First, an “unconditional” one that just measures how much of the variation in scores sits within classrooms versus between classrooms. Then a second model that estimates the effect of the condition while accounting for both kinds of variation.

We used a magic formula called “REML” to do this.
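
I don’t have HLM Version 7.03, but here is a sketch of that two-step idea in Python with statsmodels, using made-up data shaped roughly like the study (my stand-in numbers, not theirs):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fake data: 15 teachers, each with one interleaved and one blocked class.
rng = np.random.default_rng(0)
rows = []
for class_id in range(30):
    teacher = class_id // 2                # two classes per teacher
    condition = class_id % 2               # 1 = interleaved, 0 = blocked
    class_bump = rng.normal(0, 0.45)       # variation shared by classmates
    for _ in range(25):
        rows.append({"score": 0.8 * condition + class_bump + rng.normal(0, 0.9),
                     "condition": condition, "teacher": teacher, "class_id": class_id})
df = pd.DataFrame(rows)

# Step 1: the "fully unconditional" model. No predictors, just a random
# intercept per class; its variance components show how much of the score
# variation sits between classes versus between students.
fit0 = smf.mixedlm("score ~ 1", data=df, groups=df["class_id"]).fit(reml=True)

# Step 2: the random-intercept model with the treatment indicator.
# The coefficient on condition is the estimated effect of interleaving.
fit1 = smf.mixedlm("score ~ condition", data=df, groups=df["class_id"]).fit(reml=True)
print(fit1.summary())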

I tried, really, I did. One last effort:

“The level-2 class model included a dummy variable for condition and 14 dummy variables for teacher effects. Before examining the main effect of condition, we evaluated the potential interaction between teacher and condition and found no statistically significant interaction effects, p > .05. We then tested a main effects model that evaluated the effect of condition, controlling for teacher effects, and we found a significant effect of interleaving (p < .001).”

OK, I can’t do this one. Does this just mean that they checked to make sure there was no significant relationship between who taught the classes and the test scores? So that the interleaved practice results aren’t the result of perhaps a few stronger teachers ending up with more of the interleaved classes randomly?
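
For what it’s worth, here is how I would sketch that check in the toy setup above (hedging, because I may be misreading their model):

# Interaction model: does the interleaving effect differ by teacher?
# C(teacher) expands into the 14 dummy variables from the quote.
interaction = smf.mixedlm("score ~ condition * C(teacher)",
                          data=df, groups=df["class_id"]).fit(reml=True)

# If no interaction shows up, estimate one overall condition effect,
# keeping the teacher dummies in as controls.
main = smf.mixedlm("score ~ condition + C(teacher)",
                   data=df, groups=df["class_id"]).fit(reml=True)
print(main.pvalues["condition"])

If the interaction terms had been significant, the effect of interleaving would have depended on the teacher; since they weren’t, reporting one overall effect (controlling for teacher) seems fair.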

V. 

I was recently hanging out with some teacher friends, and they playfully told me that I was a bit of an academic troll to Jo Boaler. But I have hidden this criticism at the bottom of the post, so you can forgive me for connecting this paper to another little critique.

I had raised some questions about this paper of Boaler’s, titled “Changing Students Minds and Achievement in Mathematics: The Impact of a Free Online Student Course.” The paper records an experiment, but it has a lot of things that made me nervous. Perhaps most mysterious is this very fishy table:

Why did 200 more students end up in the control group than the treatment? Why doesn’t the paper mention any of that, not even a tossed-off explanation? A possibility, though not the only one, is that there was significant attrition from the treatment group because of the difficulty of the treatment. That could potentially impact the reported results. What if it was the most difficult-to-manage classes that dropped out of the treatment, while their teachers’ other classes stayed in the control group?

I don’t want to go in depth on Boaler’s paper, which is about something else entirely. But I think anyone interested in reading research could have some fun reading this interleaved practice experiment side-by-side with Boaler’s piece, because they make for a really rich compare/contrast pair.

They’re both experiments, but one is carefully, carefully constructed, and it’s convincing because the authors bring up potential issues and have an explanation for every decision they made. The other paper contains lacunae that are never explained.

Research is ideally designed with the skeptic in mind. You’re supposed to be able to read research skeptically. The whole point of research is to be able to withstand that skepticism and leave with your thinking changed, in some way. This is partly a matter of design, but it’s largely a matter of the writing itself which should be clear, generous to the reader, and eager to raise concerns.

So: not all experiments are created equal, and not all papers are either. This interleaved practice paper is a good one on both counts.

An extremely brief summary of what I’ve learned about math anxiety and timed tests over the past few days

  • There are many studies that find math anxiety impacts how well a kid does in math. This includes performance on timed tests.
  • There are pretty much no studies that attempt to find evidence that timed tests contribute to math anxiety. (See this thread for the full conversation.)
  • There are a handful of studies (two that I saw) that surveyed teacher candidates and basically asked them what makes them anxious about math. Along with some other things like word problems, timed tests are implicated.

  • A few people made the argument that “timed tests cause math anxiety” is an untestable hypothesis because it’s unethical. A few researchers chimed in: not untestable in practice, with caveats. (Researchers love caveats.)

  • A few people wanted to know where the evidence was that timed tests don’t cause math anxiety. But that would necessarily involve the same sorts of studies that don’t yet exist; the studies that could find a causal connection are precisely the ones that would be useful for showing there is no connection. Anyway, I wasn’t saying either of those things. I don’t have an opinion about the relationship between timed tests and math anxiety.
  • Well, OK, I have a few opinions.
  • A lot of people told me stories about the stress caused by timed tests. I hear you! Research isn’t the only thing that matters. We should keep telling our stories — about our children, experiences as students, what we’ve learned as teachers. True, it would be wise to hold off on the biggest and strongest proclamations (“WE KNOW THAT TIMED TESTS CAUSE MATH ANXIETY”) but just because something hasn’t been validated by research doesn’t mean that it’s not true.
  • But I’m suspicious of much of what YouCubed produces precisely because they present everything as a research result, an absolute law of Brain Science. When you look closer, the research results aren’t there — which isn’t to say that I, Michael Pershan, know that timed tests don’t contribute to math anxiety. Just that there’s a difference between what one thinks and what the research says.

Don’t ask “does it work?”

From Larry Cuban:

Do Core Knowledge Programs Work?

As for many school reforms over the past century, answering the “effectiveness” question–does it work?–is no easy task. The first major issue is answering the question of whether Core Knowledge was fully implemented in classrooms. If not completely implemented, then judging outcomes become suspect. Many of the early studies of Core Knowledge in schools were mixed, some showing higher test scores and some showing no positive effects (see here, here, here, and here). The Core Knowledge Foundation has a list of studies that they assert show positive outcomes. What is so often missing from research on reforms such as Core Knowledge are descriptions of the contextual conditions in which the reform is located and researchers saying clearly: under what conditions does this program prove effective? That is too often missing including the research on Core Knowledge schools.

The primary job of education research shouldn’t be to figure out what works; or, to put it another way, we shouldn’t expect to have a yes/no answer to that question. How does it work? When does it work? When it didn’t work, why didn’t it work? When it’s not used with fidelity, why wasn’t it used with fidelity? Education is not served well by the way research on program efficacy seems to frequently be done.

Did Common Core work?

Chalkbeat’s Matt Barnum does such a wonderful job reporting on edu research. Here he is, taking on a fascinating study that attempted to understand what effect the US’s Common Core standards had. Surprising results!:

How do you study a policy as far-reaching as the Common Core, particularly one that was introduced alongside a host of other school reforms?

It’s not easy, but Song and her colleagues reasoned that some states were more affected by the switch to “college and career ready standards,” which meant Common Core in almost all cases. So they categorized states by the “rigor” of their previous standards and how similar those standards were to the Common Core.

They divided states into those more affected by the switch (because their prior standards were deemed less rigorous or less similar to the Common Core) and those less affected. Then they compared how each group’s scores changed on fourth and eighth grade NAEP tests between 2010 and 2017.

Common Core didn’t seem to help students’ scores, and over time the standards may have had an increasingly negative effect, according to the study, which has not been formally peer-reviewed.

Other researchers consulted by Chalkbeat, including Laura Hamilton of the RAND Corporation, say the study’s approach is a credible one.

“I’m not ready to conclude that the adoption of rigorous content standards is bad for student learning,” said Hamilton. “But I don’t look at this and think this looks totally wrong. It definitely looks plausible.”

Still, the approach has limitations. Most important is that the study is comparing two groups of states that adopted the standards — so if Common Core universally helped or hurt the states that adopted it, this study would miss that effect.

More, on an unpublished report:

Joshua Bleiberg, a doctoral student at Vanderbilt University, is also studying the impact of Common Core on NAEP scores. His study starts examining student scores when new standards hit classrooms, not when states decided to formally adopt the Common Core. He also excluded states that ultimately dropped the standards.

That all seems to lead to different results. In preliminary findings shared with Chalkbeat, Bleiberg finds that the Common Core had small positive effects on NAEP scores through 2013. His study has not been released publicly, so it can’t be fully examined.

“This is not going to be the type of thing that is going to turn around the whole ship really quickly,” Bleiberg said of the standards. “I would think about [the effects] as quite small.”

I don’t know what to make of all of this. I’m very worried that my internal compass has recently been getting out of whack, that I’m getting too pessimistic about any possible major change. At the same time I’m increasingly happy to talk about fundamental issues with the way things are done. (For example, that math is frequently not serving kids particularly well in high school.) But — back to the pessimism — there isn’t much to do about it.

I guess I try to keep both of those ideas in mind at once, and to work in the tension between them.

Addendum: Though! Check out another piece of Matt Barnum reporting, about how reducing pollution increased test scores.

The situation seems to be that ed reform is less effective than you think it’ll be, but that improvements to quality of life tend to have more educational implications than you’d think. (The main driver of US educational improvements? GDP.)

My ed reform platform: capital improvements to schools, air conditioning in every school, better food for kids, cut pollution.

Modernism in Mathematics

Jeremy Gray makes the case (in here) that modernism applies to mathematics. His modernism consists largely of a move away from representations and towards formal approaches.

So on Lebesgue’s theory of the integral in 1903:

“The axioms specify what the integral is intended to do. They do not start from an idea that the integral is about, say area, or any other primitive concept. It is necessary to show that there is a model of these axioms, but once that is done it is at least possible to prove properties of the integral directly from the axioms and without reference to any model of them. The axioms are sometimes said to define their object implicitly, or to create it. There is no reference to a primitive concept available via abstraction from the natural world.”

And on Kronecker and Riemann:

“Neither man suggested that objects cannot be studied via their representations, but both believed that one must be vigilant to ensure that one establishes properties of the objects themselves and not the properties of merely this or that representation, and to this end it was best to avoid explicit representations whenever possible.”

I didn’t know about the Hausdorff paradox, which feels a lot like Gödel. Gray’s summary: “on any plausible definition of the measure of a set there must be non-measurable sets.”

Borel ended up critiquing the use of the axiom of choice to call the paradox into question, but this was another step (apparently) in pushing people to accept that definitions of area are inherently imperfect — pushing us further away from meaning and belief in the representations.

Another interesting point from Gray: you know that thing about the unreasonable effectiveness of math? That wouldn’t have made any sense in the 19th or 18th centuries because math was coextensive with science. Like, there’s nothing surprising at all about the connection between math and the world back then, because math was an attempt to describe the world.

I’m interested to read more, but I’m feeling as if a question has been answered. Whether we call it modernism or not, this is the time in the history of math when the connection between mathematics and the empirical world was made problematic. If we’re looking for the origins of the idea that math is “useless,” it’s going to be in this movement in mathematics between 1880 and 1920.

Things that I’d like to read: on modernity and mathematics

The world has changed immensely over the past several hundred years. Mathematics has too. Are these changes all related?

Plato’s Ghost looks like a good place to start.

Plato’s Ghost is the first book to examine the development of mathematics from 1880 to 1920 as a modernist transformation similar to those in art, literature, and music. Jeremy Gray traces the growth of mathematical modernism from its roots in problem solving and theory to its interactions with physics, philosophy, theology, psychology, and ideas about real and artificial languages. He shows how mathematics was popularized, and explains how mathematical modernism not only gave expression to the work of mathematicians and the professional image they sought to create for themselves, but how modernism also introduced deeper and ultimately unanswerable questions.

Building on Gray’s work is this presentation by Susumu Hayashi, which introduces (to me at least) the notion of “mathematical secularization.”

[slide from Hayashi’s presentation]

I also came across The Great Rift.

In their search for truth, contemporary religious believers and modern scientific investigators hold many values in common. But in their approaches, they express two fundamentally different conceptions of how to understand and represent the world. Michael E. Hobart looks for the origin of this difference in the work of Renaissance thinkers who invented a revolutionary mathematical system—relational numeracy. By creating meaning through numbers and abstract symbols rather than words, relational numeracy allowed inquisitive minds to vault beyond the constraints of language and explore the natural world with a fresh interpretive vision.

The focus is on early modernity and the shift to algebra, which is an earlier phenomenon than modernism. But maybe it’s part of the same story?

Also in the category of “is this related? maybe??,” there is a working group of philosophers that call themselves the Mathematics, Mysticism and Secularization working group.

There are all these -isms that I learned about in philosophy of math: empiricism, logicism, formalism, fictionalism (wiki). That’s part of this story too.

What I’m attracted to is the idea that math is as much a part of culture as anything else. Over the last few centuries Western society has gotten less and less comfortable with the abstract, invisible realm of religion and spirits. Wouldn’t that have an impact on how that culture thinks about that other invisible, abstract realm of mathematics?

People used to think of x^2 as referring just to a square’s area, but then it was emptied of that meaning. Is the break of algebra from geometry something like the break of philosophy from theology?

People used to think that mathematics was a search for ultimate truths, not just conditional ones. Are we living in a mathematically relativistic world?

Mathematicians sometimes talk — with pride! — of the uselessness of their work. Is that the end result of the sorts of processes described by these authors?

I have no clue, and I have no idea when I’ll be able to read those books. But the questions seem interesting and confusing.

Motivation

My interest in any particular piece of mathematical content varies. There are some things that I think are just absolutely fascinating, others…nah. So I think if I was, at bottom, motivated by a love of mathematics then my teaching would ultimately suffer for it.

Working with children, the future looms: I want these kids to have skills, to pass the tests, to get into schools, to get through college, to find meaningful work, to be able to see the world differently in the (hopefully) long, long time they’ll spend outside of schools. But when you take a look at what is genuinely preparatory out of what we’re supposed to teach, it’s hardly everything. People say that school math is mostly useless, and I do see what they mean.

So the answer for me is in the present, and I don’t worry too much about everything else that’s going on. The situation is that four or five times a day, a bunch of people get into a room and are supposed to study mathematics. I honestly don’t know why they’re supposed to study mathematics, but that’s just it: they are.

They are in this room, and I am also in this room and am supposed to help them learn it. If I quit tomorrow, someone else would do it, but I haven’t quit and I am in the room.

So given all this, the question for me is always, how can I do my job without making anybody feel dumb or miserable? And that gets me pretty far.