Update: This post describes an error in a paper that now appears to have been corrected. See the update at the bottom of the post.
Along with however many people are on YouCubed’s email list, I recently received this message from Jo Boaler:
Hello youcubians,
Three years ago we reached out to everyone who had taken Teaching Mindset Mathematics, a workshop we offer to share the curriculum, ideas and teaching practices from our 2015 summer mathematics camp for middle school students. We asked if they were planning to host a youcubed-inspired summer camp in 2019, and if so, whether they would be willing to participate in a study of student outcomes and teacher change. We heard from many people who were interested in engaging with us and ultimately enrolled 10 different camps and their school districts into our study, sponsored by the Gates Foundation.
The demonstration camp that youcubed hosted in 2015 had resulted in significant improvements in student mathematics understanding and mindsets, and the purpose of the new study was to learn whether this could be replicated by teachers across the US in a variety of settings. Today, we are very excited to announce that the results of this research have been published in the journal Frontiers in Education. I would encourage everyone to read the full study, which can be found here.
So, a published study funded by the Gates Foundation. I clicked and opened up the paper.
Look, you know what’s coming at this point, right? We’ve been down this road before. I’m going to show you that the paper is a mess. And who cares? Nobody. People who care about research, including a great number of math education professors, do not have respect for Boaler’s output. YouCubed’s popularity stems, I think, from consultants, PD leaders, and teachers themselves, not from the math education research community. Frankly, I think YouCubed would be just as popular if they never performed any research whatsoever. It’s totally inessential. As am I. As is this post. What am I doing with myself? How am I choosing to spend my life? That’s how I always end up feeling when I write about YouCubed.
Anyway, back to the paper, this is what they did:
A matched comparison analysis was employed to assess the effect of the approach on students’ achievement. School districts provided a variety of achievement measures of both participant and non-participant students (GPA and MARS scores, before and after camp participation; and a baseline math standardized test score), and a battery of control variables (race, ethnicity, gender, free and reduced-price lunch status, English learner status, and special education status).
OK, basically the research team couldn’t run an experiment where they randomly place kids in these camps, so they did a very reasonable next-best thing: they created a control group, of sorts, by matching each child in their camp with a child who had similar stats but didn’t go to the camp. So if Michael is a camper and has a GPA of 2.5 and a MARS score of whatever, etc., they tried to find a student, Jackson, who was not a camper but had the same GPA and MARS score. If Michael (and all the other campers) are suddenly performing much better after the camp than the “control” group, then maybe the camp is responsible for it. Or maybe it’s some subtle selection bias.
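For the curious, here is a minimal sketch of what that kind of matching can look like in code. Everything in it is invented for illustration (the column names camper, gpa_pre, test_pre, the within-district-and-grade rule, the nearest-neighbor distance), since the paper doesn’t spell out its actual procedure.

```python
import pandas as pd
import numpy as np

# Hypothetical student-level data; every column name and value is invented for illustration.
students = pd.DataFrame({
    "student_id": range(8),
    "camper":     [1, 1, 0, 0, 0, 1, 0, 0],
    "district":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "grade":      [7, 7, 7, 7, 8, 8, 8, 8],
    "gpa_pre":    [2.5, 3.1, 2.4, 3.0, 2.0, 2.8, 2.9, 2.1],
    "test_pre":   [55, 70, 52, 68, 40, 60, 63, 42],
})

def match_within_cell(cell):
    """Pair each camper with the nearest non-camper from the same district and grade,
    by baseline GPA and (rescaled) test score. No attempt to avoid reusing a control."""
    campers = cell[cell["camper"] == 1]
    controls = cell[cell["camper"] == 0]
    pairs = []
    for _, c in campers.iterrows():
        dist = np.sqrt((controls["gpa_pre"] - c["gpa_pre"]) ** 2
                       + ((controls["test_pre"] - c["test_pre"]) / 10.0) ** 2)
        best = controls.loc[dist.idxmin()]
        pairs.append({"camper_id": c["student_id"], "control_id": best["student_id"]})
    return pd.DataFrame(pairs)

matched_pairs = (students.groupby(["district", "grade"], group_keys=False)
                         .apply(match_within_cell))
print(matched_pairs)
```

The matched non-campers’ post-camp outcomes then stand in for what the campers “would have” done without the camp, and the selection-bias worry is exactly that nothing in a procedure like this accounts for whatever unmeasured difference made one kid sign up and the other not.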
This is all totally reasonable but in the end totally irrelevant for the issues I’m going to point out. I’m including it because, why not?
By the way, the details of this matching and a quantitative comparison of the groups are not given in the paper. Normally a paper would include that, but again that’s not so important in light of what I’m about to say.
OK, so we get to the part of the paper where we see how campers did compared to their control group. The first issue is that in this table it looks like the “non-participants” in the matched group had a higher GPA after the camp than the campers.
The body of the paper says, “This analysis showed that at the end of the first term or semester back at school, the students who attended the youcubed summer camp achieved a significantly higher mathematics GPA (p < 0.01, n = 2,417).” Now this is the exact opposite of what the table says, which is a problem, because either the table is wrong or the sentence is false. And, honestly, I assume that the table is wrong and the entire thing is mislabeled. OK.
The paper goes on: “On average, students who attended camp had a math GPA that was 0.16 points higher than similar non-attendees (i.e., students from the same district and grade and who had a similar baseline math GPA and test score) (Table 4).”
This is a problem because the table also does not say this. The difference between a GPA of 2.679 and 2.622 is 0.057 points. That is not 0.16 points. What am I doing, providing this level of scrutiny to this paper, i.e. basic scrutiny? Shouldn’t I be writing a book? Doing math? Reading something? Was I put on this Earth to fact check these papers, providing infinitely more care than the authors themselves apparently did?
Anyway, presumably the issue is that the body of the paper misinterpreted its own table. The 0.16 is the effect size, provided (and clearly labeled) on the right side of the table, not a difference in GPA points.
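To spell out the distinction: a raw difference in group means and a standardized effect are different numbers, even when they describe the same gap. Here is a toy calculation using a Cohen’s-d-style standardization as a stand-in; I don’t actually know whether the paper’s effect column is a standardized difference or a regression coefficient, and the standard deviation below is entirely made up, so this only illustrates the two kinds of numbers, not their analysis.

```python
# Toy numbers, loosely echoing the original Table 4; the SD is invented, not from the paper.
camp_mean, control_mean = 2.679, 2.622
assumed_sd = 0.35

raw_diff = camp_mean - control_mean        # difference in GPA points
standardized = raw_diff / assumed_sd       # a Cohen's-d-style effect, in SD units

print(f"raw difference: {raw_diff:.3f} GPA points")
print(f"standardized effect: {standardized:.2f} SD units (given the made-up SD)")
```

With that made-up SD the standardized number happens to come out near 0.16, which only shows that 0.057 GPA points and an effect of 0.16 are not contradictory quantities; it says nothing about what the authors actually computed.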
And now a bit more from the paper: “In addition, compared to control students, camp participants were 6 percentage points more likely to receive a grade of B or higher, and 5 percentage points less likely to receive a grade of D or lower (Table 4).”
This is not right either; again, I think they must be rounding up the effect sizes and reporting them as percentage changes. Silly stuff, quite honestly, but none of it matters at all, I don’t know.
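Just to pin down the units: “X percentage points more likely to receive a B or higher” should mean the gap between the two groups’ shares of B-or-better grades, as in this toy calculation (all numbers invented):

```python
import numpy as np

# Invented course grades for a matched pair of groups, on a 0-4 scale; B = 3.0.
camp_grades = np.array([3.3, 2.7, 3.0, 3.7, 2.0, 3.0, 2.3, 3.3])
control_grades = np.array([3.0, 2.3, 2.7, 3.3, 1.7, 2.7, 2.0, 3.0])

share_b_camp = (camp_grades >= 3.0).mean()        # share of campers with B or higher
share_b_control = (control_grades >= 3.0).mean()  # share of matched non-campers with B or higher

# "X percentage points more likely" is this gap, times 100:
gap_pp = 100 * (share_b_camp - share_b_control)
print(f"camp: {share_b_camp:.1%}, control: {share_b_control:.1%}, gap: {gap_pp:.1f} pp")
```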
By the way, there is no supplemental information posted, despite this statement.
So, I don’t know. I wrote to the corresponding author a month ago but haven’t heard back; I’m sure he’s busy, and I’m not sure what he would say.
It’s absurd that this was published as is, and it obviously doesn’t speak well for YouCubed or Boaler that this was pushed through. I…I am unsure what any of this means. Like, yes, it’s bad. But in a way it’s so bad that it doesn’t really touch her core areas of advocacy. I wish the research she was attempting was better because then it would actually be about something. Instead it’s about nothing besides YouCubed and Boaler herself, who has managed to become the popular voice for math education research in this country while showing complete disdain for the work of research itself.
I need to find a better hobby.
Update 2/28/22: The paper appears to have been corrected with a new Table 4. Nothing on the page explains the update or the source of the error, and I’m not sure why the effect is reported to more decimal places than the means:
The real question is whether the authors of the paper were part of the treatment group or the control group, because they clearly don’t know any math or data science.
Nice catch! There are so many more discrepancies like the one you just posted. Is anybody reading these (besides you)? Is there any office of academic integrity at Stanford that can be notified? I know some mathematicians tried that a while ago, but maybe it’s time again. I mean, this is embarrassing for Stanford. I almost feel bad for them.
Btw, isn’t the matched comparison analysis you mention the same shenanigans the CREDO (again, at Stanford) people did recently, comparing “virtual twins”? Again, at what point do the higher-ups at Stanford take notice?
I went to the link given in this blog to the original paper on Feb 28, 2022. Table 4 is different from what is in the blog, showing 2.62 for the camp group and 2.46 for the matched group. Apparently, it was a table error which has been corrected in the version reached by the link in this blog. Perhaps your catch of the error helped.
Yes, it has been updated! I should update the post to reflect this.
I wish there were some sort of explanation. The journal has a place on the page to indicate that updates have been made to the paper, but nothing is mentioned. It would also be good to have the supplementary information link work. And I don’t understand why the effect is more precise than the mean GPAs provided.
But hopefully it was just a simple error with the table that somehow slipped by everyone, and maybe the blog did help. (Or maybe not! Who knows?)