Good auto-grading
Some feedback makes kids give up, stop thinking, or feel bad. In my view, this is almost always feedback that doesn’t help learning. And this is why so much auto-grading is bad feedback — it doesn’t help learning. (In general, motivation is tangled up with success in ways that are difficult to separate.)
Auto-graded work sometimes makes kids feel bad. This happens when the auto-grading doesn’t lead to learning, or makes learning seem impossible. It’s not exactly mysterious why. Most of the time, when a computer tells you that something is wrong, that’s it. So you’re wrong. What are you supposed to do with that information, as a learner? If you knew how to do it right, you’d have done it right.
What would the ideal learner do when they get the “wrong answer” info? In some cases, they’d take a close look at their steps and try to suss out the error, essentially discovering the correct way to solve the problem on their own. But in a lot of cases, kids get a question wrong because they don’t know how to do it, or because they fundamentally misunderstand the problem. An ideal learner in that case would seek out the information they’re missing from a text, a video, a friend, or a teacher.
In my experience, auto-grading works best when it makes those ideal behaviors easier. I sometimes play around on the Art of Problem Solving’s Alcumus site, just for fun. It automatically tells me whether I’ve entered the right answer (though it gives me two chances, and it lets me give up if I want). Then there’s always a worked-out solution provided. It’s right there, waiting for me to read it. And then it gives me a chance to rate the quality of the explanation (which I find empowering in some cases).
The first incorrect notice gives me a chance to discover my own mistake and learn something from it. The second incorrect notice gives me a chance to study an example. And then I have a chance to practice similar problems (because the computer will continue to provide them). It feels very oriented towards growth. I can’t solve every problem on that site right now, but I’m confident that with enough time I could.
Deltamath does this nicely as well, though of course not every student reads every explanation or watches every video. It works best in a classroom, where students can ask each other or me if they get something wrong — again, that’s auto-grading used in a context that makes it even easier to act as an ideal student would.
I’d also like to suggest that there isn’t a meaningful difference between auto-grading and a lot of the “insta” feedback that kids get in current Desmos activities. If a kid understands what a graph means, then they understand that their answer didn’t produce the correct graph. If they don’t understand why, you’ll see the same giving-up behavior that auto-grading can produce — or they’ll guess and check with the graph until they get a correct answer, which in some cases is not a bad idea: get the answer, then try to figure out why it’s correct. In either event, Desmos currently employs a great deal of de facto auto-grading in their activities.
One way Desmos could help is by making it easy for teachers to connect students to learning. You might make it easy for teachers to attach examples or explanations to a wrong answer. You might make it easy for students to ask the teacher a question via a textbox if they get an answer wrong and they can’t figure out the problem. You might enable teachers to include a brief explanation with the wrong answer, and then let kids rate the quality of that explanation. (Really, check out Alcumus.)
There are smart ways to do auto-grading, I think. The smartest way, though, is to make sure it’s happening in the context of a lot of interaction between students and a teacher.
(Cross-posted as a comment to Dan’s post.)