Why should mindset and purpose interventions work equally well?

[Figure: results for the two interventions tested in the 2015 paper discussed below.]

This figure is from a 2015 paper, Mind-Set Interventions Are a Scalable Treatment for Academic Underachievement, and it comes out of the PERTS group, which generally does great work (as far as I, as an outsider, can tell).

There’s something fascinating about this study. I think, very quietly, their work represents a conceptual shift in research on mindset. The move is away from mindset and toward interventions as the main object of study.

I have Carol Dweck’s Mindset book, and it’s pretty clear that for her mindset is supposed to be a uniquely powerful force in our motivational psychology. It is the key. There really are two types of people — those who view intelligence as fixed and those who view it as malleable — and which type you are is a major factor in your motivation and subsequent success in a variety of arenas.

But check out this 2015 paper and check out that figure — there are two interventions that they tested, and only one of them has anything to do with mindset. First, the typical implicit theory of intelligence intervention:

Growth-mind-set interventions convey that intelligence can grow when students work hard on challenging tasks—and thus that struggle is an opportunity for growth, not a sign that a student is incapable of learning.

But then there’s the sense of purpose intervention which has nothing to do with the malleability of anything:

Sense-of-purpose interventions encourage students to reflect on how working hard and learning in school can help them accomplish meaningful goals beyond the self, such as contributing to their community or being examples for other people.

The theory that supports this intervention is entirely unrelated to growth mindset theory. It takes no position on whether someone thinks of human attributes as essentially fixed or malleable. If you thought that growth mindset was a hugely impactful factor governing motivation, there’s no reason at all to expect that a sense-of-purpose intervention would work.

(There’s a reeaaaal cool move when the authors call both of these “academic mindset interventions” in that paper.)

And this study found that the two types of interventions worked about equally well. And their benefits didn’t seem to combine, which is also interesting, because why wouldn’t they, if they’re separate motivational concerns?

One possibility: people tend to be demotivated by their theory of intelligence or by an absence of purpose, but not by both. Another possibility is that demotivated people tend to respond equally well to either intervention.

(I imagine there’s a lot of ways to sort this out with the data they’ve already collected. Which intervention works better for students assessed as having a fixed mindset?)

The second possibility — that both interventions work equally well for at-risk students — would be really interesting: it would suggest that the theory behind the mindset intervention doesn’t matter a ton. What if all this under-the-hood theory doesn’t matter a great deal? What if motivational interventions and their design are the thing worth studying, and the basic theory underlying them is beside the point?

If it’s true, this would make a great deal of sense to me. Dweck’s mindset theory would not have predicted that you could get the same results with an intervention like sense of purpose that uses an entirely different mechanism. (People who underwent the purpose intervention didn’t have changed beliefs about intelligence — they checked.) Mindset was supposed to be the big thing. The fact that it’s now being considered as part of a menu of motivational interventions along with purpose seems significant. We’ve already moved most of the way away from seeing it as a uniquely powerful theory for explaining motivation.

And maybe the authors are saying as much in their paper. After all, it seems that now a mindset researcher doesn’t study “mindsets” at all but “mindset interventions,” which is a totally different thing.

I eagerly await something that will help clarify things. Speaking of, does anybody have a copy of this preprint? I wish I’d held on to it before it was taken down. (Update: oh, I think this is it. If so, it seems like sense-of-purpose interventions weren’t in play.)

The intellectual work that teachers can do but researchers probably can’t

[I’ve written versions of this post many times before. Here, here, here. Don’t read those, this version is probably better.]

Tomorrow night, I’m going to teach teachers about teaching. I think a legitimate question is, on what grounds am I claiming to know anything about teaching at all?

To be sure, I am pretty confident that I know something very important about my own teaching — in my school, with my students, in my courses, given my personality, etc. I observe my own classrooms (imperfectly) every day. The cumulative evidence of all that observation makes me pretty (not fully) sure that I’ve figured something out.

But the tricky thing about teaching is that this stuff often doesn’t translate to other situations. Just because something works in my classroom (according to me) doesn’t mean that it’ll work in vastly different contexts. To get really specific about this for a second: I teach students who are among the wealthiest children in America. This reality impacts my school in a bajillion ways. Who says that my dumb ideas about feedback will mean anything to the other teachers in my department? Teachers in other schools, and especially high-poverty schools? Forget about it.

(To be fair to myself for a second: I haven’t only taught in my school.)

The point is that there are obvious reasons to doubt that the things I think I know are really truths of teaching. This is even true if we move past the particular practices that I advocate and get behind the thinking and values that support those practices. I think I have a useful way of thinking about teaching, or I think I’ve identified some value that is important for the student experience. Who says that this is anything but my own thinking?

This is the natural place that research on teaching enters the conversation. Whatever you want to say about research, it’s not about my classroom. In general, it’s about forming generalizations in a way that improves upon (e.g.) my ability to make stuff up about my teaching.*

There are lots of interesting edge cases to consider, but I think the generalization about generalizations stands. Researchers might write cases grounded in particulars or engage in a teaching experiment, but the point of those is to contribute to the formation of generalizations that are broadly useful. 

This is getting pretty abstract so let me just get to the point: could researchers ever respect the generalizations that teachers make about teaching as knowledge that stands on par with their own?

The usual way of talking about teacher/researcher parity is to say that researchers excel at making generalizations, while teachers contribute crucial local knowledge. And it’s totally true that teachers do have local knowledge.

But does this really create parity between teachers and researchers? The whole point of broadly useful knowledge about teaching is that it goes beyond local knowledge — it makes a generalization. If what teachers can contribute is local knowledge, then I think we’re just saying that teachers are at best a source of data to the researcher. The teacher inputs local knowledge, the researcher generates broadly useful generalizations.

It’s true that there’s no inherent reason to value general over local knowledge, so in a certain sense there can be parity between teachers/researchers. But at the end of the day, what’s broadly useful are generalizations, and teacher knowledge can’t really compete with what researchers contribute.

Or…can we?

I want to speculate a bit about some different ways of sorting out the relationship between what researchers and teachers can contribute. To start, I want to ignore local knowledge for a second and talk about how teachers contribute to generalizations, i.e. researcher turf.

I have a few ideas here, and they’re very rough, so bear with me.

First, researchers are institutionalized while teachers are necessarily amateurs at producing generalizations. The relationship between teachers/researchers can then be folded into the general relationship between amateurs and experts. And, of course, we need experts. But the ecosystem isn’t healthy if it’s entirely populated by experts.

Amateurs play a lot of important roles, even when it comes to forming generalizations that are broadly useful. Here are a few that I’ve read about (too lazy to cite right now):

  • Amateurs can disrespect the boundaries of fields or sub-professions and put together ideas that, from an institutional perspective, are incongruous.
  • Amateurism is in general lower stakes/lower payout than being an expert. If I’m an amateur and my ideas are wrong or useless, my career isn’t on the line. So there’s a way in which amateurs can attend to riskier ideas, or work on lines of thought that are perceived to be less rich in reward or are in general undervalued.
  • Amateurs play an important role in teaching and spreading expert generalizations, but in doing so amateurs often simplify or otherwise improve the results of experts in significant ways.

But this way of framing things — teachers as amateurs, researchers as experts — doesn’t really leave room for teachers to ever get institutional respect from experts as generators of generalizations about teaching, and two further points on that:

  • this is probably true
  • this is much more exciting to me than institutional respect

It’s not good for my $$$, but I am really quite fine accepting the role of an amateur in all this. It’s exciting to try to smash fields together and to not be beholden to conventional wisdom in the field. I can chase ideas about teaching, throw them out there, and see what resonates for others. We make up the rules as we go. It’s fun!

That’s the spirit in which I’m going to teach this class tomorrow night. I’ve got this stuff I’ve figured out about teaching. I don’t want to make myself sound like a tin-hat Alex Jones-type, but I do think that what I’ve learned about teaching goes against a certain conventional, institutional, expert way of thinking. And it is the result of mashing up a bunch of things — trial-and-error in the classroom, reading research, experimenting by giving presentations to teachers. And if it’s not broadly useful as a generalization about teaching? Hey, that’s OK too. There’s very little at stake here.

So, I’m not an expert, and neither are you: maybe these ideas are useful to you? Let’s find out. That’s the way I approach this stuff right now, as an amateur.

What is retrieval practice when you’re learning math?

I’ve never really carefully read the retrieval practice literature, but I think it gets confusing when people talk about retrieval practice when talking about math skills, as opposed to mathematical facts.

Here is the description from @poojaagarwal‘s website, which is committed to promoting retrieval practice among practitioners:

Retrieval practice is a strategy in which calling information to mind subsequently enhances and boosts learning. Deliberately recalling information forces us to pull our knowledge “out” and examine what we know. For instance, I might have thought that I knew who the fourth U.S. President was, but I can’t be sure unless I try to come up with the answer myself (it was James Madison).

But how does this apply to math skills? Can trying a problem (i.e. practicing the skill) ever count as retrieval practice? Does it make sense to use the metaphor of ‘calling information to mind’ to describe what’s going with skills practice?

I think not. But I’m also finding retrieval practice useful in my lesson planning. There is a great deal of knowledge that is useful for students to have when they’re learning something new: the sort of thing that I’d like my students to know (i.e. retrieve from memory) rather than derive.

Often, at the beginning of class, the first thing I ask my students to do is to remember some facts that they may (or may not yet) know from memory. Some constraints:

  • I don’t ask students to solve a problem and call it retrieval practice — that’s skills practice, not retrieval practice, and tickles other parts of the mind.
  • I only ask students questions that I think they could remember, even if it might be difficult to recall these things. Ideally, these would be things that students could derive if they can’t recall them.
  • Because stuff from the last few days of class can often get forgotten really quickly, I often use these prompts to strengthen the memory of what we’ve recently done. (The prompt “Summarize what we did yesterday” is surprisingly difficult!)

Here are some prompts I’ve recently used with students:

“Draw a pair of ramps that are pretty close to being of equal steepness.”

“Write an equation of a quadratic, describe what it would look like.”

“What happens when you use the tan button on the calculator? Give some examples.”

“Write several pairs of decimals, and write the number that is between them.”

The truest ‘retrieval practice’ of these is the one about the tan button. Next in line is the one about the equation of the quadratic, since I’m prompting kids to remember what the features of the graph are (though it’s also skills practice). What made me think about these as retrieval practice is that they were all calling back on the previous day’s class.

Here are some purer examples of retrieval practice prompts in math:

“What’s the Pythagorean Theorem?”

(If a specific procedure is supposed to be known for converting a decimal into a fraction:) “How do you convert a decimal into a fraction?”

etc.

As I’m messing around in graph theory, I’m noticing that there are a lot of things that would be useful to remember — particular proofs that could serve as paradigms, constraints (in the form of inequalities) on possible planar or non-planar graphs, theorems, specific graphs that are useful examples, etc. If I had a teacher of graph theory, I’d want that teacher to prompt me to remember these things so that I could have more of them available as resources when I’m trying to learn something new or do some creative proving or problem solving.

(I should probably bust out some flash cards at some point…)

As an aside, I think that retrieval practice is sometimes mixed up with spaced practice, but I think these are different things. Spaced practice might be a better fit for what people are describing when they talk about intentionally building time-separated practice of skills into their courses and assignments. I think this requires a different sort of finesse than retrieval practice, though, as the problem with spaced practice is making sure students have something productive to do if they’ve actually forgotten the material.
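To make the distinction concrete, here’s a minimal sketch of a Leitner-style spacing scheduler (my own toy illustration in Python, not anything from the retrieval practice literature; the box structure and gap lengths are made up). The point is that spacing only answers when a skill comes back; nothing in it says whether the practice itself involves recall from memory:

```python
from datetime import date, timedelta

# A toy Leitner-style schedule: a skill moves up a "box" each time it's
# practiced successfully, and higher boxes come back after longer gaps.
# The gap lengths are invented for illustration.
GAPS = {1: 1, 2: 3, 3: 7, 4: 21}  # box number -> days until next practice


def schedule_next(box, success, today):
    """Return the skill's new box and the date it should come back."""
    new_box = min(box + 1, 4) if success else 1  # miss it and you start over
    return new_box, today + timedelta(days=GAPS[new_box])


# A student nails "slope from two points" while the skill sits in box 2:
box, due = schedule_next(box=2, success=True, today=date(2018, 3, 1))
print(box, due)  # 3 2018-03-08
```

Retrieval practice, by contrast, is about what the student does in the moment of practice, whichever day it lands on.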

Excerpting Freddie de Boer’s defense of the SAT

Just read the whole thing here, but I’m disposed to agree with much of it. (Though check the bottom of the post for a research piece that makes me doubt my instincts here.)

The SAT does not enjoy a good reputation among progressives. Arguments against the use of the test, as well as its analog, the ACT, abound. Both are widely derided as tools of elitism, rejected as culturally biased, and denounced for dehumanizing test takers.

I understand the intuitive feeling that we should not reduce human potential to a test score. And the major testing companies (and nonprofit organizations like the Educational Testing Service, which basically function like companies) are not particularly sympathetic entities. But if you believe in equality and a more level playing field in college admissions, you should defend the SAT.

Coaching doesn’t work so well:

Critics of standardized tests often complain that affluent students have greater access to test prep materials and coaching. This is indeed a concern, but the research here is clear: coaching services produce far smaller gains than those advertised by the big test prep companies, which routinely claim triple-digit improvements.

A 2006 meta-analysis found that students retaking the SAT after coaching resulted in, on average, an increase of about 50 points on a 1600 scale. That’s not an insignificant number. However, as the researchers point out, we can expect some of that gain to occur simply through increased familiarity with the test and, for lower-scoring students — the type most likely to retake the test — regression to the mean. More recent research found that, after using statistical controls to compare similar students, the combined effect of coaching on a 1600 point scale was about 20 points.
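To see how retest gains can appear out of thin air, here’s a toy simulation (mine, not from Freddie’s piece; all the numbers are invented). Nobody in it gets any coaching, but the lower scorers who retake still gain on average, purely from regression to the mean:

```python
import random

random.seed(0)

# Each student has a stable "true score" plus test-day noise. Nobody
# receives any coaching; we just watch who retakes and what happens.
N = 100_000
true_scores = [random.gauss(1050, 180) for _ in range(N)]
test1 = [t + random.gauss(0, 60) for t in true_scores]
test2 = [t + random.gauss(0, 60) for t in true_scores]

# Suppose the students who scored below average are the ones who retake.
retakes = [(s1, s2) for s1, s2 in zip(test1, test2) if s1 < 1050]
avg_gain = sum(s2 - s1 for s1, s2 in retakes) / len(retakes)
print(f"Average 'gain' with zero coaching: {avg_gain:+.0f} points")
# Comes out clearly positive (roughly +15 here), with no coaching at all.
```

The students who score below their true level on test one are overrepresented among retakers, and their noise washes out on test two. Some of any advertised coaching gain is just this.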

Other metrics for judging admissions are easier for the well-off to game:

Detractors of entrance exams often argue for more “holistic” methods of evaluating students than tests, pushing for greater emphasis on student activities, college essays, and letters of recommendation. They argue that these things allow them to select students that are more than just grades and test scores and build a diverse student body. As Jennifer Finney Boylan put it in a piece decrying the SAT, the only way to fairly choose between applicants is “to look at the complex portrait of their lives.”

But this reasoning goes directly against the stated goal of equality. It should be obvious: affluent parents have far greater ability to provide opportunities for extracurricular (and frequently out-of-school) activities than less affluent parents do.

According to this Malcolm Gladwell piece (caveat lector), extracurriculars were brought into college admissions precisely to exclude groups (like Jews) who were academically high-performing but not seen as ideal candidates for admission:

The difficult part, however, was coming up with a way of keeping Jews out, because as a group they were academically superior to everyone else. Lowell’s first idea—a quota limiting Jews to fifteen per cent of the student body—was roundly criticized. Lowell tried restricting the number of scholarships given to Jewish students, and made an effort to bring in students from public schools in the West, where there were fewer Jews. Neither strategy worked. Finally, Lowell—and his counterparts at Yale and Princeton—realized that if a definition of merit based on academic prowess was leading to the wrong kind of student, the solution was to change the definition of merit. Karabel argues that it was at this moment that the history and nature of the Ivy League took a significant turn.

The admissions office at Harvard became much more interested in the details of an applicant’s personal life. Lowell told his admissions officers to elicit information about the “character” of candidates from “persons who know the applicants well,” and so the letter of reference became mandatory. Harvard started asking applicants to provide a photograph.

And to provide evidence that grades can be gamed, Freddie cites this:

Research by Michael Hurwitz and Jason Lee found that, from 1998 to 2016, the average high school GPA rose from 3.27 to 3.38. That may not sound like much, but distributed over millions of students, it’s a large increase. What’s more, the phenomenon is concentrated at the top.

That said, maybe Freddie is wrong. I was talking to researchers on Twitter who found that grades are superior to the SAT for predicting success in college, precisely because they measure more than the SAT, like the ability to ask for help, etc.

So, I don’t know if Freddie is right on this. I need to think more and try to put the pieces together.

Growth Mindset Roundup

From Marginal Revolution, “Growth Mindset Replicates!”:

In other words, a small, positive effect. But this small effect is coming from a small intervention, two online survey/interventions of 25 minutes each that could be easily scaled to the entire country or even worldwide. We have come a long way from the “mindset revolution” but who am I to discount a marginal revolution? Moreover, the average effect hides heterogeneity, the effect was bigger on the students who needed it most.

Some past opponents of mindset see this as the death of mindset. David Didau has this sort of take in “The nail in Growth Mindset’s coffin?” He says the true parts of mindset are obvious, the false parts the result of magical thinking:

What this might suggest is that students who have previously underachieved improve when told that if they took more responsibility and worked harder they might do better, and that good behaviour makes a positive difference to any intervention. Neither of which are all that surprising.

This seems to me somewhat unfair. It’s good to have research on the degree and extent to which obvious things can be shown to have an effect. And, as Didau accurately reported, there had been a number of failed replications in the past. This is decidedly not a failed replication.

There are two new papers out on mindset to keep track of. The first is a huge, carefully done experiment. The other is a huge, carefully done meta-analysis of previous studies.

I haven’t read either carefully yet, but I’ve found it interesting to follow researchers discussing the papers. My impression is that methodologically they hold up to scrutiny. Here are some sample tweets:

Re the meta-analysis:

I don’t entirely stand by this tweet any longer — things don’t feel so confusing now:

What are the educational implications of all this? I think that the claims about the power of mindset interventions to have a huge impact on learning have now been clearly contradicted by our best research.

It’s hard to know how to talk about this. Smart teachers and educators always knew that these interventions couldn’t have a huge impact on kids, especially if the rest of the classroom pieces weren’t there. That said, not everyone is smart about this, and there was a time early in my teaching life when I believed the over-simplified story about mindset that I heard.

Jo Boaler and YouCubed are pretty clearly going farther than what evidence dictates. I wish they wouldn’t, and am powerless to stop them, and it makes me sad that they don’t seem to care. Defenses of them seem to come down to “well it’s not true but it’s a net-good message so spread it far and wide!” This is something that goes against all my instincts. I don’t do well with this sort of utility calculation.

Anyway, for a picture of exactly the sort of thing that the evidence does not support now (if it ever did), you can check out this video from Boaler and YouCubed, about how believing in yourself has been scientifically proven to change how your brain works and improve your achievement. Blech:

Likewise, I don’t see support for the sort of mindset interventions that are built into the first week of the New Visions math curriculum. There’s some good math in there, and maybe it’s good to talk about growth mindset during that math, I don’t know. It depends on how time-intensive the mindset stuff is, I think.

Where are we headed? Growth mindset is just going to be another high-level name for describing good teaching. It matters, but as a goal, or a value that connects a lot of disparate elements of teaching practice.

It feels like the mindset story is coming to a conclusion with these big, careful studies.

Where did working memory come from?

This is a quick note, before I forget the last couple things that I read.

Where did working memory come from? Here’s my picture so far:

  1. There’s a limit on how much random stuff you can remember at once. You don’t need science to know this; you just need random stuff that you have to remember. I assume this has been known forever.
  2.  People had different names for this. William James called it primary vs. secondary memory. Others were calling it long-term vs. short-term store. What was controversial was whether these constituted just two facets of the same memory system, or whether they were two totally independent memory systems.
  3. Evidence for independence comes largely from patients with brain damage. These patients either are amnesic, but do perfectly fine with short-term memories, or else they have greatly impaired short-term memories but otherwise function and learn OK. This suggests independence.
  4. Question: does short-term memory constitute a working memory for functioning and long-term memory? In other words, is short-term memory necessary for learning, reasoning and comprehension?
  5. Baddeley and Hitch do a battery of experiments to show that impairing short-term memory with verbal info does (modestly) impair learning, reasoning and comprehension. This is evidence that short-term memory does constitute a working memory system.
  6. But the thing is that performance was only modestly impaired by their experiments, so there must be more to the cognitive system than what their experiments uncovered. (Their experiments almost entirely used span tasks that ask people to remember a bunch of stuff, random numbers, letters, etc.)
  7. They then go beyond their experiments to make a guess about what the structure of working memory might be. They propose that there is a capacity to just passively take in verbal info, up to a point. Beyond that point, a “central executive” has to take active steps to hold on to info. Thus working memory limitations come both because the passive store has a limited capacity and also because the central executive can only do so much at once. The span experiments fill up the passive store and force the active executive to do something. This lets the task get done, but at a cost in performance. They also guess that there is a visuo-spatial store that is entirely parallel to the phonological store. (There’s a toy sketch of this structure just after this list.)
  8. This takes us up to, like, 1974 or something.
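Here’s how I picture their guess, as a toy sketch (entirely my own; the capacity and effort numbers are invented, and nothing like this appears in their papers). A passive store holds a few items for free; overflow forces the central executive to spend attention on rehearsal, which degrades but doesn’t destroy performance on the main task:

```python
# Toy sketch of the 1974 Baddeley-Hitch structure. All numbers invented.
PASSIVE_CAPACITY = 3    # items the phonological store holds "for free"
EXECUTIVE_BUDGET = 10   # arbitrary units of attention per task
COST_PER_OVERFLOW = 2   # effort to actively maintain each extra item


def reasoning_capacity(memory_load):
    """Fraction of the executive's attention left for reasoning,
    given how many items it must hold on to at the same time."""
    overflow = max(0, memory_load - PASSIVE_CAPACITY)
    remaining = max(0, EXECUTIVE_BUDGET - COST_PER_OVERFLOW * overflow)
    return remaining / EXECUTIVE_BUDGET


for load in range(7):
    print(f"span load {load}: {reasoning_capacity(load):.0%} left for reasoning")
# Loads up to 3 cost nothing; heavier loads degrade performance modestly,
# which matches the modest impairment their span experiments found.
```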

 

I’m realizing I don’t really understand what working memory is

I’m trying to find some solid ground. Here are the questions I’m trying to get straight on:

  • Suppose someone didn’t believe in the existence of a separate short-term memory system, just as (apparently) people in the ’50s and ’60s were skeptical. How would you convince a skeptic?
  • What is the working memory system, anyway?
  • Say that you were a behaviorist, someone uncomfortable with talk of the cognitive. How would you make sense of the observational findings?
  • More concretely, what were the problems that Baddeley and Hitch were trying to solve when they introduced their working memory model?

I’m looking for foundations. Prompted by this, I went back to this, which brought me here, to Warrington and Shallice’s 1969 case report on a patient with severely impaired short-term memory.

The case is fascinating:

“K. F., a man aged 28, had a left parieto-occipital (“head” – MP) fracture in a motor-bicycle accident eleven years before, when a left parietal subdural (“brain” – MP) haematoma was evacuated. He was unconscious for ten weeks.”

He had lasting brain damage, especially when it came to language:

“His relatively poor language functions were reflected in his verbal I.Q. His ability to express himself was halting, and some word-finding difficulty and circumlocutions were noted.”

His short-term verbal memory in particular was damaged:

“The most striking feature of his performance was his almost total inability to repeat verbal stimuli. His digit span was two, and on repeated attempts at repeating two digits his performance would deteriorate, so that on some trials his digit span was one, or even none. His repetition difficulty was not restricted to digits; he had a similar difficulty in repeating letters, disconnected words and sentences. Single verbal items would be repeated correctly with the exception of polysyllabic words which were on occasion mispronounced.”

The thing that made him especially interesting was that, for a guy with significant short-term memory damage, there were a lot of things that he could do:

“Memory for day-to-day happenings was good and he had an adequate knowledge of recent and past events. Immediate memory for the Binet figures was accurate.”

Here is the real surprise, for people in the 1960s: his long-term learning was, actually, not bad. Consider the ten-word learning task, at which he performed admirably:

“A list of 10 high-frequency words was presented auditorily at the standard rate. Subjects were required to recall as many words as possible from the list immediately after presentation. This procedure was repeated until all the 10 words were recalled (not necessarily in the correct order). K. F. needed 7 trials. Twenty normal (“didn’t fall off a motorcycle” – MP) controls took an average of 9 trials, 4 of the subjects failing on the task after 20 trials. After an interval of two months he was able to recall 7 of these 10 words without relearning.”

On two other long-term memory tests, K. F. seemed to be performing normally as well.

And, what’s the significance of all this?

This was written when the existence of a short-term memory system was not universally accepted. (Is it universally accepted now? It feels like it but I don’t actually know.) And it’s useful to me for identifying what the core, foundational findings are that we have to grapple with in memory. There really aren’t very many, it seems.

At the core of things is the “digit span” task. This is the finding that there is some sort of limit on how many random things we can remember. This itself was the core finding that was supposed to support short-term memory. (“All subjects have a limited capacity to recall a series of digits or letters, and this limitation is regarded as a characteristic of the ‘short-term’ memory store.”)

The strongest evidence that this digit-span task was measuring a totally different system of memory was the evidence of amnesiacs, whose long-term memory is severely impaired:

“The question as to whether the organization of memory is a unitary process or a two-stage process has received much attention in recent years. The strongest evidence that there are separate short- and long-term memory systems is provided by the specific and isolated impairment of long-term memory in amnesic subjects.”

If you can have short-term memory but no long-term memories, and you can measure this with all sorts of repetition and digit span tasks, then there needs to be some distinction between two memory systems. Right?

Here, though, they found the opposite. A patient could have pretty normal long-term memory performance even though their short-term memory system was severely impaired.

In a different paper (1970) they lay out the implications of this for then-current controversies about short- and long-term memory:

“Most important, the results present difficulties for those theories in which STM and LTM are thought to use the same physical structures in different ways. (Because, I suppose, they’ve shown STM and LTM to be doubly independent of each other. – MP) They also indicate that the frequently used flow-diagrams in which information must enter STM before reaching LTM may be inappropriate. On this model, if the STM system were greatly impaired, one would expect impairment on LTM tasks, since the input to the LTM store would be reduced.”

What do they suggest?

“In light of these findings, it is suggested that a model in which the inputs to STM and LTM are in parallel rather than in series should be considered.”

One way of thinking of this could be that their patient, K. F., had damaged his ability to encode information about verbal sounds, but not his ability to encode the meanings behind those words, and that long-term memory is a system for storing meanings while short-term memory is just a system for storing sounds.

This is all fifty years ago, of course. But I think it’s helpful for me to understand what it took to get from a world where this seemed as plausible as the alternatives, to a world where scientific communities seem to universally accept that alternative.

That there’s a distinction between STM and LTM is beyond question. This is something that has been confirmed a million times over. Amnesiacs and people like K. F. speak to the distinctness of short- and long-term memory systems. We also experience this a million times daily.

What is up for debate in the early 1970s is the relationship between these two systems. Is it one big system (“unitary”), with STM feeding directly into LTM? This study challenged that, suggesting that they were two fully independent memory systems.

Current models of memory suggest that they are connected, though in a more complex way than was understood before Baddeley and Hitch came along. (At least, I think that’s what’s going on…)

Baddeley, A. (1983). Working Memory. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 302(1110), 311-324.

I. 

This paper seems to be one of, like, a dozen mostly equivalent review pieces on working memory that Baddeley has written over the years. This one strikes a nice balance between concision and context, so it’s the one I chose to read with care.

Baddeley’s contribution is the idea that working memory consists of multiple independent components. When he was writing this paper, there were only three parts that he’d identified in short-term, working memory: a place for remembering sounds, a place for remembering space and visuals, and a “central executive” who is in charge of the whole system. This is in contrast to earlier models that didn’t feel the need to distinguish between different components of working memory.

There are two lenses through which I’m reading this. First, how did we get here? Second, what metaphors do we use to describe memory?

Here is a handy table from a psych textbook, by the way:

[Table from a psych textbook.]

II. 

Where did Baddeley come from?

Baddeley positions his work as a reaction to the “modal model” of memory, popular in the 1960s, one that is represented by (I’ve learned) Atkinson and Shiffrin.

Atkinson and Shiffrin were people; this is all I know about them.

The modal model, according to Baddeley, looked like this:

  • Short-term memory is one unit — no components. This is the working memory, the memory whose main function is to facilitate learning, reasoning, decision-making, etc.
  • Long-term memory is where you keep memories for the long term.
  • Here’s how long-term memories are formed: stuff automatically goes from short-term memory to long-term memory, but the process takes some time. (That’s why you don’t learn everything.) If you want to learn something, you have to make sure it’s rattling around short-term memory for long enough.
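The third bullet is the one you can caricature in code. Here’s a toy sketch of the modal model’s learning story (entirely my own illustration; the transfer odds are invented): every tick an item spends in the short-term store is another chance to transfer into long-term memory.

```python
import random

random.seed(1)

# Toy modal model: each tick an item spends in the short-term store
# gives it one more chance to transfer into the long-term store.
# The odds are invented for illustration.
TRANSFER_CHANCE_PER_TICK = 0.05


def becomes_long_term(ticks_in_stm):
    """Did the item make it into long-term memory?"""
    return any(random.random() < TRANSFER_CHANCE_PER_TICK
               for _ in range(ticks_in_stm))


for ticks in (1, 10, 50):
    learned = sum(becomes_long_term(ticks) for _ in range(10_000)) / 10_000
    print(f"{ticks} ticks in STM: learned {learned:.0%} of the time")
```

On this story, the BBC’s “saturation advertising” (below) should have worked, since the phrase spent plenty of time in short-term memory. It didn’t.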

And what did things look like in the 1960s, when the modal model was prominent? I don’t have a clear picture of this. Wikipedia sends me to a piece from 1963 that talks of an explosion of results relating to short-term memory in the preceding years:

In 1958, and increasingly thereafter, the principal journals of human learning and performance have been flooded with experimental investigations of human short-term memory. This work has been characterized by strong theoretical interest, and sometimes strong statements, about the nature of memory, the characteristics of the memory trace, and the relations between short-term memory and the memory that results from multiple repetitions. The contrast with the preceding thirty years is striking. During those years most research on short-term memory was concerned with the memory span as a capacity variable, and no more…I venture to say that Broadbent’s Perception and Communication (1958), with its emphasis on short-term memory as a major factor in human information-processing performance, played a key role in this development.

So the picture you get is that there was controversy about the distinction between short-term and long-term memory, springing from a great deal of results in the early ’60s.

One of the major impetuses? The famous psychological subject H.M. (Read about the controversies surrounding him in a fascinating New York Times magazine piece from a few years ago.) H.M. makes an appearance in the Baddeley paper — his apparent ability to form immediate memories (despite his profound inability to form long-term ones) helped make the case for a distinction between short- and long-term memory systems.

And what was the state of things before the early ’60s? Baddeley credits Francis Galton with an early version (1883) of the notion that there are two separate memory systems, but I can’t figure out where exactly he says this or how he puts it. Wikipedia points us to William James, who distinguishes between primary and secondary memory in the “Memory” chapter of The Principles of Psychology. I’ve only skimmed it, but I think he thinks of primary memory as a lot like the after-image of some visual perception. It just lingers for a second — real memory is secondary memory.

And what about before James and Galton? I’d have to figure that something as basic as the distinction between short- and long-term memory is not an insight unique to psychologists. I know a few other philosophers who make distinctions that seem relevant, but I’m not sure how to trace the lineage of short- and long-term memory before the late 1800s.

As it is, it seems that the early impetus is just the recognition that some stuff we can remember for a little bit, and other stuff we remember for a long time. Maybe this is as far as we get without more careful measurements.

III.

Back to Baddeley, who makes the case that the modal model of the early 1960s is wrong. There are multiple components to short-term, working memory.

What was wrong with the modal model?

  • Even when short-term memory is impaired, long-term memories can be formed just fine. So how could it be that the path to long-term memory passes through a unified short-term store? There has to be a pathway besides the damaged one that memories could pass through, on their way to the land of long-term.
  • This is hilarious, but the BBC tried to use the modal model in their advertising to let people know about newer wavelength bands they’d be switching broadcasts to. They figured the more frequently a phrase is in short-term memory, the more likely it is for a long-term memory to be formed, so they just slipped the phrase into radio broadcasts here and there and…nobody remembered. So much for “saturation advertising.” So how are long-term memories actually formed, if not by phrases just passing through the short-term store?
  • The third thing — that people remember recent stuff better, even in long-term memory — is confusing to me. I don’t get how it’s relevant to this yet.

So what does he suggest, to fix things?

First, that there is a central executive, unrelated to the memory stores, that is in charge of making decisions during activities. I think this is supposed to explain how people who have short-term memory impairment can still function or form long-term memories. The assumption is that as long as the central executive (who decides what the brain should do) decides to actively reinforce the stuff that makes it into short-term memory, long-term memory can happen. And the central executive can also be in charge of doing stuff accurately, even if the stores of memory traces are depleted.

The central executive is a bizarre notion. Metaphorically, it’s a little dude in your head that decides where to put attention, or when to actively reinforce the memory traces in the other stores of working memory — thus he’s also responsible for learning and reasoning. He is, to put it bluntly, your soul, an unanalyzable source of free will. It’s weird.

Second, Baddeley proposes two different passive stores of memories — the phonological and the visual/spatial. Each comes with an active element, something that can reinforce the memory traces.

The metaphors are fascinating here. The phonological loop is, metaphorically, a piece of audio tape that loops around your brain, over and over. When a sound goes into your mind it lands on the memory trace, and then an active recording element has to rewrite the sound on your mental tape for it to be sustained over time. Otherwise, it gets overwritten (or it fades?) as time goes on.

Baddeley’s model for visual/spatial stuff could have used an audio tape metaphor, as far as I can tell, but he chose something that felt more appropriate for visual information — a scratchpad. So an entirely parallel system is posited for visual info, but with a totally different set of metaphors, ones suited to visual stuff.

So there’s a scratchpad, and when visual or spatial stuff comes into your head it is inscribed onto the pad, very lightly. It’s only sustained if the active element reinscribes it on the pad.
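The tape-loop and scratchpad metaphors translate into a toy simulation pretty directly. This is my own sketch with invented numbers, not anything from Baddeley; the point is just the two-part structure, a passive trace that fades and an active element that rewrites it:

```python
# Toy decay-and-rehearsal loop. All numbers are invented; the point is
# only the structure: traces fade passively, the active element rewrites them.
DECAY_PER_TICK = 0.3     # strength a trace loses each time step
REHEARSALS_PER_TICK = 1  # the active element can refresh one item per step


def run(items, ticks):
    strength = {item: 1.0 for item in items}
    for _ in range(ticks):
        # Passive store: every trace fades a little.
        for item in strength:
            strength[item] = max(0.0, strength[item] - DECAY_PER_TICK)
        # Active element: rewrite the weakest traces back to full strength.
        for item in sorted(strength, key=strength.get)[:REHEARSALS_PER_TICK]:
            strength[item] = 1.0
    return {item: round(s, 2) for item, s in strength.items() if s > 0}


# With two items, rehearsal keeps every trace alive indefinitely:
print(run(["cat", "dog"], ticks=10))
# With five, some traces hit zero before their turn to be rewritten
# comes around, which is one way to picture a span limit:
print(run(["cat", "dog", "owl", "emu", "fox"], ticks=10))
```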

IV.

Let’s end here, because I’m still massively confused as to how any of the results that Baddeley says are problems for the modal model are resolved by creating two independent stores of working memory. I’ll need to read something else to make any progress on this, I think.

Though, if you know more about this, please let me know! Open invitation to educate me about what’s going on here.

Update: I’m going to read this next.

“That structure was just guess work, but we seem to have guessed well.”

This interview with Alan Baddeley, one of the researchers responsible for the contemporary notion of “working memory,” is fantastic. I’m not up to speed on all the psych concepts yet, but there are a lot of juicy bits.

First, you can’t get into research like this any longer:

When I graduated I went to the States for a year hoping, when I returned, to do research on partial reinforcement in rats. But when I came back the whole behaviourist enterprise was largely in ruins. The big controversy between Hull and Tolman had apparently been abandoned as a draw and everybody moved on to do something else. On return, I didn’t have a PhD place, and the only job I could get was as a hospital porter and later as a secondary modern school teacher – with no training whatsoever! Then a job cropped up at the Medical Research Council Applied Psychology Unit in Cambridge. They had a project funded by the Post Office on the design of postal codes and so I started doing research on memory.

On his “guess work”:

I think what we did was to move away from the idea of a limited short-term memory that was largely verbal to something that was much broader, and that was essentially concerned with helping us cope with cognitive problems. So we moved from a simple verbal store to a three component store that was run by an attentional executive and that was assisted by a visual spatial storage system and a verbal storage system. That structure was just guess work, but we seem to have guessed well because the three components are still in there 30 odd years later – although now with a fourth component, the ‘episodic buffer’.

What if he had guessed something different, and that different guess had held up decently? Psychology is at a local maximum, but it seems to me that there’s very little reason to think that our current conception of the mind is anywhere near a global maximum, the most natural and useful way of conceiving of things.

On that:

The basic model is not too hard to understand, but potentially it’s expandable. I think that’s why it’s survived.

Here is the context for their description of working memory, as distinct from short-term memory:

I suppose the model came reasonably quickly. Graham and I got a three-year grant to look at the relationship between long- and short-term memory just at a time when people were abandoning the study of short-term memory because the concept was running into problems. One of the problems was that patients who seemed to have a very impaired short-term memory, with a digit span of only one or two, nevertheless could have preserved long-term memory. The problem is that short-term memory was assumed to be a crucial stage in feeding long-term memory, so such patients ought to have been amnesic as well. They were not. Similarly, if short-term memory acted as a working memory, the patients ought to be virtually demented because of problems with complex cognition. In fact they were fine. One of them worked as a secretary, another a taxi driver and one of them ran a shop. They had very specific deficits that were inconsistent with the old idea that short-term memory simply feeds long-term memory. So what we decided to do was to split short-term memory into various components, proposing a verbal component, a visual spatial one, and clearly it needed some sort of attentional controller. We reckoned these three were the minimum needed.

In other words, short-term memory was assumed to be unitary. Baddeley and Hitch figured that it could have three independent components. Since short-term memory was running into trouble, it sounds like they kind of rebranded it as working memory.

Are there other differences between short-term memory and working memory besides the multi-component structure? I think so, but I’m currently fuzzy on this. Another thing that I’ve read suggests that short-term memory was seen as just a stop on the way to long-term memory, essentially part of a memory-forming pathway. Working memory is supposed to play a broader range of roles…I’m confused on this, honestly.

Finally, here’s a solid interaction:

Your model with Graham Hitch has a central executive controlling ‘slave’ systems. People sometimes have a problem with the term ‘slave’?
This is presumably because people don’t like the idea of slavery.