I’m realizing I don’t really understand what working memory is

I’m trying to find some solid ground. Here are the questions I’m trying to get straight on:

  • Suppose someone didn’t believe in the existence of a separate short-term memory system, just as (apparently) people in the ’50s and ’60s were skeptical. How would you convince a skeptic?
  • What is the working memory system, anyway?
  • Say that you were a behaviorist, someone uncomfortable with talk of the cognitive. How would you make sense of the observational findings?
  • More concretely, what were the problems that Baddeley and Hitch were trying to solve when they introduced their working memory model?

I’m looking for foundations. Prompted by this, I went back to this, which brought me here, to Warrington and Shallice’s 1969 case report on a patient with severely impaired short-term memory.

The case is fascinating:

“K. F., a man aged 28, had a left parieto-occipital (“head” – MP) fracture in a motor-bicycle accident eleven years before, when a left parietal subdural (“brain” – MP) haematoma was evacuated. He was unconscious for ten weeks.”

He had lasting brain damage, especially when it came to language:

“His relatively poor language functions were reflected in his verbal I.Q. His ability to express himself was halting, and some word-finding difficulty and circumlocutions were noted.”

His short-term verbal memory in particular was damaged:

“The most striking feature of his performance was his almost total inability to repeat verbal stimuli. His digit span was two, and on repeated attempts at repeating two digits his performance would deteriorate, so that on some trials his digit span was one, or even none. His repetition difficulty was not restricted to digits; he had a similar difficulty in repeating letters, disconnected words and sentences. Single verbal items would be repeated correctly with the exception of polysyllabic words which were on occasion mispronounced.”

The thing that made him especially interesting was that, for a guy with significant short-term memory damage, there were a lot of things that he could do:

“Memory for day-to-day happenings was good and he had an adequate knowledge of recent and past events. Immediate memory for the Binet figures was accurate.”

Here is the real surprise, for people in the 1960s: his long-term learning was, actually, not bad. Consider the ten-word learning task, at which he performed admirably:

“A list of 10 high-frequency words was presented auditorily at the standard rate. Subjects were required to recall as many words as possible from the list immediately after presentation. This procedure was repeated until all the 10 words were recalled (not necessarily in the correct order). K. F. needed 7 trials. Twenty normal (“didn’t fall off a motorcycle” – MP) controls took an average of 9 trials, 4 of the subjects failing on the task after 20 trials. After an interval of two months he was able to recall 7 of these 10 words without relearning.”

On two other long-term memory tests, K. F. seemed to be performing normally as well.

And what’s the significance of all this?

This was written when the existence of a short-term memory system was not universally accepted. (Is it universally accepted now? It feels like it but I don’t actually know.) And it’s useful to me for identifying what the core, foundational findings are that we have to grapple with in memory. There really aren’t very many, it seems.

At the core of things is the “digit span” task. This is the finding that there is some sort of limit on how many random things we can remember. This itself was the core finding that was supposed to support short-term memory. (“All subjects have a limited capacity to recall a series of digits or letters, and this limitation is regarded as a characteristic of the ‘short-term’ memory store.”)

The strongest evidence that this digit-span task was measuring a totally different system of memory was the evidence of amnesiacs, whose long-term memory is severely impaired:

“The question as to whether the organization of memory is a unitary process or a two-stage process has received much attention in recent years. The strongest evidence that there are separate short- and long-term memory systems is provided by the specific and isolated impairment of long-term memory in amnesic subjects.”

If you can have short-term memory but no long-term memories, and you can measure this with all sorts of repetition and digit span tasks, then there needs to be some distinction between two memory systems. Right?

Here, though, they found the opposite. A patient could have pretty normal long-term memory performance even though their short-term memory system was severely impaired.

In a different paper (1970) they lay out the implications of this for then-current controversies about short- and long-term memory:

“Most important, the results present difficulties for those theories in which STM and LTM are thought to use the same physical structures in different ways. (Because, I suppose, they’ve shown STM and LTM to be doubly dissociated — each can be impaired while the other is spared. – MP) They also indicate that the frequently used flow-diagrams in which information must enter STM before reaching LTM may be inappropriate. On this model, if the STM system were greatly impaired, one would expect impairment on LTM tasks, since the input to the LTM store would be reduced.”

What do they suggest?

“In light of these findings, it is suggested that a model in which the inputs to STM and LTM are in parallel rather than in series should be considered.”

One way of thinking of this could be that their patient, K. F., had damaged his ability to encode information about verbal sounds, but not his ability to encode the meanings behind those words, and that long-term memory is a system for storing meanings while short-term memory is just a system for storing sounds.
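To make the parallel-inputs idea concrete for myself, here’s a toy sketch in Python. This is my own illustration, not Warrington and Shallice’s actual model — the span number and the one-shot semantic encoding are made-up simplifications:

```python
def present_words(words, phonological_span=7, semantic_route_intact=True):
    """Toy parallel architecture: sound and meaning are encoded separately."""
    phonological_store = []  # short-term store for sounds (limited span)
    semantic_store = set()   # long-term store for meanings

    for word in words:
        phonological_store.append(word)
        # the sound store only ever holds the last few items
        phonological_store = phonological_store[-phonological_span:]
        if semantic_route_intact:
            # parallel route: meaning reaches LTM without passing through
            # the sound store (real learning would take repeated trials)
            semantic_store.add(word)

    return phonological_store, semantic_store

words = ["cabbage", "anchor", "puddle", "violin", "marble"]

print(present_words(words))                       # unimpaired subject
print(present_words(words, phonological_span=2))  # a K. F.-like case
```

The point of the sketch is just that damaging the sound store doesn’t touch the meaning route at all — which is what a parallel architecture predicts, and what a serial one can’t accommodate.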

This is all fifty years ago, of course. But I think it’s helpful for me to understand what it took to get from a world where this seemed as plausible as the alternatives, to a world where scientific communities seem to universally accept that alternative.

That there’s a distinction between STM and LTM is beyond question. This is something that has been confirmed a million times over. Amnesiacs and people like K. F. speak to the distinctness of short- and long-term memory systems. We also experience this a million times daily.

What is up for debate in the early 1970s is the relationship between these two systems. Is it one big system (“unitary”)? Or two stores arranged in series, with STM feeding directly into LTM? This study challenged both pictures, suggesting that the two systems take their inputs in parallel and can be damaged independently of each other.

Current models of memory suggest that they are connected, though in a more complex way than was understood before Baddeley and Hitch came along. (At least, I think that’s what’s going on…)

Baddeley, A. (1983). Working Memory. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 302(1110), 311-324.

I. 

This paper seems to be one of, like, a dozen mostly equivalent review pieces on working memory that Baddeley has written over the years. This one strikes a nice balance between concision and context, so it’s the one I chose to read with care.

Baddeley’s contribution is the idea that working memory consists of multiple independent components. When he was writing this paper, there were only three parts that he’d identified in short-term, working memory: a place for remembering sounds, a place for remembering space and visuals, and a “central executive” who is in charge of the whole system. This is in contrast to earlier models that didn’t feel the need to distinguish between different components of working memory.

There are two lenses through which I’m reading this. First, how did we get here? Second, what metaphors do we use to describe memory?

Here is a handy table from a psych textbook, by the way:

[screenshot of the textbook table omitted]

II. 

Where did Baddeley come from?

Baddeley positions his work as a reaction to the “modal model” of memory, popular in the 1960s, one that is represented by (I’ve learned) Atkinson and Shiffrin.

Atkinson and Shiffrin were people; this is all I know about them.

The modal model, according to Baddeley, looked like this:

  • Short-term memory is one unit — no components. This is the working memory, the memory whose main function is to facilitate learning, reasoning, decision-making, etc.
  • Long-term memory is where you keep memories for the long term.
  • Here’s how long-term memories are formed: stuff automatically goes from short-term memory to long-term memory, but the process takes some time. (That’s why you don’t learn everything.) If you want to learn something, you have to make sure it’s rattling around short-term memory for long enough. (See the toy sketch after this list.)
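Translated into a toy simulation — entirely my own gloss, with invented capacity and transfer numbers, not Atkinson and Shiffrin’s actual mathematical model — the modal model looks something like this:

```python
import random

STM_CAPACITY = 7             # assumed size of the short-term buffer
P_TRANSFER_PER_TICK = 0.05   # assumed chance of STM -> LTM copying per tick

def present(items, ticks_per_item=3):
    stm, ltm = [], set()
    for item in items:
        stm.append(item)
        if len(stm) > STM_CAPACITY:
            stm.pop(0)                 # the oldest item gets displaced
        for _ in range(ticks_per_item):
            for held in stm:           # time in the buffer is what matters:
                if random.random() < P_TRANSFER_PER_TICK:
                    ltm.add(held)      # each tick is a chance to transfer
    return stm, ltm

stm, ltm = present(list("ABCDEFGHIJKL"))
print("still in STM:", stm)
print("reached LTM: ", sorted(ltm))
```

The longer an item rattles around the buffer, the better its odds of making it into long-term memory. That single assumption is the one the K. F. case (and, below, the BBC story) makes trouble for.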

And what did things look like in the 1960s, when the modal model was prominent? I don’t have a clear picture of this. Wikipedia sends me to a piece from 1963 that talks of an explosion of results relating to short-term memory in the preceding years:

In 1958, and increasingly thereafter, the principal journals of human learning and performance have been flooded with experimental investigations of human short-term memory. This work has been characterized by strong theoretical interest, and sometimes strong statements, about the nature of memory, the characteristics of the memory trace, and the relations between short-term memory and the memory that results from multiple repetitions. The contrast with the preceding thirty years is striking. During those years most research on short-term memory was concerned with the memory span as a capacity variable, and no more…I venture to say that Broadbent’s Perception and Communication (1958), with its emphasis on short-term memory as a major factor in human information-processing performance, played a key role in this development.

So the picture you get is that there was controversy about the distinction between short-term and long-term memory, springing from a great deal of results in the early ’60s.

One of the major impetuses? The famous psychological subject H.M. (Read about the controversies surrounding him in a fascinating New York Times Magazine piece from a few years ago.) H.M. makes an appearance in the Baddeley paper — his apparent ability to form immediate memories (despite his profound inability to form long-term ones) helped make the case for a distinction between short- and long-term memory systems.

And what was the state of things before the early ’60s? Baddeley credits Francis Galton with an early version (1883) of the notion that there are two separate memory systems, but I can’t figure out where exactly he says this or how he puts it. Wikipedia points us to William James, who distinguishes between primary and secondary memory in the “Memory” chapter of The Principles of Psychology. I’ve only skimmed it, but I think he thinks of primary memory as a lot like the after-image of some visual perception. It just lingers for a second — real memory is secondary memory.

And what about before James and Galton? I’d have to figure that something as basic as the distinction between short- and long-term memory is not an insight unique to psychologists. I know a few other philosophers who make distinctions that seem relevant, but I’m not sure how to trace the lineage of short- and long-term memory before the late 1800s.

As it is, it seems that the early impetus is just the recognition that some stuff we can remember for a little bit, and other stuff we remember for a long time. Maybe this is as far as we get without more careful measurements.

III.

Back to Baddeley, who makes the case that the modal model of the early 1960s is wrong. There are multiple components to short-term, working memory.

What was wrong with the modal model?

  • Even when short-term memory is impaired, long-term memories can be formed just fine. So how could it be that the path to long-term passes through a unified short-term store? There has to be a pathway besides the damaged one that memories could pass through, on their way to the land of long-term.
  • This is hilarious, but the BBC tried to use the modal model in their advertising to let people know about newer wavelength bands they’d be switching broadcasts to. They figured the more frequently a phrase is in short-term memory, the more likely it is for a long-term memory to be formed, so they just slipped the phrase into radio broadcasts here and there and…nobody remembered. So much for “saturation advertising.” So how are long-term memories actually formed, if not by sheer time spent in the short-term store?
  • The third thing — that people remember recent stuff better, even in long-term memory — is confusing to me. I don’t get how it’s relevant to this yet.

So what does he suggest, to fix things?

First, that there is a central executive, separate from the memory stores, that is in charge of making decisions during activities. I think this is supposed to explain how people who have short-term memory impairment can still function or form long-term memories. The assumption is that as long as the central executive (who decides what the brain should do) decides to actively reinforce the stuff that makes it into short-term memory, long-term memory can happen. And the central executive can also be in charge of doing stuff accurately, even if the stores of memory traces are depleted.

The central executive is a bizarre notion. Metaphorically, it’s a little dude in your head that decides where to put attention, or when to actively reinforce the memory traces in the other stores of working memory — thus he’s also responsible for learning and reasoning. He is, to put it bluntly, your soul, an unanalyzable source of free will. It’s weird.

Second, Baddeley proposes two different passive stores of memories — the phonological and the visual/spatial. Each comes with an active element, something that can reinforce the memory traces.

The metaphors are fascinating here. The phonological loop is, metaphorically, a piece of audio tape that loops around your brain, over and over. When a sound goes into your mind it lands on the memory trace, and then an active recording element has to rewrite the sound on your mental tape for it to be sustained over time. Otherwise, it gets overwritten (or it fades?) as time goes on.
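Here’s my attempt at turning the tape-loop metaphor into a toy simulation. Again, this is my own illustration, not Baddeley’s formalism — the decay rate and recovery threshold are invented:

```python
DECAY = 0.5       # assumed: a trace loses half its strength per time step
THRESHOLD = 0.1   # assumed: below this, an item can't be recovered

def run_loop(items, rehearse, steps=8):
    strength = {item: 1.0 for item in items}
    cursor = 0
    for _ in range(steps):
        for item in strength:
            strength[item] *= DECAY              # passive fading
        if rehearse:
            refreshed = items[cursor % len(items)]
            strength[refreshed] = 1.0            # re-record onto the tape
            cursor += 1
    return [item for item in items if strength[item] >= THRESHOLD]

digits = list("41592")
print("with rehearsal:   ", run_loop(digits, rehearse=True))
print("without rehearsal:", run_loop(digits, rehearse=False))
```

Without rehearsal, everything fades within a few steps. With rehearsal, most of the digits survive — but because the loop re-records only one item at a time, it can’t quite keep all five alive, and one is lost. A span limit falls out of decay plus serial rehearsal, which I find kind of satisfying.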

Baddeley’s model for visual/spatial stuff could have used an audio-tape metaphor too, as far as I can tell, but he chose something that feels more appropriate for visual information — a scratchpad. So an entirely parallel system is posited for visual info, but with a totally different set of metaphors, one appropriate for visual stuff.

So there’s a scratchpad, and when visual or spatial stuff comes into your head it is inscribed onto the pad, very lightly. It’s only sustained if the active element reinscribes it on the pad.

IV.

Let’s end here, because I’m still massively confused as to how any of the results that Baddeley says are problems for the modal model are resolved by positing two independent stores of working memory. I’ll need to read something else to make any progress on this, I think.

Though, if you know more about this, please let me know! Open invitation to educate me about what’s going on here.

Update: I’m going to read this next.

“I was sitting there and my skin was burning and I said, this might make a great TV show one day.”

I watched the new Chris Rock special on Netflix. Watching all these classic comics with new specials feels a lot like that Modern Seinfeld feed — Chris Rock on Trump! — but it had some very funny moments. The personal stuff (about his divorce and failings) was interesting too, but I sort of felt it would have been more sincere to just do jokes about it.

Anyway, one thing led to another after I finished the special, and I present to you the clip above.

I also present to you the very, very funny and awkward clip below:


“That structure was just guess work, but we seem to have guessed well.”

This interview with Alan Baddeley, one of the researchers responsible for the contemporary notion of “working memory,” is fantastic. I’m not up to speed on all the psych concepts yet, but there are a lot of juicy bits.

First, you can’t get into research like this any longer:

When I graduated I went to the States for a year hoping, when I returned, to do research on partial reinforcement in rats. But when I came back the whole behaviourist enterprise was largely in ruins. The big controversy between Hull and Tolman had apparently been abandoned as a draw and everybody moved on to do something else. On return, I didn’t have a PhD place, and the only job I could get was as a hospital porter and later as a secondary modern school teacher – with no training whatsoever! Then a job cropped up at the Medical Research Council Applied Psychology Unit in Cambridge. They had a project funded by the Post Office on the design of postal codes and so I started doing research on memory.

On his “guess work”:

I think what we did was to move away from the idea of a limited short-term memory that was largely verbal to something that was much broader, and that was essentially concerned with helping us cope with cognitive problems. So we moved from a simple verbal store to a three component store that was run by an attentional executive and that was assisted by a visual spatial storage system and a verbal storage system. That structure was just guess work, but we seem to have guessed well because the three components are still in there 30 odd years later – although now with a fourth component, the ‘episodic buffer’.

What if he had guessed something different, and that different guess had held up decently? Psychology is at a local maximum, but it seems to me that there’s very little reason to think that our current conception of the mind is anywhere near a global maximum, the most natural and useful way of conceiving of things.

On that:

The basic model is not too hard to understand, but potentially it’s expandable. I think that’s why it’s survived.

Here is the context for their description of working memory, as distinct from short-term memory:

I suppose the model came reasonably quickly. Graham and I got a three-year grant to look at the relationship between long- and short-term memory just at a time when people were abandoning the study of short-term memory because the concept was running into problems. One of the problems was that patients who seemed to have a very impaired short-term memory, with a digit span of only one or two, nevertheless could have preserved long-term memory. The problem is that short-term memory was assumed to be a crucial stage in feeding long-term memory, so such patients ought to have been amnesic as well. They were not. Similarly, if short-term memory acted as a working memory, the patients ought to be virtually demented because of problems with complex cognition. In fact they were fine. One of them worked as a secretary, another a taxi driver and one of them ran a shop. They had very specific deficits that were inconsistent with the old idea that short-term memory simply feeds long-term memory. So what we decided to do was to split short-term memory into various components, proposing a verbal component, a visual spatial one, and clearly it needed some sort of attentional controller. We reckoned these three were the minimum needed.

In other words, short-term memory was assumed to be unitary. Baddeley and Hitch figured that it could have three independent components. Since short-term memory was running into trouble, it sounds like they kind of rebranded it as working memory.

Are there other differences between short-term memory and working memory besides the multi-component structure? I think so, but I’m currently fuzzy on this. Another thing that I’ve read suggests that short-term memory was seen as just one stop on the way to long-term memory — essentially part of a memory-forming pipeline. Working memory is supposed to play a broader range of roles…I’m confused on this, honestly.

Finally, here’s a solid interaction:

Your model with Graham Hitch has a central executive controlling ‘slave’ systems. People sometimes have a problem with the term ‘slave’?
This is presumably because people don’t like the idea of slavery.


First instance of the term “Working Memory”


“When we have decided to execute some particular Plan, it is probably put into some special state or place where it can be remembered while it is being executed. Particularly if it is a transient, temporary kind of Plan that will be used today and never again, we need some special place to store it. The special place may be on a sheet of paper. Or (who knows?) it may be somewhere in the frontal lobe of the brain. Without committing ourselves to any specific machinery, therefore, we should like to speak of the memory we use for the execution of our Plans as a kind of quick-access, ‘working memory.’”

From Miller, Galanter, and Pribram’s Plans and the Structure of Behavior (1960)

Some of the things I stand for in education, these days

When we critique an idea, we should critique the best version of it.

When we critique a pedagogical idea, we should critique the idea itself, not its misinterpretations. (Unless we’re saying the idea is easy to misinterpret.)

Most of the time when a pedagogical idea is critiqued, it’s critiqued for bad choices the teacher might make that have nothing to do with the idea itself. When we imagine an idea we don’t like, we imagine a classroom that we don’t like. That’s not fair, though.

Every pedagogical idea gets misinterpreted.

Nobody knows how to make ideas about good teaching scale, so we might as well talk about what good teaching actually looks like, without worrying about how the truth will or will not be misinterpreted at large.

You and I might teach in very similar ways but have wildly different ways to describe how we teach. The fiercest debates in education are also the vaguest. When you get down to classroom details, or even the tiniest bit of additional specificity, a lot of disagreement vanishes. It’s not that these debates don’t matter, it’s that they are highly theoretical.

OK, maybe these debates don’t really matter.

Teachers have access to classroom details and specificity. One way that teachers can contribute to the knowledge base of education is to resist the urge to move to generalization, to spend some more time in the greater specificity that classroom life encourages. This is what teachers can uniquely contribute.

Non-teachers will often tend to think about scale. Teachers usually don’t, and this is something else that we can contribute.

Some people will tell you that being a jerk is important, as long as you’re being a jerk to the right people. Those people are jerks. Stupid jerks.

I think the core of what I do is trying to have a detailed understanding of how my students think about the stuff I’m teaching, and also of how they could think about the stuff I’m teaching. And then I’m trying to create as many opportunities as I can for them to think about that stuff in the new way.

I care about my students’ feelings, a lot.

The way I see it, teaching is best understood through its dilemmas and tensions. This is not a new idea, but it’s one that hasn’t been sufficiently explored. And I’m very suspicious of people who claim to have resolved one of these tensions or dilemmas.

We shouldn’t create ad hoc rules for political discourse that only apply to people we disagree with.

Debate is essentially performative. I mean, it doesn’t need to be, but it usually is.

Trying to understand someone and how they think is an almost complete improvement over debate in every way.

Lots of research is interesting. It’s especially fun to try to figure out how two different perspectives on teaching can fit together.

Teaching is a job, but it can be a great job if you like ideas and little humans.

Writing and reading should be fun and interesting. When writing instead aims to be useful, it’s usually not useful either. And 99% of writing about teaching is supposed to be useful.

At a certain point you have to decide if you’re trying, primarily, to change the world or to understand and describe it. It’s great that some people are trying to change the world, but I think inevitably those people end up having to be less-than-honest at times, for the sake of their projects or reputation. Personally, I prefer to understand and describe it when I write or think. (I’m not opposed to helping people, though!)

Blogging is dead, but it’s all we got for now, so let’s keep at it.

I like to think of my teaching life as a bubble, and I like to think of all other aspects of my life as little bubbles too. And a question I ask myself frequently is, can I grow these bubbles? Wouldn’t it be something if all the other bubbles could float and sort of merge into each other, turn into just one big bubble that encompasses everything? I feel like that would be nice, someday.

A New Study about Gender and Pay Gaps

I learned about this via Marginal Revolution and Freakonomics. Briefly, Uber keeps an enormous amount of data on its drivers, allowing economists to study the different ways that women and men are paid. The Freakonomics folks interviewed an Uber economist:

LIST: So we have mounds and mounds of data. We have millions of drivers. We have millions of observations, and 25 million driver-weeks across 196 cities. So just the depth of the data and the understanding of both the compensation function and the production function of drivers gives us a chance to — once we observe if there is indeed a gap — gives us a chance to unpack what are the features that can explain that gap.

They did find a pay gap that broke down by gender — 7%:

LIST: We found something very surprising. What you find is that men make about 7 percent more per hour on average …

DIAMOND: … which is pretty substantial.

LIST: For doing the exact same job in a setting where work assignments are made by a gender-blind algorithm and pay structure’s tied directly to output and not negotiated.

Was it because of discrimination on an individual level? They don’t think so:

DUBNER: Right. So let me just make sure I’m clear. You’re saying there’s no discrimination on the Uber side, on the supply side, because the algorithm is gender-blind and the price is the price. And you’re saying there’s no discrimination on the passenger side. So does that mean that discrimination accounts for zero percent of whatever pay gap you find or don’t find between male and female Uber drivers?

LIST: That’s correct.

The interview is long and full of juicy details and tough questions. I haven’t read the whole thing carefully yet, but it’s fascinating all the way through:

DUBNER: What is the overall driver attrition rate? I don’t know whether it’s measured in six months or a year, or whatever.

DIAMOND: Yes, six months is what we’ve been looking at.

LIST: More than 60 percent of those who start driving are no longer active on the platform six months later.

DIAMOND: So the six-month attrition rate for the whole U.S. for men is about 63 percent, and for women it’s about 76 percent.

DUBNER: Wow. So that would connote to me, an amateur at least, that maybe this gender pay gap among Uber drivers is reflected in the fact that women leave it so much more. Maybe it’s just a job that on average, women really don’t like. Is that measurable?

There’s a whole discussion about that, and a lot of other things besides.

So what does explain the 7% pay gap, in the end? They have theories, foremost among them that men drive faster than women:

LIST: That’s right. So after we account for experience now we’re left scratching our heads. So, we’re thinking, “Well, we’ve tried discrimination. We’ve done where, when. We’ve done experience. What possibly could it be?” What we notice in the data is that men are actually completing more trips per hour than women. So this is sort of a eureka moment.

DUBNER: They’re driving faster, aren’t they?

HALL: Yeah. So the third factor, which explains the remaining 50 percent of the gap, is speed.

It’s not hard to speculate about how something like how quickly men vs. women tend to drive might trace back to systemic cultural discrimination. (Are men more confident drivers? Are they less fearful of the law? etc.)

Still, what these economists are finding is that (a) the pay gap is persistent, even in the face of an equalizing pay structure, and (b) the possible factors explaining the gap will not be simple to address. For example, one source of the gap could be that men tend to work more hours, gaining more experience that pays dividends later — so part of the Uber gap is that men are getting paid for experience, which isn’t something you can easily address:

DIAMOND: I think this is showing that the gender pay gap is not likely to go away completely anytime soon. Unless somehow, things in our broader society really change, about how men and women are making choices about their broader lives, than just the labor market. But it’s not also a worry that the labor market is not functioning correctly. It makes sense to compensate people who are doing more productive work. It makes sense to pay people more if they work more hours. I mean, I don’t think those are things that we would ever consider thinking should be changed because that they’re a problem. Those are just real reasons that productivity can differ between men and women. And we should compensate people based on productivity.

What would the implications of this finding be? It’s not that individual discrimination isn’t responsible for the pay gap in general — this would likely depend on the field and the job, right? — but that there are deeper factors at play that might explain a pay gap between men and women.

This shouldn’t really surprise anyone working in education. Men and women teachers are paid according to the same standardized salary schedule in public schools. If you pooled all the male and female teachers, though, you’d see that there is a gender pay gap, because of disproportionate numbers of male/female teachers in elementary vs. middle vs. high school. Men make more not because administrators choose to pay them more — largely, it’s because men choose to teach older kids more often than women do.
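A quick toy calculation shows how a pooled gap can appear even when the salary schedule is identical for everyone. All the numbers here are invented for illustration:

```python
# assumed average salary by level -- identical for men and women
salary = {"elementary": 55_000, "middle": 60_000, "high": 65_000}

# assumed headcounts: women concentrated in elementary, men in high school
women = {"elementary": 700, "middle": 200, "high": 100}
men = {"elementary": 200, "middle": 300, "high": 500}

def pooled_average(counts):
    total_pay = sum(salary[level] * n for level, n in counts.items())
    return total_pay / sum(counts.values())

w, m = pooled_average(women), pooled_average(men)
print(f"women: ${w:,.0f}   men: ${m:,.0f}   gap: {(m - w) / m:.1%}")
# women: $57,000   men: $61,500   gap: 7.3%
```

A pooled gap of about 7% shows up with zero discrimination in pay at any level — it comes entirely from who teaches where.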

There’s no easy way to address this discrimination, if it’s even quite right to call this discrimination. Certainly it’s possible (likely? I don’t really know) that cultural factors partly explain the choices of men and women. At a certain level, though, this is irrelevant. Do we want to mess with the choices of men and women about where and how to work?

(This quickly gets tangled in questions of diversity and representation. Is it a policy priority to ensure that men and women are represented in the teaching force at numbers that are proportional to the students they instruct?)

There might be structural pay gaps that outstrip what can be explained by discrimination. Whether biological or culturally constructed, there might be persistent and not-bad differences between men and women. As I continue to think about this, it seems to make sense to me that we’d want to continue to battle discrimination while also thinking of ways to address inequities from other angles, besides battling pay gaps. Past a certain point, of course.

One last juicy morsel, at the very end of the interview. As Uber introduces tipping it seems that drivers make LESS and that the pay gap narrows somewhat because women are tipped more:

LIST: Yeah, I think when you look at the tipping data in general, you do find a tilt in favor of women compared to men in general. We’ll have a tipping paper for you in a few months. Because the economics of tipping is sort of wide open, and we’ll have a paper just like this one called something like “A Nationwide Experiment on Tipping.”

DUBNER: Right.

LIST: And we’ll do the tipping roll out and show you how earnings change with the introduction of tipping. And the earnings actually go down a little bit. They don’t go up after you introduce tipping.

DUBNER: Now how can that be?

LIST: What happens is the supply curve shifts out enough to compensate the higher tips. And when the supply curve shifts out, the organic wages go down. And what you have is drivers are underutilized. So what I mean by that is typically they’ll sit in their car empty 35 percent of the time. With tipping, maybe it’ll go up to 38 percent of the time.

DUBNER: In other words, the wage declines because more drivers think they’re going to make more money since tips are now included, but that increases the supply of drivers, which means there’s less demand to go around.

LIST: Exactly. That’s perfect.
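To make List’s mechanism concrete, here’s the back-of-envelope version, using the idle-time numbers from the interview (35% → 38%); the fare rate is my own made-up assumption:

```python
fare_per_occupied_hour = 30.00  # assumed earnings while carrying a passenger

hourly_before = fare_per_occupied_hour * (1 - 0.35)  # idle 35% of the time
hourly_after = fare_per_occupied_hour * (1 - 0.38)   # idle 38% after tipping

print(f"base earnings before tipping: ${hourly_before:.2f}/hr")  # $19.50/hr
print(f"base earnings after tipping:  ${hourly_after:.2f}/hr")   # $18.60/hr
print(f"tips must exceed ${hourly_before - hourly_after:.2f}/hr "
      "for drivers to come out ahead")
```

More drivers chasing the same demand means each driver sits empty more of the time, so base hourly earnings fall; tips have to cover that gap before anyone is actually better off.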