Some books I loved in 2020

It feels like giving up to write a post like this with a week left to go in the calendar year. But what’s so bad about giving up? Anyway, I’m in the mood for comfort reading at the moment. That means I’m flipping between Kate Atkinson’s Life After Life, Rob Sheffield’s Dreaming the Beatles and Susanna Clarke’s The Ladies of Grace Adieu. No reason to put those on this list, though.

I’m typically a big subway reader which … well, you know. Even this fall, when I was back teaching in-person, I became an e-bike commuter unless the weather was truly awful. Now I’m on paternity leave for a few weeks, which demands its own sort of reading habits — morsels rather than feasts, is what works for me while caring for a baby.

OK, whatever, here are some books I loved reading in the order I read them.

The Power of the Dog, Don Winslow


I think this review nails the situation with Winslow’s flawed but wonderful writing about America’s War on Drugs. “The book is trope-heavy, stereotype-heavy, occasionally (okay, often) one-dimensional,” which is true for all of Winslow’s writing. But why is it great? “Because it is a huge, meticulously researched book that comes at the end of a series 20 years in the making. Because it is a book that eschews flowery language for precision and quick action. Because the internal monologues of quiet men are often the ways we are given to understand their internal histories.”

As an aside, when my older son was born six years ago, I was reading The Jordan Rules by Sam Smith in the waiting room. This summer I was rereading Neil Gaiman’s Sandman when we had to rush into the hospital for my son’s birth. And three and a half years ago when my daughter was born I was reading Winslow’s The Cartel, a sequel to Power of the Dog, in the hospital.

Silas Marner, George Eliot


I love moralizing fiction. What makes people bad? What makes them good? How do they get better (or worse)? I find it so easy to see myself as Silas Marner, emotionally wrapped up in myself and shut off from the concerns of others. In yeshiva we studied musar to reflect on our moral situation; Silas Marner is water drawn from the same well with a different vessel. Thanks to my wife for promising that I’d love this one.

Incidentally, this was the first book I was able to read to completion after lockdown back in March. Man, what a shitty year this has been in so many ways.

Ninth House, Leigh Bardugo


Do you hate Yale? I don’t, but maybe Leigh Bardugo does. Or maybe it’s just the moneyed elite that she hates, seeing the crimes she imagines them committing.

There are a lot of books that are “our world but magic” but this one is particularly audacious in the world it imagines. A friend who knows my enthusiasm for Stephen King lent this one to me. My advice? Make sure your friends know that you like Stephen King.

How To Tame a Fox (and Build a Dog), Lee Alan Dugatkin and Lyudmila Trut


I was so annoying while reading this book. I read long passages of it aloud to my wife. I couldn’t shut up about it to anyone I saw outside. “How are you guys holding up during these challenging times?” my friends would ask, and I’d reply with a long rant about all the physical traits that domesticated animals show.

OK, how do I get you to read this book? Put it like this: what does it mean to be domesticated? There is a real biological sense in which people are simply domesticated apes. But how does domestication happen? Who kicks off the domestication process?

This book tells the story of a group of Russian scientists who set out to study this question. Their experiment: to see if the famously undomesticated fox can be domesticated simply through selection for tameness towards humans. The experiment had to be carried out in secret thanks to the Soviet Union’s adoption of Lysenkoism, which held natural selection to be a Western lie.

Biology is so cool. Science is wonderful. Keep it up, humans.

A Theory of Jerks and Other Philosophical Misadventures, Eric Schwitzgebel


Schwitzgebel has long been one of my favorite philosophy bloggers. His vision of philosophy is much broader than the usual picture of philosophy as some grand debate between various -isms. He asks the best questions. Should we be surprised if ethicists are unethical? Is it OK to aim for moral mediocrity? What does it take for a machine to trigger our ethical impulses?

Each of the chapters is about the length of a blog post, which feels like the perfect amount to bite off for each of these philosophical reflections. Just enough to read in one sitting and keep in the back of your mind for the rest of the day.

Let’s Talk About Love: A Journey to the End of Taste, Carl Wilson

The writing here is at times dense but it’s also lively and accessible. The question here is easy to state but hard to answer: if taste is subjective, what does it mean to have good taste?

More to the point, if Celine Dion sucks then why do so many people love her music?

I loved this book, but it’s not for everyone. You need to have a taste for theory, and Wilson’s arguments are subtle rather than straightforward. I think he lands on the idea that artistic quality is always relative to a community. Within a community, there is something more or less objective that can be called “taste,” but that’s because the community has its own reference frame. Switch to another community and the rules of taste change. When a gap in taste opens up between the critic and some other group, that’s an opportunity to learn what makes that community tick, why they love the art that they love. Really, that’s what a critic should do.

This book is funny, big-hearted and big-brained. I should probably reread it in 2021.

A Series of Recently Published Famous Books that Everybody Likes that I Also Liked

Piranesi, Susanna Clarke

Station Eleven, Emily St. John Mandel

Circe, Madeline Miller

The Three-Body Problem, Cixin Liu

Exhalation, Ted Chiang


Some books that I didn’t love quite as much as the others, but still loved: The Great Secret, Jack Handey’s The Stench of Honolulu, George Bernard Shaw’s Man and Superman, Kitchen Confidential, Small Fry, Red White and Royal Blue, Very Short Intro to the American Revolution.

It’s impossible for me to review this objectively, but my friend Dr. Chavi Karkowsky wrote a truly remarkable book about her life as a doctor helping women with their high-risk pregnancies and deliveries.

High Risk, Chavi Eve Karkowsky MD

Even though it’s not really possible to be objective … the book is objectively great! And it’s not quite as terror-inducing as you might think. Chavi focuses on the systems — systems that save lives but also take on a life of their own and can at times feel oppressive to doctors and patients both.

And here was my favorite more technical read of the year, one that I’m still in the middle of:

Ecology, Charles J. Krebs


Like all textbooks this one costs 7 million dollars, but you can get the international edition used for far cheaper. (And if you are a determined googler … well, the internet contains all sorts of things, let’s leave it at that.)

This text has been a bit of a revelation to me. From start to finish, it’s mathematical. Are species more abundant when the species covers more territory? What happens when species compete with each other? How do populations grow?

The math is just accessible enough for me that I don’t need to break my head over it and can just enjoy the science. I’m looking forward to reading more of it and perhaps thinking about how I can bring some of it back for students.

Helping Students Make Good Mathematical Arguments

How do you teach someone how to make a good mathematical argument? Here’s my theory:

  1. Make sure they understand the stuff being argued about
  2. Show them what a good argument looks like
  3. Help them make good arguments on their own

I think I did a better job of this in this year’s geometry class than I have in the past, so I’ll share what I did.

First, I made sure that students understood what was being argued about. This is a plea for teachers not to teach mathematical concepts or skills through proof — a problem I see happening more at higher levels of math instruction. There’s simply too much that is new for students in a proof. It’s not the time to also make sure students understand everything else.

In my class this year this happened both at the level of the course and that of the lesson. I started the unit with lots of time working with the sorts of diagrams that would feature later in proofs. But I also did this within a lesson. If I wanted to show students an example of a proof, I would make sure to launch the lesson with an activity that just involved analyzing that diagram.

Here is a diagram, I’d say. I just want you to tell me as many pairs of congruent sides or angles in it as you can, and tell me how you know:

Then, after making a nice list of all the congruencies in the diagram, I’d preview the structure of the proof (thanks, Catrambone):

I would then model writing a proof along these lines using the congruencies the class found at the start of the lesson.

There’s a bit more that needs to happen at this stage to make sure students have a chance to really think about the example. There are a few things that I’ve tried: asking students to turn and explain each step of the proof; removing a step from the proof and asking students to explain what the missing step needs to be; asking everyone to take notes from memory with some or all of my proof erased. I’m still working on getting this stage of my little routine down.

Following this, I shared a new diagram and asked students to once again find congruencies. When I did this in class I used this new diagram and this is what students observed:

They came up with those questions as well. I asked the class to prove that the inner triangle was equilateral, using the structure that they had studied in that first proof.

So, the first stage is making sure students know what the proof is talking about. The above is how I’ve done a better job of that this year.

The last step is trickier than it may seem. Yes, it’s “obvious” that helping students make good arguments is important for learning how to do exactly that. But the emphasis I’d put is on help and (especially) good. Because in my experience, there’s a strong gravity that pulls students towards arguments that are intuitively good but not up to the rigorous standards of mathematical proof.

The issue is basically this: everyone knows a lot about shapes. In fact this is not an issue — this is fantastic. I love hearing people’s ideas about shapes. But if you don’t provide the right kind of help, you’ll confuse and anger your students. The reason is that your job, as a mathematics teacher, is to teach a very particular form of mathematical argument. And students will have many wonderful ideas about shapes all their own. In past years I fell into a tricky spot where my students were baffled as to why their own extremely reasonable arguments for why a pair of triangles must be the same size and shape weren’t acceptable as proofs.

In case you haven’t heard these things live and in person, the human eye is attracted to symmetry and change. If you ask a group of people why a pair of lengths are the same, they will be drawn to arguments that incorporate those sorts of features. “Those lengths have to be the same because the diagram is symmetric. If it weren’t symmetric then when you moved that angle over there, that other angle would have to change also. And then the whole diagram would be broken.”

What’s the issue here? Nothing, really. It’s beautiful. It’s just not rigorous enough to pass muster in a geometry class. And while some people might be tempted to push on students — how do you KNOW that it’s symmetrical? how do you KNOW that angle will change? — my experience is that this only angers teens, who grow disgruntled that their own reasoning has been invited to the party (finally!) and then rejected as unacceptable.

Far better, I now believe, to help students make GOOD arguments, where “GOOD” is defined as “whatever is acceptable from the point of view of the course.”

In practice, this means that I have started to include a lot of visual choices for students as they are asked to form justifications. Here is a snapshot from a problem set I assigned in class:

There are now choices. Your argument will look like one of these four choices — always. Learn to use them, and they’ll become part of your language of justification. (Thanks, self-explanation literature.)

In short, students’ choices are constrained. To put it one way, especially in the early stages of learning, students are allowed to choose a wrong argument but they are not allowed to make a bad argument. (Again by bad I don’t mean “bad,” just “not ideal within the context of this course.”)

I think I’ll be working on improving my teaching of proof until the day they drag me feet first out of a classroom, but these two ideas feel like progress to me.

Two Kinds of Creative Lives

I always enjoy Austin Kleon’s writing, and this morning’s blog post on his morning routine was no exception. He’s writing about an artist’s routine for sparking daily creativity:

My method is cribbed from The Sedaris Method: write things down all day in a pocket notebook, then wake up the next morning, fill out my logbook, and then write longhand about yesterday.

When I don’t know what to write about I answer “The Best Thing” prompt or draw until I feel like writing.  (This morning I wrote about banana bread and palm trees.)

While Kleon is talking about being a writer and an artist, you might extend his recipes and routines for any sort of creative work — that could include mathematical research, making sense of a dense research literature, or problem solving. And it could even encompass learning, which I’ve long thought of as a creative act all in itself.

What are the ingredients to this sort of creative life?

  • Regularity
  • Routine
  • Dedicated time to producing something at the start of the day

… and so on.

In the world of creativity advice any excuses tend to be dismissed as just that — excuses. I have no doubt at all that this is a recipe for a happier creative life. But let’s be real for a moment: you can either have a full-time job, be an engaged parent, or spend an hour every morning with an open notebook, but I don’t see how you can do all three.

Of course, the life of the full-time writer and artist is not without its own potential distractions. I don’t mean to minimize that. And I also have no doubt that if you want to make the most of your creativity, the full-time artist is the way to go. If you want to get good at anything it takes time and sacrifice. It may not be financially comfortable.

But there’s another kind of creative life out there that many of us live on the margins of our family and work life. Art or learning never gets to take priority in this life. So what sorts of habits and routines support a creative life that’s spent between the cracks of work and family?

I don’t know. I mean, I have the dumb advice that I could give a younger version of myself. (Write on the subway. The family never owes you time for art. Leave things behind. Let yourself get obsessed and bored. Yes, keep a journal. Take the long view. Skip a day. Skip a week. Skip a month. Skip a year.) But my point is not that I have any advice on how to live this sort of life — it’s just that full-time artists are in no position to offer it.

I’d like to hear more about creativity from the part-time poets, the bus-ride novelists, the teacher painters, the subway mathematicians and Saturday night musicians. Because when it comes to life and creativity, that’s most of us.

Yet Another Thing that Humans and Viruses Have in Common

When an epidemic rages through a population, at first it faces no immunity at all. The disease constantly encounters fresh meat. There is nothing that can stop it.

Eventually, the population gains some immunity, yada yada yada, the disease doesn’t spread as easily and the epidemic slows down. There is a point when the disease, on average, infects just one more person per infected person. You have heard this before. It is the “herd immunity threshold.”

What is more surprising is that even once the population has hit this immunity threshold, the epidemic continues to grow — for a time. Epidemics have a kind of momentum that pushes infections even past this threshold. If you know about this already, you probably learned about it the way I did: from reading experts discuss the COVID-19 pandemic.
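
You can see the overshoot in a bare-bones SIR simulation. Here is a minimal sketch in Python; the parameter values are illustrative assumptions, not estimates for COVID-19 or any real disease.

```python
# A bare-bones SIR model. With R0 = 2.5 the herd immunity threshold is
# 1 - 1/R0 = 60% of the population, but infections keep accumulating past it.
# Every number here is an illustrative assumption.

def simulate_sir(r0=2.5, infectious_days=10, population=1_000_000, days=400):
    beta = r0 / infectious_days       # transmissions per infectious person per day
    gamma = 1 / infectious_days       # recoveries per infectious person per day
    s, i, r = population - 10.0, 10.0, 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return i + r                      # everyone who was ever infected

threshold = (1 - 1 / 2.5) * 1_000_000
print(f"Herd immunity threshold: {threshold:,.0f} people")
print(f"Ever infected after 400 days: {simulate_sir():,.0f} people")
```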

I immediately liked the “momentum” framing but found myself having a difficult time thinking precisely about it. Then, while reading about mathematical ecology the other day, I learned something that helped it all snap into place for me: this happens with people too.

The notion of population momentum makes a lot more sense to me in a human case. Probably if I was a virus the epidemic case would be easier, but I am what I am. Wikipedia has a great exposition of it, including this handy chart:

In the first generation, the fertility rate is 4 and the 200 fertile people give birth to 400 children — some pretty robust population growth, given the age distribution of the population. Then, at time = 1, the fertility rate drops and parents have only two kids each, merely replacing the fertile population as the old population ages out (dies). Even though the fertility rate has dropped, the 400 children born during the boom at t = 0 are still working their way through the population. Those 400 children are going to have two children each, and that’s going to help the population grow for a bit longer. Soon enough, though, the fertile population will just be replacing itself.
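
Here is a tiny version of that story in Python, using three coarse age classes; the starting numbers are illustrative assumptions meant to echo the chart rather than reproduce it exactly.

```python
# A toy model with three age classes: children -> fertile adults -> elderly.
# Each step, everyone ages one class, the elderly die, and each couple of
# newly fertile adults has `fertility` children. All numbers are illustrative.

def advance(children, fertile, elderly, fertility):
    fertile, elderly = children, fertile        # age everyone; the old elderly die off
    children = (fertile / 2) * fertility        # each couple has `fertility` kids
    return children, fertile, elderly

children, fertile, elderly = 200, 200, 200      # a population at replacement, pre-boom
print("t=0 total:", children + fertile + elderly)

state = advance(children, fertile, elderly, fertility=4)   # one boom generation
for t in range(1, 6):
    print(f"t={t} total: {sum(state):.0f}")
    state = advance(*state, fertility=2)        # replacement fertility from here on
```

Run it and the total keeps climbing for two more generations after fertility drops to replacement, then levels off.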

This phenomenon was first described by Nathan Keyfitz in 1971. He directs the idea to policymakers who are reluctant to offer contraception for fear that their countries will stop increasing in population. “In some countries hesitation in making contraception available is rationalized by the view that the country is not yet ‘full,’” he writes. “Concern that total numbers will taper off prematurely is misplaced.” He goes on to explain how to calculate the total “ultimate” population once fertility reaches replacement levels.

It’s this exact same phenomenon that governs the growth of a virus, even after (say) a vaccine is introduced that brings the rate of infection down to 1. I find it interesting that some of the same population dynamics govern both humans and viruses. It suggests to me that a path towards better educating others about epidemic dynamics would be to start with human stuff.

The Generalized Logistic Function and Pandemic Modeling

The logistic function was invented by Pierre Verhulst to represent exponential growth that levels off. To do this he chose the simplest thing he could think of: each additional “birth” knocks down the growth rate by an equal amount.

Exponential growth:  \frac{dP}{dt} = rP

Logistic growth:  \frac{dP}{dt} = rP(1-\frac{P}{K})

But what if the leveling off happens faster? Or slower? What if population growth really slows down after the first few generations? What if it only levels off when environmental resources really get strained?

OK, no problem: we can use basically any formula to moderate this exponential growth. The “generalized logistic function” adds a power to the logistic function that’s basically a shrug of the shoulders and a degree of flexibility. “Go ahead,” it says, “do whatever you want. As long as it starts exponential, approaches the carrying capacity, and is shaped like a nice ‘S’.”

Generalized Logistic growth: \large \frac{dP}{dt} = rP (1 - (\frac{P}{K})^n)

This function can tolerate a bit of funkiness, compared to the vanilla logistic. Note how with these parameters there’s a bit of a weird asymmetry as it approaches the carrying capacity.

https://www.desmos.com/calculator/stk54ocp5e
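
If you would rather poke at the same shapes numerically, here is a minimal Euler-integration sketch in Python; the parameter values are arbitrary picks for illustration, not fitted to anything.

```python
# Euler integration of dP/dt = r * P * (1 - (P/K)^n).
# n = 1 is the ordinary logistic; other values bend the approach to K.
# All parameter values are arbitrary picks for illustration.

def integrate(r=0.5, K=1000.0, n=1.0, P0=10.0, dt=0.1, steps=600):
    P, path = P0, [P0]
    for _ in range(steps):
        P += dt * r * P * (1 - (P / K) ** n)
        path.append(P)
    return path

logistic = integrate(n=1.0)
generalized = integrate(n=0.3)    # approaches the carrying capacity more gradually

for step in (0, 150, 300, 450, 600):
    print(f"step {step:3d}: logistic = {logistic[step]:7.1f}, generalized = {generalized[step]:7.1f}")
```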

This generalized logistic function might better fit some S-shaped data. It adds another parameter, which is to say it introduces another degree of freedom. This amounts to wiggle room for researchers who use it. But without any particular reason to think the function should be one way or the other, it all amounts to guessing.
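
To make that wiggle room concrete, here is a small, hedged experiment with scipy: generate noisy S-shaped data from a logistic curve, then fit it with both a logistic and a scaled error function. Both shapes hug the data about equally well, which is exactly the trouble. The synthetic data, true parameters, and starting guesses are all made up for illustration.

```python
# Fit the same noisy S-shaped "data" with two different shapes: a logistic curve
# and a scaled error function. Neither fit says anything about mechanism.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def logistic_curve(t, K, r, t0):
    return K / (1 + np.exp(-r * (t - t0)))

def erf_curve(t, K, a, t0):
    return K * 0.5 * (1 + erf(a * (t - t0)))

t = np.linspace(0, 60, 61)
rng = np.random.default_rng(0)
data = logistic_curve(t, 1000, 0.2, 30) + rng.normal(0, 20, t.size)

for name, f, p0 in [("logistic", logistic_curve, (1000, 0.1, 25)),
                    ("erf", erf_curve, (1000, 0.1, 25))]:
    params, _ = curve_fit(f, t, data, p0=p0)
    rmse = np.sqrt(np.mean((f(t, *params) - data) ** 2))
    print(f"{name:8s} fit RMSE: {rmse:.1f}")
```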

It’s this sort of guessing that can sometimes get you in trouble as a mathematical modeler.

Early on in the COVID-19 pandemic there was a clear need for information about the virus. What sort of spread should governments expect? Were hospitals at risk of overcrowding? How much death might the world be facing?

In the confusing weeks of March and April, one group of mathematical modelers filled the void and gained prominence above all others: IHME. They created a clear website that made predictions as to when the United States would experience shortages of hospital beds and ventilators.

But before long, the predictions of IHME came under fire from the epidemiological community. The headlines didn’t pull any punches: “Influential Covid-19 model uses flawed methods and shouldn’t guide U.S. policies, critics say.”

What were the issues? The article names a few. But the fundamental problem was this: IHME was just fitting data to curves, and any curve would do.

At the start of the pandemic, the IHME group was using a bit of software called “CurveFit” to make predictions. As you might guess, the software tries to find functions that best fit the given data. So far so good! The IHME group began with the generalized logistic function and looked for the best generalized logistic for existing COVID data.

Then, their plans changed. “We first tried building the analysis using the sigmoidal function,” they write, “we then discovered that the ERF error function provided a better fit to the data.”

Did the ERF error function fit the existing data better? I’m sure it did. But what does this function have to do with population growth? Absolutely nothing, is the answer. It’s just a shape. And for the researchers, that’s what the generalized logistic function was as well — just a shape, not an attempt to capture the underlying dynamics of a new viral epidemic. More from the April article:

“[IHME] doesn’t even try to model the transmission of disease, or the incubation period, or other features of Covid-19 … It doesn’t try to account for how many infected people interact with how many others, how many additional cases each earlier case causes, or other facts of disease transmission that have been the foundation of epidemiology models for decades.”

The message here for mathematical modelers seems to be that there is more to modeling than fitting the data. Unless we have some inkling of why a model should have the shape that it does, we should have absolutely no confidence that a fit is anything but a fluke. Then again, there’s another message here too: modeling is hard. The last paragraph of that article praises more conventional epidemiological models for their more sensible predictions:

“A different, data-driven model from researchers at the University of Washington predicts “about 1 million cases in the U.S. by the end of the epidemic, around the first week in June, with new cases peaking in mid-April,” said UW applied mathematician Ka-Kit Tung, who led the work.”

By early June about 14 million people had been infected with the novel coronavirus. IHME is hardly the only modeling group whose predictions veered wildly from reality, and it’s hard to blame anyone for that. It’s never going to be easy to make predictions about unprecedented times.

How To Tame a Function

If you grew up around animals, you probably know a bit about their reproductive cycles. Seeing as I did not, I have been slow to learn what I know about the Birds and the Bees of the birds and the bees. In the absence of really any first-hand contact with animal life, I have had to resort to books for my basic education in animal reproduction.

But what I’ve learned has deepened my understanding not just of the nitty-gritty biology facts but of the fairly abstract mathematics of chaos.

Here’s a bit of biology I didn’t know: the rules of mating for domesticated animals like dogs and cattle are different than they are for their wild cousins. The big difference is the timing. Wild animals often have a narrow reproductive window during a certain time of the year. But the biological changes that come along with tameness somehow also bend the rules of pairing off.

Here is how it’s described in the thrilling “How to Tame a Fox (and Build a Dog)”:

“All wild animals breed within a particular window of time each year, and only once a year. For some, that window is as narrow as a few days and for others it’s weeks or even months. Wolves, for example, breed between January and March. The window for foxes is from January to late February. This time of year corresponds to the optimal conditions for survival; the young are born when the temperature, the amount of light, and the abundance of food offer them the best odds for a successful launch into the world. With many domesticated species, by contrast, mating can occur any time during the year and for many, more than once.”

This is of course also true of domesticated apes, i.e. us, the human people.

Imagine an experiment in population growth. We take a small group of wild animals to a protected island. These wild animals have abundant food and no predators on the island. A pretty sweet deal, all said. We set these wild beasts loose to eat, drink and … you know, have fun.

Well, they do have fun, and the population grows. But this is a wild species with an extremely narrow mating season. They can only reproduce once a year. But when they do, they give birth to big broods. This is a “nonoverlapping generation,” and its mathematics happens in nice, even steps. We can calculate the size of each generation one step at a time:

Pop_{n+1} = Pop_{n} \times r

But as Thomas Malthus pointed out way back at the turn of the nineteenth century, a good thing can’t last. If the population grows like this, it will quickly use up all of the resources in this island paradise. In which case, the population will be unable to continue to grow.

One of the first people to put Malthus’ ideas into math was Pierre Verhulst. He described “logistic growth” (a name intended to echo unrestricted “logarithmic growth”) as a simple (if arbitrary) way to slow down overpopulation. The key is that the environment can only handle so much of a species — its “carrying capacity,” K — and each additional individual in the population slows the population down by an equal amount.

Pop_{n+1} = Pop_{n} \times r \times (1 - \frac{Pop_{n}}{K})

(Ben Orlin has a very clear exposition of the logistic in “Change Is The Only Constant.”)

You may be familiar with the logistic’s famous S-shape:

However, take care! The S-shaped curve is not “wild” logistic growth, which happens in strictly nonoverlapping discrete generations. No, the S-curve is the tame, domesticated growth of Golden Retrievers, Angus Cattle, and American People, who reproduce with more flexibility. You could even say, in the case of humans and other such species, that their populations grow continuously rather than in discrete yearly steps.

Ah, no worries though. Discrete functions are just like continuous functions, minus the continuity. They’re the dots without the lines. They aren’t meaningfully different, are they?

To be sure, sometimes the discrete function makes a nice smooth S, gliding into the carrying capacity without much fuss:

Thank you to the Desmos user who had the patience to write all those compositions of functions. https://www.desmos.com/calculator/rrrqivja2w

Then again, sometimes our island of wild animals with nonoverlapping generations ends up bouncing around the carrying capacity, each year their population crashing or rising above that set parameter:

It all depends on that growth rate. For some values of the growth rate, this oscillation actually converges on that carrying capacity, resulting in nice agreement between the discrete and continuous cases:

Ah, but pump that rate of increase up high enough and you get the real fun, which is utter chaos:

 

The population almost goes extinct for a minute there, but then experiences a nice bout of exponential growth. That’s chaos for you.
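
Here is the whole menagerie in one minimal Python sketch, iterating the discrete logistic from above; the particular growth rates are my own picks to land in each regime.

```python
# Iterate the discrete logistic: Pop_{n+1} = Pop_n * r * (1 - Pop_n / K).
# Different growth rates give a smooth S, damped oscillation, or chaos.
# The r values and the starting population are illustrative picks.

def discrete_logistic(r, K=1000.0, pop=10.0, generations=30):
    path = [pop]
    for _ in range(generations):
        pop = pop * r * (1 - pop / K)
        path.append(pop)
    return path

for r, label in [(1.8, "smooth approach"),
                 (2.8, "damped oscillation"),
                 (3.9, "chaos")]:
    last_few = ", ".join(f"{p:.0f}" for p in discrete_logistic(r)[-5:])
    print(f"r = {r} ({label}): last generations -> {last_few}")
```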

Animal species themselves can be wild or domesticated. It turns out that functions can be domesticated as well. In this case, it’s the continuous version of this function that is the tamer, better behaved variety.

There are indeed animals that are best modeled by the continuous function. There was a time when some people were very gung-ho about the logistic. In the 1920s Raymond Pearl declared the continuous, nice S-shaped logistic to be the “true law of population growth.”

It’s not. Robert May pointed out in the 1970s that the discrete case, though extremely simple, exhibits a huge range of behavior. “Their rich dynamical structure,” May writes, “and in particular the regime of apparent chaos wherein cycles of essentially arbitrary period are possible, is a fact of considerable mathematical and ecological interest, which deserves to be more widely appreciated.” The wildness of the discrete function may be a better fit for species that have nonoverlapping generations — many insect populations give birth in strict, discrete steps, like cicadas that emerge only every thirteen years.

Chaos is in some ways a very abstract phenomenon, but learning more about ecology and population growth has made it very real for me. I’d seen the logistic map many times in the past, but I don’t think I’ve ever quite understood it until I connected it to its origins in animal populations. I’m left with a lesson for my teaching: there are some mathematical ideas that just work better when you learn them in their biological, natural setting.

Sources:

Steven Strogatz, “Nonlinear Dynamics and Chaos”

Charles J. Krebs, “Ecology: The Experimental Analysis of Distribution and Abundance”

Piranesi, Escher and Imaginary Prisons

Quoting Wikipedia:

The Prisons (Carceri d’invenzione or Imaginary Prisons) is a series of 16 prints by the Italian artist Giovanni Battista Piranesi in the 18th century. They depict enormous subterranean vaults with stairs and mighty machines.

Continued:

In the second publishing, some of the illustrations appear to have been edited to contain (likely deliberate) impossible geometries … Piranesi’s dark and seemingly endless staircases and blocked passages prefigure M. C. Escher‘s images with endless stairs such as his 1960 lithograph Ascending and Descending


“Piranesi” is the title of Susanna Clarke’s wonderful new book. The novel is populated with mathematicians and scientists — but I don’t want to give too much away. It’s both charming and thrilling, and like these drawings poses questions about prisons and imprisonment that aren’t easy to answer.

Polling a Couple Hundred Teachers on Twitter about Pandemic Teaching

I’m not really sure what it says about me that I enjoy designing and analyzing survey data. Other people exercise or write poetry. I tweet polls. Honestly, I’m starting to think that I might be a bit of a nerd. Going to want to keep an eye on that.

Back in July, when things weren’t necessarily better but were somewhat different, schools were still coming up with plans for the fall. Would it be live, in-person? Would some students be live while others learned online? These options were hotly debated, and I asked teachers on Twitter what they thought of the options.

Here were the responses:


Elementary teachers seemed split between all-live and all-remote, whereas middle and high school teachers showed a distinct preference for all-remote.

What does this mean? Teachers have three things to worry about this year: safety, their own working conditions, and learning conditions for students.

Elementary teachers may or may not have been aware of research suggesting that younger children get the virus less often — probably their innate immune systems do a better job fighting it off. As an elementary teacher myself, I can attest that learning conditions are far, far worse for younger children who are logged into online learning. They seem confused by and frustrated with technology. So student conditions are worse. And as I hear from friends and see in my child’s own school, being a non-specialist classroom teacher is really really really hard this year for K-5. Teachers are often getting fewer breaks and are responsible for making sure kids go to the bathroom in the right way and wash their hands and wear their masks and so on and so on — working conditions are worse.

This all could explain why elementary teachers showed a preference for all-live versus hybrid or all-remote.

In middle and high school, teachers seemed fairly enthusiastic about remote learning. That might have been primarily about safety concerns. Still, I’d heard from many people that, from the perspective of learning conditions, all-remote would be better for learning than hybrid. And it would be better for working conditions as well — wouldn’t you like to work from home? Wouldn’t you prefer to just focus on making remote learning work well, rather than dealing with yet another confusing new teaching mode?

That could explain why middle and high school teachers showed a preference for all-remote teaching.

A few months into this thing, how is it going? Here are a pair of polls to think about.

Poll #1 (those are percentages):

Poll #2 (also percentages):

So, what do we make of this?

Fully remote teaching, at the end of the day, means worse working conditions for teachers. It’s isolating, and you don’t get to see your students or colleagues. From the perspective of optimizing student learning it may very well be true that purely remote learning is more effective. But in every other sense it’s less meaningful for teachers, and for most kids too.

That doesn’t necessarily mean that teachers have changed their minds from that first survey. Maybe safety considerations override other concerns, and teachers still prefer to work fully remotely even if it’s not working any better.

Or maybe teachers were overly optimistic about what they could accomplish through remote learning this autumn. Maybe the relationships have really suffered. Conversely, maybe the relationships and social life inside a school have been better than teachers expected in their hybrid situations.

From conversations online and in person, I know that there are a lot of ways for schools to mess this up. Lots of teachers are having a hard time at work this year. I certainly spend a lot of time at my wits’ end.

Still, it’s a real privilege to get to go back to school and do the thing that I know how to do. My school has been hybrid and, so far, it’s been good.

Make a collection of worked examples for your classes

I’m filing away worked examples for my classes into a Google Slides presentation that I update daily. I’ve posted the presentation at the top of my Google Classroom page, the idea being this is a direct way for students to find examples. I don’t know if the kids will actually refer back to it, but I like collecting the examples now that my teaching leans so heavily on digital resources.

Here are some of the examples I’m fond of:

A Perfect Problem: Difference of Squares

This is a long post. The first half was written by Benjamin Dickman, who shared the problem with me. The second half was written by me, Michael Pershan. Enjoy these two different maps of the same mathematical terrain.

Part One: Benjamin’s Writeup

Part Two: Michael’s Writeup

[BD]

Let us use the word ‘problem’ to refer to a question for which the method of solution is unknown at the outset. Problems can be viewed on a continuum that ranges from “trivial” to “intractable,” but where a problem falls on that continuum is a function of the individual or group trying to solve it and the resources to which they have access. There is, of course, no single perfect problem for all; but, here is a proposal for one of many perfect problems for some.

There is a wonderful problem for secondary school students looking to be stretched that asks which numbers can be written as the difference of two squares; here, we are roughly in the area of mathematics referred to as “number theory,” and are speaking of numbers that are non-negative integers. It is deeply unfortunate that number theory is not an area of mathematics taught more in our secondary schools across the world; as a result, there are many examples of number theoretical problems that can be effectively posed for students, but for which the techniques or strategies involved are not yet familiar.

Here is a proposed method – in other words, a spoiler – for characterizing the numbers that can be written as a difference of squares: Observe a^2 – b^2 = (a-b)(a+b); if a and b have the same parity, then the factors a-b and a+b are both even, which means their product is a multiple of 4. If a and b are of different parity, then the factors a-b and a+b are both odd, which means their product is odd. So, a necessary criterion for a number to be expressed as the difference of squares is that it is either a multiple of 4 or odd; it turns out that this is a sufficient criterion, too, as established by the following two identities:

4n = (n+1)^2 – (n-1)^2 and 2n+1 = (n+1)^2 – n^2
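
Here is a quick brute-force check of this characterization in Python, just to see it hold for small numbers (my own aside, not part of Benjamin’s write-up):

```python
# Brute-force check: a positive integer n is a difference of two nonnegative
# squares exactly when n is odd or a multiple of 4.
def is_difference_of_squares(n, limit=200):
    return any(a * a - b * b == n
               for a in range(limit) for b in range(a + 1))

for n in range(1, 100):
    assert is_difference_of_squares(n) == (n % 2 == 1 or n % 4 == 0), n
print("The characterization checks out for n = 1, ..., 99")
```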

Uncovering this result is already an opportunity for exciting mathematical exploration with students; their thinking is sure to proceed in a manner much less linear than the description above.

Whenever a problem is solved, there are several options around how to move forward. Here are three such possibilities:

  1. Abandon the problem and move on to an unrelated one;
  2. Try to derive the solution in a new way; or
  3. Try to solve a related problem that is a bit more difficult.

Rich mathematics may (or may not) be uncovered through any of the aforementioned choices, but we will focus here on the third option. (There are slightly different ways of deriving the solution above that will be almost certainly unfamiliar to students; for example, we can observe that a number squared is always 0 or 1 modulo 4; so, the difference of two squares is necessarily odd or a multiple of 4, and this sort of phrasing allows one to bring in topics that are otherwise unseen in K-12 school mathematics: modular arithmetic, quadratic residues, and so forth.)

Our related problem is as follows: Given a nonnegative integer n, in how many ways can it be written as a difference of squares?

This can be viewed as a direct generalization of the earlier problem: if we know how to tell when the answer is “zero ways,” then we know which numbers cannot be written as a difference of squares.

One of the classical strategies for mathematical problem posing is to start with small cases; however, it is often presumed that “small” refers to magnitude (absolute value) and that the ordering from smaller to bigger proceeds additively. That is, one might try looking at 0, then 1, then 2, then 3, etc. But, for a problem whose underlying structure is multiplicative – for example, a problem that might be more easily expressed in the language of factors or factorizations – this additive procession can obfuscate important patterns.

For our problem, we know that the only numbers that can be expressed as a difference of squares are odd numbers and multiples of 4; so, let us begin by investigating the former and see where it leads.

If we have an odd number expressed as a^2 – b^2, then we also have a factor pair for that number: a-b and a+b. Indeed, any factor pair for an odd number can be written in this manner, for two factors of an odd number must both be odd, and this will mean that their average is a whole number, from which we can adjust up and down by the same amount to recover a representation in the a-b and a+b form. Specifically, we use a as the factor pair’s average and adjust by b. This all becomes more clear by way of example.

Consider the odd number 15, which has factor pairs (1, 15) and (3, 5). For the first factor pair, we find the average of 1 and 15 to be 8, and note that 1 = 8-7 and 15 = 8+7. As a result, we can express 15 as 1(15) = (8-7)(8+7) = 8^2 – 7^2. Similarly, we can look to the second factor pair and find the average of 3 and 5 to be 4, and note that 3 = 4-1 and 5 = 4+1. As a result, we can express 15 as 3(5) = (4-1)(4+1) = 4^2 – 1^2.

We have now established a matching between two representations of 15: the first representation is as a specific difference of squares, and the second representation is as a specific factor pair. The number of factor pairs is usually equal to half the number of factors; the only exception is if the number of factors is odd, which occurs precisely when we are dealing with a perfect square. As we deal with nonnegative (rather than positive) integers, let us establish in that setting that we will continue to use the number of factor pairs as the number of ways to express our number as a difference of squares; so, we will take the total number of factors, add 1, then divide by 2 for our result. Again, let us clarify by way of example.

Consider the odd square 9, which has factor pairs (1, 9) and (3, 3). As in the example with 15, we use these factor pairs to produce the following representations of 9 as a difference of squares: 9 = (5-4)(5+4) = 5^2 – 4^2 and 9 = (3-0)(3+0) = 3^2 – 0^2. Note that, if we were to adhere to positive integers only, we would not be able to use zero in our latter representation. The result of this is still that the number of representations, 2, is equal to our number of factor pairs; but, the number of factors is 3, so it is not quite right to say we have 3/2 representations. Instead, we add 1 to the number of factors, thereby double-counting the factor 3, which gives us 4 total factors (counted with multiplicity among factor pairs) and halving this gives us the desired result: 2 ways to represent the odd square 9 as a difference of squares.

As we segue to multiples of 4, we find that matters are slightly more complicated. For an illustrative example, consider that of 8: its factor pairs are (1, 8) and (2, 4). The latter factor pair generates a difference of squares: 2(4) = (3-1)(3+1) = 3^2 – 1^2. Unfortunately, matters go somewhat awry with the former factor pair: the average of 1 and 8 is 4.5; it is true, numerically, that 1(8) = (4.5-3.5)(4.5+3.5) = 4.5^2 – 3.5^2; however, we have decided only to use nonnegative integers, which means that this difference of squares is inadmissible for our present purpose.

The issue at hand for the above-described example is that 1 and 8 have different parity; as a result, their average is a non-integer. To resolve this, we need to ensure that every factor pair for the multiples of 4 has two factors with the same parity. As the product is even, this means, in particular, that each of the factors needs to be even; so, we propose the following resolution: Given a number n = 4m, factor out the 4, which is equal to 2^2, and consider all of the factor pairs for m. Next, we modify every factor pair by multiplying each factor by 2; as we double each of the factors, we end up with 4m as the product, which is equal to our starting number of n. Let us illustrate matters again by way of example.

Consider 60, which is an even multiple of 4. Let us now factor out a 4, which leaves us with the number 15 to consider. We saw earlier that 15 has factor pairs (1, 15) and (3, 5). We can now modify these pairs by doubling the factors in each to yield (2, 30) and (6, 10); these now give us all of the factor pairs with the same parity for 60, which means we can express 60 as a difference of squares using them: 2 and 30 have an average of 16, which leads to the representation 2(30) = (16-14)(16+14) = 16^2 – 14^2; similarly, 6 and 10 have an average of 8, which leads to the representation 6(10) = (8-2)(8+2) = 8^2 – 2^2.

The result of this line of thinking is that when n is a multiple of 4, the number of representations of n as a difference of squares is the number of factor pairs for n/4; as was the case for the odds, the number of factor pairs is usually half the number of factors, but in the case of a perfect square we would need to add 1 to the number of factors to count them with multiplicity among the various factor pairs. One more example should do the trick in clarifying this matter.

Consider 36, which is an even multiple of 4 and a perfect square. We can divide it by 4 to get 9, which we saw earlier yields the factor pairs (1, 9) and (3, 3). Multiplying each factor by 2, we arrive at (2, 18) and (6, 6). Respectively, these yield 10^2 – 8^2 and 6^2 – 0^2 as the two ways in which 36 can be represented as a difference of squares. Just as occurred with our odd square case examined above, we are using the number of factor pairs (here, for 36/4), but this is slightly different from the number of factors: there are only three factors across the relevant factor pairs, but we count one of them (the 6) with multiplicity as it appears twice in the pair (6, 6). As a result, we end up with 36/4 = 9, which has 3 factors; adding 1, we get 4 factors; dividing 4 by 2, we get our answer: there are two ways to represent 36 as a difference of squares.

If we decide to summarize the above thinking succinctly, then we can use the ceiling function (rounding, if necessary, to the nearest integer greater than or equal to its input) for our final result. Defining d(n) to be the number of divisors, or factors, of the natural number n, and S(n) to be the number of ways in which n can be represented as a difference of nonnegative squares, we have the following:

If n is odd, then S(n) = ceil(d(n)/2);

If n is even but not a multiple of 4, then S(n) = 0;

If n is even and a multiple of 4, then S(n) = ceil(d(n/4)/2)

Finally, we recall that the number of factors can be computed if we know a natural number’s prime factorization. In particular, if we write n as a product of primes p_k raised to the respective powers a_k, then the number of factors is the product of (a_k + 1) across all k. We close out with one more example.

The number 180 is an even multiple of 4; so, S(180) = ceil(d(180/4)/2). But, what is d(180/4)? Since 180/4 = 45, and 45 has prime factorization 3^2 · 5^1, we have that its number of factors is equal to (2+1)(1+1) = 3(2) = 6; so, we find d(180/4)/2 = 6/2 = 3, and ceil(3) = 3. This tells us that the number of representations of 180 as a difference of squares is 3. Indeed, we can verify this by listing them out exhaustively:

180 = 2(90) = (46-44)(46+44) = 46^2 – 44^2;

180 = 6(30) = (18-12)(18+12) = 18^2 – 12^2; and

180 = 10(18) = (14-4)(14+4) = 14^2 – 4^2

Q. E. D.
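
Before turning things over to Michael, here is a short Python sketch of this final recipe, with the divisor counting done naively and the answer checked against brute force. The code and its helper names are mine, added for illustration; they are not part of Benjamin’s write-up.

```python
# S(n): the number of ways to write n as a difference of two nonnegative squares,
# computed from the divisor-counting recipe above and checked against brute force.
from math import ceil, isqrt

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def S(n):
    if n % 2 == 1:
        return ceil(num_divisors(n) / 2)
    if n % 4 != 0:
        return 0
    return ceil(num_divisors(n // 4) / 2)

def brute_force(n):
    # If n = a^2 - b^2 with b >= 0, then a is at most (n + 1) / 2.
    count = 0
    for a in range(n // 2 + 2):
        b_squared = a * a - n
        if b_squared >= 0 and isqrt(b_squared) ** 2 == b_squared:
            count += 1
    return count

for n in (9, 15, 36, 60, 180):
    print(n, S(n), brute_force(n))

assert all(S(n) == brute_force(n) for n in range(1, 1000))
```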

[MP]

Here is a table that is worth spending some time mulling over. 

Here’s what it’s all about: differences of squares. 

Given a number, can you tell whether it’s possible to write that number as a difference of squares? Is it possible to characterize all the numbers that are possible to write as a difference of squares? And is there a systematic way to tell how many ways a given number can be written as a difference of squares? An algorithm? A formula?

Let’s tackle these questions in two parts:

  • Given a number, can you tell whether it’s possible to write it as a difference of squares, at all?
  • If a number can be written as a difference of squares, how many different ways are there to do it? 

I. 

To start on the first question, let’s note that every odd number can be written as a difference of squares. This is due to a wonderful property of squares — they can be decomposed into a sum of odd numbers. Every odd number can be seen as the difference between a large square and some inner, removed square.

That’s not really an explanation as much as restating the statement…ah, well. Here’s a picture:

So, let’s start checking out the small even numbers. Can they be written as a difference of squares?

Starting with the smallest, 2 can definitely not be written as a difference of squares. The smallest difference of squares is 2^2 – 1^2 = 3, so 2 is a no-go.

How about 4? That’s a no. (3^2 – 2^2 = 5.) How about 6? Also a no-go.

But, wait! 8 works: 3^2 – 1^2 = 8.

So…why is that? Why can some even numbers be written as a difference of squares, while others cannot?

Any difference of squares can be written as the product of two numbers: a^2 – b^2 = (a-b)(a+b). This factoring move can help explain what’s going wrong with so many of these even numbers.

If 6 were to have a representation, then 6 = a^2 – b^2 = (a-b)(a+b). But there are only so many ways to write 6 as the product of two factors. To make matters worse, the only ways to factor 6 involve one even and one odd factor. To see why this is a problem, note that while 6 = 3 x 2, this couldn’t produce a difference of squares:

a + b = 3

a – b = 2

2a = 5

a = 2.5

b = 0.5

And while it is true that 2.5^2 – 0.5^2 = 6, we were only looking for whole numbers.

The issue, then, is that some even numbers can be factored only into pairs of numbers where one is even and the other odd, i.e. of different parity. This explains why 8 works: 2 x 4 = 8, and setting a + b = 4 and a – b = 2 results in (3 + 1)(3 – 1) = 3^2 – 1^2 = 8.

As long as the prime factorization of an even number N has just one factor of 2 (as in e.g. 6, 14, 42, 30) then it can only ever be factored into an even and odd factor. That will never work. 

As long as your even number is at least divisible by 4, it will always be possible to find at least one solution:

2^n (2m + 1)

2 (2^n m + 2^(n-1))

a + b = 2^n m + 2^(n-1)

a – b = 2

2a = 2^n m + 2^(n-1) + 2

a = 2^(n-1) m + 2^(n-2) + 1

b = 2^(n-1) m + 2^(n-2) – 1

Both a and b are integers, since n ≥ 2 makes every term above a whole number.

There is one other issue to worry about, and that’s 4 itself. 4 = 2 x 2 = (a + b)(a – b) demands that a = 2 and b = 0. Should we count that? It is true that 2^2 – 0^2 = 4. It’s not so interesting, which argues in favor of tossing it out of consideration. But sometimes these sorts of uninteresting cases can help simplify formulas and generalizations.

Let’s keep an open mind, for now, as to whether we’d rather deal with differences of positive squares or might expand our focus (slightly) to include non-negative squares.

II.

Let’s start with odd numbers. Every odd number can be represented as a difference of squares. But how many representations are there for each odd number?

Consider numbers that are the products of two distinct primes, like 15 and 21. They can all be represented in two different ways as differences of squares.

E.g. 15 = (8 + 7)(8 – 7) = (4 + 1)(4 – 1)

E.g. 21 = (11 + 10)(11 – 10) = (5 + 2)(5 – 2)

Then again, this comes as no surprise. If N = pq for p and q both odd primes, the only factor pairs are 1 x pq and p x q. All the factors are odd, so there are no parity problems — they all produce OK differences of squares.

Not much different for numbers like 147 or 75, which are a product of a prime and a square of a prime: 

E.g. 147 = (74 + 73)(74 – 73) = (26 + 23)(26 – 23) = (14 + 7)(14 – 7)

E.g. 75 = (38 + 37)(38 – 37) = (14 + 11)(14 – 11) = (10 + 5)(10 – 5)

All of this still makes sense — 147 and 75 have 6 factors, all odd. That leads to 3 factor pairs, all which work for differences of squares. 

In other words, all we’re doing is counting factor pairs, i.e. counting factors and dividing by 2.

E.g. 225 = 3^2 * 5^2 = (113 + 112)(113 – 112) = (39 + 36)(39 – 36) = (25 + 20)(25 – 20)

= (17 + 8)(17 – 8) = (15 + 0)(15 – 0)

This makes sense for 225, which has 9 factors but (including its square root) 5 factor pairs; 9/2 = 4.5, round that up and you get 5.

There is a much better-known number theory function that counts divisors, and it’s multiplicative:

If m and n are relatively prime, divisors(mn) = divisors(m)divisors(n)

So, if you have the prime factorization of a number and it’s odd, no big deal, you can find out how many ways it can be represented as a difference of squares, no trouble:

E.g. p^2 * q^8 * r^3 has 3 * 9 * 4 = 108 divisors, and can therefore be represented in 54 different ways as a difference of squares.

As a formula, for odd N, N can be described as a difference of squares in ceil[divisors(N)/2] ways.

III.

Even numbers

Now, how do we deal with even numbers? Meaning, those that are divisible by 4. (If they’re not divisible by 4, then they can never be expressed as differences of squares.)

When you have an even number, there are always parity problems to contend with when it comes to making a difference of squares:

E.g. for 24 = 2^3 * 3

1 x 24

2 x 12

3 x 8

4 x 6

E.g. for 80 = 2^4 * 5

1 x 80

2 x 40

4 x 20

5 x 16

8 x 10

E.g. for 84 = 2^2 * 3 * 7

1 x 84

2 x 42

3 x 28

4 x 21

6 x 14

7 x 12

Compare, in particular, the results for 21 and 84:

21: 1 x 21, 3 x 7
84: 1 x 84, 2 x 42, 3 x 28, 4 x 21, 6 x 14, 7 x 12

Multiplying 21 by 2^2 gives you three times as many factor pairs. But, of course, 4 of them have mismatched parity.

Consider one other case, before we head towards a formula: 2^2 * 3^2 * 5 * 7. Let’s reason:

  • 3^2 * 5 * 7 should have 3*2*2 = 12 factors and 6 factor pairs, all of which work for differences of squares
  • 2^2 times 3^2 * 5 * 7 should therefore have 3*12 factors and 18 factor pairs
  • Some of these will have mismatched parity, though. We’ll have to toss out all the pairs with an odd factor; as we’ve said, there are 12 odd factors. 
  • That means 18 – 12 = 6 factor pairs, and we are left with what we started with. 

Generalizing, this means that for 2^a * N, where N is odd:

  • N will have divisors(N) factors (all odd) determined by its prime factorization, and ceil[divisors(N)/2] factor pairs
  • 2^a N should therefore have ceil[(a+1)divisors(N)/2] factor pairs
  • But you have to throw out divisors(N) of those factor pairs, since they contain an odd factor.

That leaves as the number of ways to represent this even number as a difference of squares:

ds(2^a N) = ceil[(a+1) divisors(N)/2] – divisors(N)

This formula is certainly ugly, but it works for the cases above, plus a few more:

N | ds(N)
80 = 2^4 x 5 | ceil[5 x 1] – 2 = 3
24 = 2^3 x 3 | ceil[4 x 1] – 2 = 2
84 = 2^2 x 3 x 7 | ceil[3 x 2] – 4 = 2
180 = 2^2 x 3^2 x 5 | ceil[3 x 3] – 6 = 3
36 = 2^2 x 3^2 | ceil[3 x 1.5] – 3 = 2
60 = 2^2 x 3 x 5 | ceil[3 x 2] – 4 = 2
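
As a sanity check on this admittedly ugly formula, here is a quick Python sketch (the helper names are made up for illustration) that evaluates it for the table’s numbers and compares against a brute-force count:

```python
# Check ds(2^a * N) = ceil((a + 1) * divisors(N) / 2) - divisors(N), where N is
# the odd part of the number, against a brute-force count. (Applied here only to
# multiples of 4, which is where the formula is meant to work.)
from math import ceil, isqrt

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def ds_formula(n):
    a = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    return ceil((a + 1) * num_divisors(n) / 2) - num_divisors(n)

def ds_brute(n):
    return sum(1 for x in range(n // 2 + 2)
               if x * x >= n and isqrt(x * x - n) ** 2 == x * x - n)

for n in (80, 24, 84, 180, 36, 60):
    print(n, ds_formula(n), ds_brute(n))
```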

IV.

Can we extend this formula in a meaningful way to even numbers that aren’t divisible by 4? 

42 = 2 x 3 x 7, ceil[2 x 2] – 4 = 0

30 = 2 x 3 x 5, ceil[2 x 2] – 4 = 0

18 = 2 x 3^2, ceil[2 x 1.5] – 3 = 0

 Yes, I think so! 

For 2^1 N, ceil[2 x divisors(N)/2] – divisors(N) = ceil[divisors(N)] – divisors(N) = 0

Can we extend this formula to odd numbers, which aren’t divisible by 2 at all? Well, not quite:

For 2^0 N, ceil[1 x divisors(N)/2] – divisors(N)

But we can patch it up. Rather than subtracting divisors(N), let’s subtract badDivisors(N), the divisors that wouldn’t work for the difference-of-squares tally. Of course, for odd numbers there are no bad divisors, so badDivisors(N) = 0 for every odd N.

Here is our ur-formula, then:

For 2^a N, the number of difference-of-squares representations is:
ceil[(a+1) divisors(N)/2] – badDivisors(N)

For odd N, badDivisors(N) = 0, so the formula simplifies to:

ceil[(0+1)divisors(N)/2] – 0 = ceil[divisors(N)/2]

For 2^1 N, this simplifies to:

ceil[2 x divisors(N)/2] – divisors(N) = 0

Note that for the ceiling function, ceil[x + n] = ceil[x] + n where n is an integer. We can use this to prove something for another special case.

For 2^2 N, our formula simplifies to:

ceil[3 x divisors(N)/2] – divisors(N) 

= ceil[divisors(N)/2 + divisors(N)] – divisors(N)

= ceil[divisors(N)/2] + divisors(N) – divisors(N)

= ceil[divisors(N)/2]

So multiplying an odd number by 4 does not alter the number of ways it can be represented as a difference of squares.

More generally, for 2^(2m) N:

ceil[(2m + 1) x divisors(N)/2] – divisors(N) =

= ceil[divisors(N)/2 + 2m x divisors(N)/2] – divisors(N)

= ceil[divisors(N)/2] + (m – 1) divisors(N)

Though maybe it makes more sense to consider the patterns of growth in two separate cases — the case of square and non-square odd Ns.

N | ds(N)
21 = 3 x 7 | ceil[1 x 2] – 0 = 2
42 = 2^1 x 3 x 7 | ceil[2 x 2] – 4 = 0
84 = 2^2 x 3 x 7 | ceil[3 x 2] – 4 = 2
168 = 2^3 x 3 x 7 | ceil[4 x 2] – 4 = 4
336 = 2^4 x 3 x 7 | ceil[5 x 2] – 4 = 6
672 = 2^5 x 3 x 7 | ceil[6 x 2] – 4 = 8

N | ds(N)
25 = 5^2 | ceil[1 x 1.5] – 0 = 2
50 = 2^1 x 5^2 | ceil[2 x 1.5] – 3 = 0
100 = 2^2 x 5^2 | ceil[3 x 1.5] – 3 = 2
200 = 2^3 x 5^2 | ceil[4 x 1.5] – 3 = 3
400 = 2^4 x 5^2 | ceil[5 x 1.5] – 3 = 5
800 = 2^5 x 5^2 | ceil[6 x 1.5] – 3 = 6