At the beginning of March, on the eve of the first SAT date of the year, the Wall Street Journal ran an article entitled, “The Truth about the SAT and the ACT.” Naturally the headline grabbed me, and I read through the piece.
Written by Drs. Nathan Kuncel and Paul Sackett, two psychologists at the University of Minnesota, the article argues that SAT and ACT scores are as important as ever. According to the authors, test scores are accurate predictors of students' success, both in college and in their future careers. As such, the authors argue that the high importance placed on test scores by college admissions officers is justified.
To prove their point, the authors seek to debunk a number of myths that have been raised about standardized exams in recent years, one of which was the following:
“Myth: Test Prep and Coaching Produce Large Score Gains.”
Naturally, this one made my eyebrows go up. I read on. To justify their claim, Kuncel and Sackett cite a study, coauthored by Sackett, in which 4,248 high school students were sampled, and the average score increase due to tutoring and coaching was, according to the authors, “14 points on the math test and 4 points on the verbal.” That is, essentially nil.
Because these numbers are so at odds with my own experience of helping students achieve far more substantial score increases, I needed to read the researchers' paper myself and examine their methodology. The article in the WSJ provided a link to their study (I'm linking it again here), which was published in the academic journal Personnel Psychology. I clicked, downloaded, and read the PDF of the journal article. Here's what I found.
An admirable study, with limitations
In his paper (coauthored with Brian Connelly of the University of Toronto), Sackett candidly acknowledges the difficulties in measuring the effectiveness of a treatment such as SAT preparation. There are so many confounding variables—students who pursue test prep tend to be highly motivated, they tend to already be high performers in school, they tend to come from families that strongly value education, etc. Sackett rightly acknowledges that any or all of these variables could plausibly account for higher test scores, rather than the test prep itself. So, to remove some of this "self-selection bias," Sackett uses a technique called "propensity scoring," in which he matches like students to other like students. The ideal pairing would be something like the following:
Mark A and Mark B attend the same high school, live in the same neighborhood, attend the same classes, have similar GPAs, and their parents earn similar incomes and have similar levels of education. Mark A undergoes private SAT tutoring, while Mark B does not. Let’s look at the increases in their test scores before and after Mark A undergoes tutoring, and see if there is any significant difference between the two.
This is an idealized account, but Sackett's "propensity scoring" attempts to match students who come from very similar backgrounds, like the two Marks, in an attempt to control for other confounding variables. In doing so, Sackett is hoping to show that any difference in score increases must be solely due to the SAT tutoring. While this methodology is certainly more rigorous than just taking any old random sample of students who did some preparation and comparing them against those who didn't, there were issues with the study that led me to find its conclusions less than convincing.
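To make the idea concrete, here is a minimal sketch of propensity-score matching in Python. All of the students, scores, and gains below are invented for illustration; in a real study, each propensity score (the estimated probability that a student receives tutoring) would come from a model, such as a logistic regression, fit on background variables like GPA and family income.

```python
# Toy propensity-score matching: pair each tutored student with the
# untutored student whose propensity score is closest, then average
# the difference in score gains across the matched pairs.
# (All data below are hypothetical.)

# Each entry: (student name, propensity score, SAT score gain)
tutored = [("Mark A", 0.81, 60), ("Jane A", 0.65, 90), ("Lee A", 0.40, 30)]
untutored = [("Mark B", 0.80, 10), ("Jane B", 0.62, 20),
             ("Lee B", 0.45, 5), ("Sam", 0.10, 0)]

def match(treated, controls):
    """Greedy nearest-neighbor matching on propensity score,
    without replacement: each control is used at most once."""
    pairs = []
    available = list(controls)
    for name, p, gain in treated:
        best = min(available, key=lambda c: abs(c[1] - p))
        available.remove(best)
        pairs.append(((name, gain), (best[0], best[2])))
    return pairs

pairs = match(tutored, untutored)

# Average gain difference across matched pairs: an estimate of the
# effect of tutoring on those who received it.
att = sum(t_gain - c_gain for (_, t_gain), (_, c_gain) in pairs) / len(pairs)
print(pairs)
print(att)
```

With these invented numbers, Mark A is matched to Mark B, Jane A to Jane B, and Lee A to Lee B, and the matched comparison attributes the remaining gap in score gains to the tutoring itself—exactly the logic Sackett's design relies on.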
No attempt to define “private tutoring”
Sackett distinguishes between students who underwent “private tutoring” and those who did not. However, he and his coauthors make no attempt to define what kind of tutoring program the students were exposed to, or for how long they were exposed to it. To my mind, this makes any conclusion about the effectiveness of tutoring seriously flawed. To illustrate why, compare the following two students:
- John receives private tutoring for thirty minutes the night before the SAT.
- Mary receives private tutoring twice a week for three months leading up to the SAT. Each of her lessons is ninety minutes long, for a total tutoring regimen of three hours per week. After each lesson, Mary is assigned a custom-tailored set of homework problems designed to specifically strengthen her weak areas. Every few weeks, Mary is given a full-length practice exam to complete on Saturday morning, timing herself to simulate real test conditions. This not only gives her practice taking the exam, but it also helps to motivate her as she begins to see the progress she’s making.
It should be obvious which of these two students will see a large score increase, and which will not. And yet, according to my reading of Sackett’s study, both John and Mary would get lumped together as having been “privately tutored.” Of course, John, seeing little or no increase, will bring down the average, leading to the faulty conclusion: “private tutoring realizes negligible gains.”
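A quick back-of-the-envelope illustration of that dilution effect, with entirely invented numbers: if most students counted as "privately tutored" received only token help like John's, even large gains from students tutored like Mary barely move the group average.

```python
# Hypothetical: eight students get a single cram session (gain ~0),
# two get a sustained tutoring program (gain ~100 points each).
gains = [0] * 8 + [100] * 2

# The group mean makes "private tutoring" look nearly worthless,
# even though intensive tutoring produced large gains.
avg = sum(gains) / len(gains)
print(avg)
```

The mean here is 20 points—small, despite two students gaining 100 each—which is the arithmetic behind the "negligible gains" conclusion when dissimilar tutoring regimens are pooled.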
A study I’d love to see
I would love to see a study that takes the type of tutoring program I prescribed for Mary—regular and frequent, with minimum ninety-minute sessions, and with sufficient support and practice for the student—and compares score increases for students who undergo that type of private tutoring against those who have no tutoring at all. I'd gladly endorse Dr. Sackett's methodology of propensity scoring, to match students of similar backgrounds and control for confounding variables. From my experience, I believe such a study would yield some very different results.
Paul King is a private tutor based in Manhattan. Over the years, he has coached nearly one hundred students for the SAT and the ACT, helping them to build intellectual and personal confidence, and to achieve their full potential.