Matt's Latest SAT/ACT News Update
Matt O'Connor
Jun 19, 2025
The College Board has made significant changes to the SAT in recent years, in part to compete more effectively with its rival, the ACT. The changes include making the test shorter (from three hours to two hours and 14 minutes), offering more time per question, and making test forms more secure (through computer-administered adaptive testing). But has the SAT become easier, and therefore perhaps a less useful tool for gauging student ability? An executive at the Classic Learning Test, an emerging rival to the SAT and ACT, says it has.
[Excerpts]
As the policy director for the Classic Learning Test (CLT), I’ve had dozens of conversations with lawmakers across the country about college entrance exams over the last year. Surprisingly, the topic that has drawn the most intense scrutiny has not been the CLT: It’s been changes made to the SAT in 2024 (and similar changes to the ACT being implemented right now).
First, most lawmakers are surprised to learn that the tests change at all. They are then flabbergasted to learn what the most recent changes to the SAT were.
The most noticeable changes were to the structure of the exam. The paper exam was scrapped, and in its place the College Board implemented a computer-based test that is adaptive, meaning students are served easier or harder questions in later portions of each section based on their early performance.
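To make the mechanics concrete, here is a minimal sketch of two-stage adaptive routing of this general kind. The module sizes, routing threshold, and scoring rule are illustrative assumptions, not College Board specifications.

```python
# Illustrative two-stage ("multistage") adaptive routing. All numbers and
# names are assumptions for this sketch, not College Board specifications.

def fraction_correct(answers: list[bool]) -> float:
    """Fraction of questions answered correctly in a module."""
    return sum(answers) / len(answers)

def route_second_module(first_module: list[bool], threshold: float = 0.6) -> str:
    """Serve a harder or easier second module based on early performance."""
    return "harder" if fraction_correct(first_module) >= threshold else "easier"

# Example: 15 of 22 first-module questions correct (~68%) routes the
# student to the harder second module.
print(route_second_module([True] * 15 + [False] * 7))  # -> harder
```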
But while these changes were noticeable, they were not the most noteworthy. Many state exams are adaptive, and adaptive testing has been studied by psychometricians for decades. The way adaptive testing was implemented in the new SAT, though, caused eye-popping ripple effects—for those who were looking, that is.
The College Board notes on page 13 of its Digital SAT Suite of Assessments technical framework that two of the primary goals in changing the exam were to make it shorter and to give students more time per question. To make this happen in the new “Reading and Writing” section of the test, they shortened reading passages from 500-750 words all the way down to 25-150 words, or the length of a social-media post, with one question per passage. Their explanation is that this model “operates more efficiently when choices about what test content to deliver are made in small rather than larger units.”
I beg to differ. Given the challenges professors are now experiencing with students’ college-level literacy and attention spans, one might counter that the ability to sustain attention to and analyze extended texts on complex subjects, including ones a student may not find immediately entertaining, should indeed be a prerequisite for college. And an objective, consistent standardized exam is a valuable means of measuring that ability, especially given the rise of rampant high-school grade inflation.
But, rather than hold students to a clear and rigorous standard, the College Board is catering to students’ declining performance and social-media-induced attention-control issues.
This extends to the changes made to the new SAT math section as well. The College Board now serves test-takers fewer questions but has not reduced the section’s time correspondingly. Students taking the post-2024 SAT now have 1.6 minutes per question, compared to 1.3 minutes on the 2015-2024 SAT. (The ACT and CLT provide 1.1 minutes per question.) Additionally, a calculator can now be used for the entirety of the SAT math section.
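As a back-of-envelope check on those per-question figures (the question counts and section lengths below are my assumptions, not numbers from the article):

```python
# Minutes per question, digital vs. pre-2024 SAT math. Question counts and
# section lengths are assumptions for the sketch, not from the article.
digital = 70 / 44   # assumed 44 questions in 70 minutes -> ~1.59, cited as ~1.6
old     = 80 / 58   # assumed 58 questions in 80 minutes -> ~1.38, cited as ~1.3
print(f"digital: {digital:.2f} min/question, old: {old:.2f} min/question")
```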
It’s hard to predict the extent to which these changes may decrease the rigor of the SAT math section. However, they comport with a trend of more than 15 years. Researchers at the University of Cincinnati trained an AI program to solve SAT math questions going back to 2008 and found that the test has been getting easier by about four points per year.
Finally, the optional essay was eliminated completely.
These changes result in a measurably different test. Consider this: the new SAT Reading and Writing section correlates with the pre-2024 SAT Reading and Writing section at a rate of only 0.85 to 0.86, according to the College Board’s internal concordance in its technical framework. Meanwhile, the ACT’s “Reading” and “English” sections and the CLT’s “Verbal Reasoning” and “Grammar/Writing” sections correlate with the pre-2024 SAT’s Reading and Writing section at rates of 0.884 and 0.9, respectively. In other words, the ACT and CLT correlate with the old SAT better than the current SAT does.
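For readers unfamiliar with the statistic: a correlation coefficient measures how consistently two measures rank the same students, with 1.0 being perfect agreement, so a drop from 0.9 to 0.85 is a real loss of agreement. A toy computation with fabricated scores:

```python
import numpy as np

# Fabricated section scores for five students, for illustration only.
old_rw = np.array([520, 580, 640, 700, 760])  # pre-2024 Reading/Writing
new_rw = np.array([560, 550, 660, 680, 770])  # post-2024 Reading and Writing

# Pearson correlation: how consistently the two sections rank students.
print(round(np.corrcoef(old_rw, new_rw)[0, 1], 3))
```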
This may change for the ACT, however. The company began implementing several significant changes in April of this year, which seem to mirror the new SAT.
For the most part, the content-level changes made to the SAT have garnered little pushback. Concerns that have been raised have focused primarily on the test being computer-based rather than paper-based. A likely reason for this seeming indifference is that, well, what can anyone do about it?
The College Board and ACT have lobbied state lawmakers for decades to secure a monopoly or duopoly position in every state. Besides the tests’ usually optional use for college admission (though 80 percent of applicants still take the exams, and usually only low-performing test-takers choose to withhold scores), test scores are often tied to state-funded scholarships, required for high-school graduation, used to fulfill federal K-12 testing requirements, and more. By setting policies that disallow competition, lawmakers mistakenly tie a consistent mandate to an exam that changes dramatically.
One might hope that America’s two biggest college-level testing companies would use their position of power to push for higher standards—competing to be the exam that best reveals exceptional students. Instead, the opposite is the case.
As test-optional college-admission policies have proliferated, the College Board and ACT seem to have reacted with a bit of panic. Rather than offer a consistent standard of academic excellence, these companies are competing to offer the least unpleasant product to 17-year-olds.
“If we’re launching a test that is largely optional, how do we make it the most attractive option possible?” Priscilla Rodriguez, College Board senior vice president of college readiness assessments, told Chalkbeat in an article about the changes. “If students are deciding to take a test, how do we make the SAT the one they want to take?”
It is our hope at CLT that this unfortunate reality is set to change in favor of rigor and merit.
Authors of a new "validation review" of the SAT claim that their research indicates that there is a "hidden bias in college admissions tests."
[Excerpts]
At first glance, calls from members of Congress to restore academic merit in college admissions might sound like a neutral policy.
In our view, these campaigns often cherry-pick evidence and mask a coordinated effort that targets access and diversity in American colleges.
As scholars who study access to higher education, we have found that when these efforts are paired with pressure to reinstate standardized tests, they amount to a rollback of inclusive practices.
A Department of Education letter sent to congressional offices on Feb. 14, 2025, stated that it is “unlawful for an educational institution to eliminate standardized testing to achieve a desired racial balance or to increase racial diversity.” The letter also claimed that the most widely used admissions tests, the SAT and ACT, are objective measures of merit.
In our recent peer-reviewed article, we analyzed more than 70 empirical studies about the SAT’s and ACT’s roles in college admissions. Our work found several flaws in how these exams function, especially for historically underserved students.
Our research shows that while these tests are statistically reliable – that is, they produce consistent results for students across subjects and during multiple attempts under similar conditions – they are not as valid as some argue.
High school grade-point averages are typically better predictors of students’ success in college than either test.
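A toy simulation can make both points concrete: the reliability-versus-validity distinction and the GPA comparison. Every loading and noise level below is invented for illustration; the point is only that a test can produce very consistent scores across attempts (high reliability) while predicting college performance less well than years of grades do (lower validity).

```python
import numpy as np

# Toy simulation of reliability vs. validity. All effect sizes and noise
# levels are invented for illustration.
rng = np.random.default_rng(0)
n = 10_000

ability   = rng.normal(0, 1, n)   # what admissions would like to measure
privilege = rng.normal(0, 1, n)   # prep access, schooling quality, etc.

def sat_attempt():
    # Assumed: scores load partly on ability and partly on privilege.
    return 0.6 * ability + 0.5 * privilege + rng.normal(0, 0.3, n)

attempt_1, attempt_2 = sat_attempt(), sat_attempt()
hs_gpa      = 0.8 * ability + rng.normal(0, 0.4, n)  # years of coursework
college_gpa = 0.8 * ability + rng.normal(0, 0.5, n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("reliability (attempt 1 vs. 2):", corr(attempt_1, attempt_2))   # high, ~0.87
print("test validity (vs. college GPA):", corr(attempt_1, college_gpa))  # lower, ~0.61
print("HSGPA validity (vs. college GPA):", corr(hs_gpa, college_gpa))    # higher, ~0.76
```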
In addition, the tests are not equitable or similarly predictive for all students, particularly across gender, racial and socioeconomic lines.
That is because they systematically favor those with more access to high-quality schooling, stable socioeconomic conditions and opportunities to engage with test prep coaches and courses. That test prep can cost thousands of dollars.
In short, both tests tend to reflect privilege more than potential.
For example, students from higher-income households routinely outperform their peers on the ACT and SAT.
This isn’t surprising, considering wealthier families can afford test prep services, private tutoring and test retakes. These advantages translate into higher scores and open doors to selective colleges and scholarship opportunities.
Meanwhile, students from low-income families often face challenges – such as less experienced instructors and less access to high-level science, math and advanced placement courses – that test scores do not factor in.
In our published review, we found that these disparities aren’t incidental – they’re systemic. Our review revealed long-standing evidence of bias in test design and differences in average scores along lines of race, gender and language background.
These outcomes don’t just reflect academic differences; they reflect inequities that shape how students prepare for and perform on these tests.
We also found that high school GPA outperforms standardized tests in predicting college success. GPA captures years of classroom performance, effort and teacher feedback. It reflects how students navigate real-world challenges, not just how they perform on a single timed exam.
For many students, particularly those from historically marginalized backgrounds, grades can offer a better indication of how prepared they are for college-level work.
This issue matters because admissions decisions aren’t just technical evaluations – they are value statements. Choosing to center test scores in admissions rewards certain kinds of knowledge, experiences and preparation.
It’s worth noting that research on testing often focuses on elite institutions, where standardized test scores are more likely to be used as high-stakes screening tools. Our systematic review found that, even in elite schools, the tests’ ability to accurately predict college academic performance is often limited (moderate in statistical terms).
But most college students attend state universities, public regional universities, minority-serving institutions, or colleges that accept most applicants. Our study found that at these institutions, standardized test scores are even less likely to predict how students will do.
This may be because state universities and public regional universities are more likely to serve highly diverse student populations, including older, part-time and first-generation students and those who are balancing work and family responsibilities.
An op-ed in Forbes asks "Is the College Board a Non-Profit or a $1.6 Billion Testing Monopoly?"
[Excerpts]
Founded in 1900 to democratize college access, the College Board now straddles an uncomfortable line between its nonprofit mission and corporate-scale revenues. While technically structured as a member organization—with 6,000 high schools and colleges paying annual dues—its financial reality tells a different story.
The math reveals a stark imbalance: cumulative membership dues since the organization's founding pale next to the billions reaped from SAT and AP exams since 1990. This reliance on testing revenue has reshaped the organization's priorities, transforming it from a collaborative membership alliance into a de facto corporate entity with a testing monopoly.
The Profit Playbook: Testing and Underpaid Labor
Two strategies underpin the College Board’s financial dominance. First, its testing empire operates like a well-oiled machine. The SAT suite—taken by 2 million students annually—generates $200–300 million from base fees and ancillary charges like $15 score reports. Meanwhile, the Advanced Placement (AP) program, which administered 5 million exams at $99 each in 2025, rakes in nearly $500 million, supplemented by millions from course materials and teacher training. Even middle schoolers are monetized through the PSAT 8/9, an exam for 13-year-olds that locks schools into multi-year testing contracts.
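The AP figure is straightforward arithmetic, and the SAT base can be sanity-checked the same way (the registration fee below is my assumption, not a figure from the article):

```python
# Back-of-envelope revenue checks. AP numbers are from the text; the SAT
# registration fee is an assumption for the sketch.
ap_revenue = 5_000_000 * 99   # $495,000,000, i.e. "nearly $500 million"
sat_base   = 2_000_000 * 68   # assumed ~$68/test -> $136,000,000 before
                              # the ancillary charges noted above
print(f"AP: ${ap_revenue:,}  SAT base: ${sat_base:,}")
```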
Second, and perhaps most ethically fraught, is its reliance on underpaid educators. Teachers grade AP exams for about $30 per hour—less than half the rate of private tutors—similar to the honorarium paid to SAT and AP proctors. Schools generally pay the cost of proctoring the PSAT.
School districts continue paying teachers’ salaries during school-based testing, such as the PSAT and AP exams. The College Board separately compensates AP proctors, and some districts retain those funds, arguing that the instructors are already on the payroll. For districts, the arrangement carries a hidden cost: they receive no reimbursement for salary expenses during proctoring hours, effectively subsidizing the administration of these exams. Compounding the issue is the significant loss of classroom instructional time, a resource critical to student outcomes, which adds an often overlooked toll on both districts and learners. This labor model saves the College Board millions of dollars annually, in effect subsidizing its profits through public school budgets.
The College Board has also lobbied aggressively to preserve SAT mandates. New America notes that the College Board spends millions each year to lobby state and federal representatives to maintain its market position. According to ProPublica, CEO David Coleman’s compensation package exceeds $2.5 million, triple the average for nonprofit leaders, a figure that raises questions about whom the organization truly serves.
Reform or Revolt? The Path Ahead
The College Board must undergo a radical transformation to reclaim its nonprofit mission. Possible reforms include aligning executive pay with nonprofit norms (under $500,000) rather than corporate benchmarks; ending testing for students younger than 16, which would free schools from policies that can be costly and may not fit students’ developmental needs; and fairly compensating proctoring and grading labor.
Until these reforms materialize, the organization’s 125-year legacy will remain shadowed by a question at the heart of its identity: Who benefits most—students or shareholders?
The College Board’s nonprofit status hinges on a delicate balance—one increasingly tilted toward Wall Street, not classrooms. As education evolves, stakeholders must demand accountability from an organization that shapes millions of futures.
Akil Bello takes a deep dive into how colleges differ when it comes to calculating an applicant's high school GPA:
[Excerpts]
In the last few weeks, I attended and presented at both the NJ and NY Association of College Admission Counseling conferences, where I spoke with lots of colleagues and friends about the admissions process. One consistent theme that emerged is that while the inputs from students are similar (classes, grades, sometimes APs and scores, sometimes essays and recommendations), the way colleges consider them is vastly different. This may be one of the biggest misunderstandings in the national conversation about admissions. We’ve all seen the stories of the student with the seemingly high GPA getting rejected from multiple schools. We’ve probably even clicked the clickbait and empathized with that student. The problem is that this narrative is just wrong.
There is no such thing as “a GPA.”
OK, let me be clearer. There is almost no way for a student to know what GPA is being used by each college they apply to. This is one of the consequences of America not having a nationalized education system. Every institution gets to set its own curriculum, choose how it considers information, and determine what’s important. When I compared notes with a few friends who also worked as admissions readers, almost everything about how the various institutions we worked for read files was different: reading processes, how we look at files, how grades were recalculated, how essays and recommendations counted, and even how long we were expected to take with a file. The only thing in common was that we all looked carefully at the high school curriculum (not GPA . . . curriculum). So as you read this and think about how un-standardized high schools and colleges are, remember that America chose diversity (states’ rights, etc.) over standardization. We don’t have a standardized K-12 system and don’t have a standardized higher education system, yet we expect the process of transitioning between the two to be standardized.
Anyway, let me get off my soapbox and back to the nitty gritty. You don’t/can’t know your GPA!
How Do Colleges Calculate GPA?
Surprise! There is no industry standard on what is included in a GPA, how to convert from numeric to letter grades, what classes count as advanced (AP, IB, honors, etc.) and should be given extra weight, or even how much weight to give advanced classes. Each high school makes its own decision about this, and each college makes its own decision.
Some colleges, when recalculating GPA, consider 9th grade through the first half of 12th grade. Some will only consider classes from 10th and 11th grades. Some will give extra weight to AP, IB, dual enrollment, honors or other “advanced” classes; others will not.
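A short sketch of how the same transcript yields different "GPAs" under two recalculation policies; both policies below are invented, but they mirror the kinds of variation just described:

```python
# Same transcript, two invented recalculation policies, two different GPAs.
# Each course: (grade points on a 4.0 scale, grade level, is_advanced).
transcript = [
    (4.0,  9, False),  # 9th-grade English, A
    (3.0, 10, True),   # 10th-grade AP World History, B
    (4.0, 11, True),   # 11th-grade AP Calculus, A
    (3.0, 12, False),  # 12th-grade fall elective, B
]

def college_a(courses):
    """Counts grades 9-12 and gives advanced classes a full extra point."""
    pts = [grade + (1.0 if advanced else 0.0) for grade, year, advanced in courses]
    return sum(pts) / len(pts)

def college_b(courses):
    """Counts only 10th and 11th grade, with no extra weight."""
    pts = [grade for grade, year, advanced in courses if year in (10, 11)]
    return sum(pts) / len(pts)

print(f"College A sees: {college_a(transcript):.2f}")  # 4.00
print(f"College B sees: {college_b(transcript):.2f}")  # 3.50
```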
What’s the upshot?
The only way to be sure of how the college views your GPA is to take all the hardest classes in the country and get all As (which obviously no one can do). Otherwise, you can try to learn how every college you’re applying to recalculates GPA and evaluates rigor (not impossible but really difficult). Neither of these is likely a productive use of time.
My advice in this and in all things admissions is that students should focus on controlling what they can control, doing the best they can in school, taking courses that interest and challenge them, and participating in activities that interest them and help them grow in understanding themselves and possible careers. If a student chooses to focus on the most selective/rejective colleges, then they have to accept that there is not one magical thing that will guarantee admission.
In an article titled "Colleges Know How Much You’re Willing to Pay. Here’s How", The New York Times examines the increasingly common practice of colleges using sophisticated data analysis to determine how to extract maximum tuition payments from an enrolling class.
[Excerpts]
Last month, four Republicans from the House and Senate sent letters to the presidents of Ivy League schools demanding years of data about how they decide what to charge.
These institutions, the letters said, “establish the industry standard for tuition pricing, creating an umbrella effect for all colleges and universities to justify higher tuition costs than they could otherwise charge in a competitive market.”
In fact, no more than a few dozen other schools can command Ivy League prices from a high percentage of their students and families. Every other private institution — and most public ones — competes brutally on price up until the May 1 reply date each year (and sometimes afterward). The average tuition discount among private colleges is now over 56 percent for first-time, full-time students.
Those discounts — which often come in the form of merit scholarships — can make a six-figure difference in what families pay over four years. This aid is different and often less predictable than the need-based kind that depends on a family’s income and assets.
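To see how a discount becomes a six-figure difference over four years, a quick illustration (the sticker price is an assumption):

```python
# Four-year value of a merit discount. The $65,000 sticker tuition is an
# assumption for illustration; the 56% figure is from the article.
sticker, discount_rate = 65_000, 0.56
annual_savings = sticker * discount_rate              # $36,400 per year
print(f"${annual_savings * 4:,.0f} over four years")  # $145,600: six figures
```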
The driving force behind college pricing is not some evil genius at Harvard or Penn. Instead, it’s a series of algorithms developed quietly over decades by consulting firms operating just out of sight. The two biggest — EAB and Ruffalo Noel Levitz, or RNL — are owned by private equity firms.
To understand how all this happened — and how things really work today, for families and the financiers hoping to make money off this opaque system — we need to turn the clock back 50 years to when an unlikely character took over the admissions department at Boston College and upended everything.
Jack Maguire attended Boston College as an undergraduate and stuck around for a Ph.D. in physics. Not long after earning the degree, he took up a post as an assistant professor in 1968.
Today, Boston College has a $4.1 billion endowment and rejects 87.5 percent of applicants. But when Mr. Maguire started working there, it was a struggling commuter school running a deficit.
When Mr. Maguire examined the college’s data, he smelled opportunity. What if the school gave out precision-guided discounts based on the quality of the applicant even more than it did based on what students could afford? Turns out, when you do that, more of the above-average students say yes to the offer.
As new patterns emerged, Mr. Maguire fed data into computers. The machines had additional suggestions. Experiment and iterate, repeat until solvent.
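In highly simplified form, "experiment and iterate" amounts to a yield model plus a revenue search: estimate how likely each admitted student is to enroll at each discount level, then offer the discount that maximizes expected net tuition. Everything in the sketch below, from the logistic form to its coefficients, is invented for illustration; it is not Maguire's or any firm's actual model.

```python
import math

# Toy "financial aid leveraging": per applicant, pick the discount that
# maximizes expected net tuition. All parameters are invented.
STICKER = 60_000

def enroll_probability(discount: float, student_quality: float) -> float:
    """Toy logistic yield model: bigger discounts raise the odds of
    enrolling; stronger students (more competing offers) lower them."""
    z = -1.5 + 4.0 * (discount / STICKER) - 0.8 * student_quality
    return 1 / (1 + math.exp(-z))

def best_discount(student_quality: float) -> tuple[int, float]:
    """Search discount levels for the highest expected net revenue."""
    revenue, discount = max(
        (enroll_probability(d, student_quality) * (STICKER - d), d)
        for d in range(0, STICKER + 1, 1_000)
    )
    return discount, revenue

# A sought-after student (more options, lower baseline yield) is offered a
# larger "merit" discount than a less-recruited one.
print(best_discount(student_quality=2.0))   # larger optimal discount
print(best_discount(student_quality=0.0))   # smaller optimal discount
```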
Word of Mr. Maguire’s results spread quickly in the clubby world of admissions. In 1983, having helped turn Boston College around, he and his wife, Linda Cox Maguire — then the director of admissions at nearby Simmons College, now Simmons University — started their own consulting firm, Maguire Associates.
Within 10 years of the founding of Maguire Associates, the predecessor firms for Ruffalo Noel Levitz and EAB were getting off the ground. They did what Mr. Maguire had done at Boston College, but they developed other tools, too, and became soup-to-nuts school whisperers.
One or both can help a college buy hundreds of thousands of names of teenagers who have taken the ACT or SAT, market to them across various media, improve retention once they arrive on campus and raise money from alumni more effectively.
For many years, the firms described the Maguire-esque part of their offerings as “financial aid leveraging.” Eventually, worried that the term might evoke images of the firms using money as a crowbar to wedge themselves into teenagers’ brains and parents’ pocketbooks, they rebranded their service as the more benign “financial aid optimization.”
Maguire Associates never grew anywhere near as big as EAB and RNL, and those two juggernauts have not been shy about the zeal with which they have made their industry more like the Wall Street firms that invest in them.
“I actually think of financial aid optimization as a form of arbitrage,” Madeleine Rhyneer, whom EAB refers to as its “dean” of enrollment management, said on a company podcast about how admissions offices “actually” work. “Really, it is. It’s like working in the financial markets.”
Eileen K. O’Leary spent 34 years in the financial aid trenches at Stonehill College, outside of Boston, before retiring in 2017. There, she purchased consulting services from Brian Zucker [founder and chief executive of Human Capital Research Corporation, which has been competing with EAB and RNL for years]. Over time, she felt a growing amount of pressure to offer bigger discounts to more people who didn’t need them. After all, there were usually competitors down the road with a different consultant whispering in their ears, urging them to cut the price further. Then, more families realized they could play schools against one another.
“I was old school, and I thought financial aid was for improving access, but it no longer was,” she said. “It was a business model.”