Prospective students and their parents eat lunch and talk following a tour of campus at Duke University in Durham, N.C., in 2006. Photo by Gerry Broome/Associated Press.

It’s mid-winter, your college applications have been submitted, and you’ll soon be pacing the floor waiting to learn where you have been accepted. But will you emerge from college four years from now better off than when you started? Does college help turn students into scholars, with greater expertise, maturity, and cognitive abilities? How effective is college in helping students prepare for work or graduate school?

Put simply: How do we know how much we learn in college?

If you search for answers to these questions today, prepare to be disappointed. Popular college rankings such as U.S. News & World Report’s are based on factors such as subjective judgments of schools’ reputations and on the difficulty of gaining admission. Rarely if ever are rankings based on direct, value-added assessments comparing how well students perform when they graduate college with how they performed when they first enrolled.

It may seem odd that our colleges and universities—which study complex topics ranging from the spin on subatomic particles to the precise time of the Big Bang—would have so little data with which to assess their own effectiveness. What might cause these institutions to be so complacent, and so reluctant to pursue information that would help them understand their own impact on students?

Some colleges may fear that the results will prove to be embarrassing. Some may argue that college skills such as writing proficiency cannot be measured accurately (even though schools assign their students grade point averages with three digits of numerical precision).

But the biggest reason why college effectiveness doesn’t get measured is that schools, policy makers, parents, and students take for granted that undergraduates’ skills improve during college. This assumption of improvement may seem intuitive, but it is not backed up by much in the way of evidence. The research that has been done at the K-12 level is instructive. One such report, by the Coalition for Evidence-Based Policy and based on research conducted by the Institute of Education Sciences (IES) within the U.S. Department of Education, found that more than 90 percent of the interventions that schools adopted to improve learning outcomes for their students showed no evidence of effectiveness. In another set of studies, described in the book Academically Adrift, more than 45 percent of college students showed no improvement in critical thinking during their time in college.

These sorts of studies should serve as a wakeup call—if schools aren’t measuring student learning, we cannot know whether students are actually learning.

Along with our colleagues, we recently published the results of a nine-year study designed to answer a simple question: Do students finishing college write any better than they did when they first enrolled? Of course there is more to college than writing, but we studied writing because it is one skill that students, schools, and employers see as critically important. We selected a small private university in the Southwest as our test case, and we randomly sampled students for testing. We modeled our study as closely as possible on randomized clinical trials, the same standard used to determine whether new medicines have their intended health benefits. We tested students both cross-sectionally (comparing first-year through fourth-year students on a single day) and longitudinally (tracking specific students over the course of their undergraduate years).

There was good news. We found that students improved their writing scores, as judged by expert assessors of writing who were blind to the identities of the students and to the purpose of the study. That improvement was approximately 7 percent from the first to the fourth year of college, a statistically significant increase. The same degree of improvement was found in both persuasive and expository writing, for both the cross-sectional and longitudinal data, for both male and female students, and for both humanities/social science majors and engineering/natural science majors.

Our findings, while showing that learning is happening, also suggest an opportunity for improvement: Now that we have a benchmark, we can test new instructional interventions to see how much they improve upon (or fall short of) the status quo. While a 7 percent improvement is not trivial, we would hope that it is possible to do better. However, in order to know whether new programs and interventions are actually leading to improvements, schools need to engage in value-added assessment of their students. Without such testing, we will be navigating blind.

College administrators who believe that studies such as ours are too expensive and time-consuming to conduct should think again. Universities spend countless hours and resources developing curricular requirements, establishing tutoring centers, and otherwise attempting to improve undergraduate instruction. But they typically fail to establish a formal assessment system to determine whether those interventions are effective.

Studies like ours are simple and inexpensive compared with other common initiatives on campus. And such studies are the only way we can know whether schools are accomplishing their goals.

We hope that universities will begin testing their entering students, not just on their writing skills but on other critical skills as well, so that four years down the road they can see whether their teaching has made a difference. When you bother to collect the data, before-and-after-college comparisons are not that hard to make, and they can make a big difference.


James R. Pomerantz, a professor of psychology at Rice University, has served as dean of social sciences at Rice and as provost and acting president at Brown University, where he first became interested in value-added assessment. Daniel Oppenheimer, a professor of psychology and management at UCLA, has taught classes on higher education reform, won multiple teaching awards, and published numerous peer-reviewed articles on barriers to student learning and interventions to improve student learning. Their academic article on their study is available here.
