Why Traditional Placement Testing Is Being Replaced by Multiple Measures
December 2012, Volume 25, Number 12
By Brad Bostian
Traditional placement testing currently places the majority of community college students into developmental classes. The tests are inexpensive, typically costing less than $10 per student, and they take about one hour and forty minutes to complete. The real costs of the tests come from their weakness as predictors of student performance in college, and the fact that only a minority of students will complete their prescribed developmental sequences. One solution might be to do what researchers and test publishers have long advised: use multiple measures to place students. Research going back decades has pointed the way to a multiple measures approach that includes the use of high school grades.
Problems with Placement Testing
Traditional placement tests, including ACT’s COMPASS and The College Board’s ACCUPLACER, are valid predictors of college performance. A student with maximum scores is more likely to succeed in college classes than a student with minimum scores. Even so, the correlation between placement tests and college grades is very small, meaning that the tests predict only a small percentage of the variance in college grades and success rates. Somewhat better are admissions tests, also from The College Board and ACT. The SAT and ACT are stronger predictors of college performance, with a standard error of measurement around 7 percent of the score range, compared with roughly 10 percent of the score range for placement tests. Admissions tests also take longer and are considerably more expensive. In part, placement tests have been designed to community college specifications in order to keep the admissions door as open as possible by reducing hurdles, including time and expense. However, there is convincing evidence that placing students based on quick, efficient tests alone constitutes a false economy: it saves minutes and dollars up front, but it costs students additional semesters and thousands of dollars, and it costs institutions completers and millions of dollars in the long run.
Placement testing rests on the theory that college readiness can be determined by examining students’ content knowledge. However, college readiness includes not only content knowledge, but other factors such as student expectations, motivation, finances, social and family situations, memory and intelligence, learning habits, and knowledge of the college environment. Placing students into college-level classes without assessing these other factors limits the effectiveness of that placement.
A third problem with placement testing is the system into which it places students. Developmental course sequences often involve three or more non-credit course levels, and fewer than 20 percent, sometimes fewer than 10 percent, of students ever complete these prescribed developmental course sequences. When the average student stays at a community college for only three semesters, it’s clear that the majority of students will not complete a credential.
Additional problems include the fact that placement tests use mostly multiple-choice questions that largely do not match the activities and assessments students are expected to perform in college classes. Placement tests do not diagnose specific strengths and weaknesses, so students are placed into semester-long experiences rather than having their particular needs addressed. In addition, most students do not study or review for the tests, and therefore underperform relative to their ability. This is one significant reason why K-12 education looks as though it isn’t doing its job in preparing students for college. In fact, many more high school graduates could succeed in college-level classes, if given the opportunity to enroll in them directly.
Better Ways of Assessing College Readiness
The perfect system of assessing overall college readiness would measure many factors, both content- and non-content-related. Fortunately, there is a measure that incorporates most of the relevant factors. Since the 1920s, researchers have studied the predictive power of high school grades—and admissions and intelligence tests—relative to college success. The earliest results were mixed, but it soon became apparent that the high school grade point average (GPA) is a better predictor of college success than tests. The GPA is also considered a multiple measure, since students earn a high GPA by exhibiting superior learning habits, having strong content knowledge, and maintaining high academic standards and expectations, as demonstrated across varied tests and assignments. Students attend school, do their homework, and comply with processes. Research has clearly confirmed that high school GPA is a superior predictor of college success relative to placement and admissions tests, yet so far, high school GPA has largely been left off the table.
Why GPA? Why Not High School Math and English Grades?
Counterintuitive as it may be, grades in specific subjects aren’t nearly as good at predicting college course success, even for courses in the same discipline. This is because college readiness is so much more than content readiness. As a multiple measure, the GPA averages the student’s response to a variety of learning situations and procedural demands, as well as various instructor teaching methods and styles. It is that average that creates the predictive power. In addition, while placement tests are fast and efficient, GPA is potentially free and immediate.
Research performed by Clive Belfield and Peter Crosta (2012), from the Community College Research Center, assessed the relationship between North Carolina public high school transcript information and data from those students who attended one of the 58 colleges in the North Carolina Community College System. By far, unweighted high school GPA was the superior predictor of college success when compared with tests or even specific aspects of the high school transcript. The results were so convincing that the state is now considering a proposal to use high school GPA to place students into college level classes.
Putting Multiple Measures into Practice
There are two main ways to construct a system of multiple measures. One would be to combine measures into a formula, and use that to place students. For example, a formula could add a placement test score, the high school GPA, and the student’s reading level. Multipliers could be used to adjust the score scales to match each other, or to weight one or more measures more heavily, and a cut score could be established. A cut score is a minimum score level required for a student to place at a higher level. With a formula, the overall score would be the most important factor. This would allow a student with a very high placement score to have a somewhat lower high school GPA, or vice versa, in order to reach the overall cut score for the formula. From a technical perspective, this requires a relatively sophisticated information system, and students who have all the relevant measures. Many students don’t have a high school transcript, for example, or have one from a different state, country, or decade.
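As a minimal sketch of the formula approach, the weights and cut score below are purely hypothetical (no state has validated these particular values); the point is only that multipliers rescale measures onto a common scale before the cut score is applied:

```python
def composite_score(placement_test, hs_gpa, reading_level,
                    w_test=1.0, w_gpa=20.0, w_reading=5.0):
    """Combine measures on different scales into one composite score.

    The multipliers (weights) are illustrative only: they rescale a
    0-100 placement test, a 0-4 GPA, and a grade-level reading score
    so that each contributes comparably to the total.
    """
    return w_test * placement_test + w_gpa * hs_gpa + w_reading * reading_level

CUT_SCORE = 150.0  # hypothetical minimum composite for college-level placement

def place_by_formula(placement_test, hs_gpa, reading_level):
    score = composite_score(placement_test, hs_gpa, reading_level)
    return "college-level" if score >= CUT_SCORE else "developmental"
```

Because only the composite matters, a high placement score can offset a somewhat lower GPA, and vice versa, exactly as described above.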
A simpler way to adopt multiple measures is to use one measure at a time, and this is the approach under consideration in North Carolina. If students have a high enough score on measure A, they can be placed at college level. If not, consider their scores on measure B, then C, and so on.
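The one-measure-at-a-time approach can be sketched as a simple waterfall; the measure names and thresholds below are hypothetical stand-ins, not the actual North Carolina benchmarks:

```python
# Measures are checked in priority order; a student who clears any one
# threshold places at college level. Thresholds here are illustrative.
MEASURES = [("hs_gpa", 2.6), ("act", 22), ("placement_test", 80)]

def place_by_waterfall(student_scores):
    """student_scores: dict mapping measure name -> score.

    Missing measures (e.g., no high school transcript on file) are
    simply skipped, and the next measure in the sequence is consulted.
    """
    for measure, threshold in MEASURES:
        if student_scores.get(measure, float("-inf")) >= threshold:
            return "college-level"
    return "developmental"
```

A student missing one measure, such as a transcript from another state or decade, still gets a decision from whichever measures are available, which is one practical advantage of this approach over a single combined formula.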
What GPA to Use?
Deciding what cut score to use is a political question, not a scientific one. When you raise the minimum score, you deny access to some students who would have been successful in college classes, and impede student progress by stacking up additional course requirements for those students. When you lower the score, students granted access to college-level classes may not be fully prepared for those classes, as they skip past developmental courses that may have helped them. However, recent research studies have shown that colleges are probably putting more students into developmental classes than will actually benefit from them academically.
Currently, approximately 60 percent of community college students take developmental classes, and up to 90 percent place into developmental classes when they take placement tests. The use of high school GPA can go a long way toward reversing those numbers.
In the North Carolina study (Belfield & Crosta, 2012), 52 percent of the students had high school GPAs of 2.5 or higher, and only 6 percent had high school GPAs of 3.5 or higher. Requiring a GPA of 3.5 would dramatically reduce the number of students benefiting from the proposed policy, while requiring a GPA of 2.5 would increase access while leaving college course success rates largely unchanged. Choosing the right level means balancing access with success in a way that limits developmental placement and increases program completion, without harming course success rates. This is possible because students with high grades in high school have already demonstrated academic success, but students who score high on placement tests have only demonstrated their content knowledge.
The North Carolina proposal will begin the placement process by seeing if students meet a minimum high school GPA threshold to place directly into college-level classes. Students falling below that level will be placed into college-level classes if they meet minimum scores on the ACT or SAT. Students not meeting either benchmark will be given a diagnostic placement test to determine what specific developmental courses or modules they need. The proposal will likely be presented to the State Board of Community Colleges in early 2013.
Community colleges have a unique but realistic chance to significantly reduce the time and cost of their students’ academic journeys simply by changing placement practices in a manner supported by research. A multiple measures approach using high school GPA for placement will not sacrifice educational quality, but it will further the completion agenda and help many more students to achieve their educational goals.
Belfield, C. & Crosta, P. (2012). Predicting success in college: The importance of placement tests and high school transcripts (CCRC Working Paper No. 42). New York: Community College Research Center, Teachers College, Columbia University.
Brad Bostian is Director, First Year Experience, at Central Piedmont Community College in North Carolina.
Opinions expressed in Leadership Abstracts are those of the author and do not necessarily reflect those of the League for Innovation in the Community College.
Elizabeth Smith wrote on 12/04/12 9:13 AM
This is a very interesting article. The statistic of only 10 - 20% of developmental students completing their developmental course sequence is new to me. I believe the literature put out by the Lumina Foundation has that statistic at 25%. Has it really dropped to as low as 10%? Is this a nationwide statistic? I admit I am skeptical about using high school GPAs. I have had countless developmental students who are shocked to be in a developmental course because they were in honors classes in high school and had a good GPA. These same students, however, have large vocabulary deficits, don't know how to avoid serious sentence errors or even that such a thing exists, and lack the critical thinking skills needed to make a logical argument.
In addition, I believe looking at length of time spent in developmental courses as the cause of low completion rates (whether that rate is 25%, 20% or 10%) is a false cause fallacy. For many semesters, I've had my students write about why students drop out of college. Finances, work scheduling conflicts, health issues, family problems, and immaturity are the reasons that top the list. Never have I had students list the reason students drop out of college as having to enroll in six week, eight week or sixteen week developmental courses.
Tyler Beamer wrote on 12/04/12 9:41 AM
I agree that using multiple measures to assess a student's academic needs is an important task. Most colleges and universities use GPA, SAT, references, etc., to determine the quality of a student. However, using one component such as a high school GPA is not an acceptable way to measure whether a student has math or reading skills. Using a high school GPA alone to measure college readiness is actually the opposite of multiple measures. Just because a high school student has a 2.5 cumulative GPA doesn't guarantee that they passed any of their high school math or reading courses with a grade of even a C or better. Students take many vocational and elective courses over a high school career. In the senior year, many students take "fluff" courses such as weightlifting or home economics that factor into their cumulative GPA, in most cases raising it. Oftentimes high school students go a year or more between their last math and reading class and the start of their college classes. Even if a student made a B in Algebra, how much do they remember a year or more later, if anything? Also, teachers grade vastly differently from school to school, or even within the same school. Some teachers give lots of participation or notebook grades that inflate students' grades, while others grade based on what students actually know or can produce. If you are going to use a GPA to place students out of math and reading developmental courses, then we should look at the GPA in the math and reading courses alone, not the cumulative GPA. Developmental education gives students the basic skills that they need for curriculum classes. If you place students out of developmental courses when in reality they need them, then you are doing nothing but setting them up for failure.
Trisha Miller wrote on 12/05/12 5:35 AM
I think this is a great idea. It is amazing that it has taken this long to head in this direction.