Confronting Our Grades: A Reading Program Takes a Hard Look at Its Effectiveness

Authors: Grant J. Matthews, Rita Ferriter, Susan Godwin, Jennifer Lee-Good, and Michael Morsches
Learning Abstracts, Volume 25, Number 8, August 2022

In 2012, the developmental reading faculty at Moraine Valley Community College (MVCC) began to take an honest look at their students’ grades. This introspection followed key changes in department leadership and faculty membership. The initial analyses uncovered three pertinent facts: (1) developmental reading faculty were not passing as many students as they thought they were; (2) students in the lowest level of reading were not persisting into credit-bearing courses; and (3) there was severe grade compression among the passing grades (i.e., most of the assigned grades were Cs, with very few As awarded). These results prompted the faculty members to begin working with their department chair and dean to make sense of the data and to prioritize a coordinated response. After pondering what the grades they assigned actually meant and how well those grades predicted future college success, the faculty chose to respond to grade compression first. To facilitate the response, the program developed two new predictive models for estimating student success in the reading course sequence.

Overengineering and the Siphoning Effect

Given the limitations of traditional course and semester sequences, reading is a very complex skill to remediate in a community college setting (Vacca, 1997; Vacca & Vacca, 2008). Additionally, there are numerous approaches to college reading, with no clear consensus in the field on how best to bring students up to college-level reading skills in a semester or two (Dolainski & Griffin, 2011; Mather & McCarthy, 2016). Just as psychology and sociology suffered from “soft science” criticism (Kuhn, 1962; Storer, 1967), college reading departments may try too hard to operationalize a skill that is more art than science. Over the past few decades, the MVCC developmental reading program attempted to combine a holistic approach with a more detailed and defined parsing of the reading construct. In doing so, the faculty began to create reading courses with increasingly intricate assignments and grading schemes.

In a sense, the courses became overengineered: the gestalt of the reading process was itemized too closely, and the assessments (grades) attached to those components drifted away from a more natural and authentic process. In other words, the process of reading was compartmentalized to the extent that the natural progression of skill development, and the documentation of that development through assessment, was disrupted. Without question, the students who passed the course were hard-working and persistent, which is an important learning outcome. However, the grades they earned upon exiting the sequence did not fully acknowledge their gains or grant them access to progression, the more applicable learning outcome.

As students progressed through the exacting regimen of the reading sequences, they steadily lost minor points on various assignments, to the degree that by the latter stages of the semester, when more demanding and authentic reading tasks are required, students had often hemorrhaged enough points to make passing mathematically impossible. As one faculty member observed, “Despite becoming better readers, some students were unable to pass because of the point system.” This siphoning of points throughout the semester led the developmental reading faculty to a critical assessment question: Is a developmental course developmental in its own progression? More specifically, the faculty asked the following questions: What does an eventual A student look like at the beginning of a developmental reading course? Are the assignments thoughtfully engineered so that an A student earns A grades throughout the semester? Does that eventual A student grow stronger as the course continues? If too many assignments and associated points are allocated to the initial part of the journey, what separates these A students from those who never become A students?

Grade Compression

In 2012, less than 4 percent of the students who passed MVCC reading courses did so with a grade of A. Of the remaining successful students, 31 percent passed with a grade of B and 65 percent passed with a grade of C. In less than ten years, those numbers evolved to 18 percent As, 37 percent Bs, and 45 percent Cs. Although the working goal was to decompress the A, B, and C grades, the overall success rates—defined as students receiving an A, B, or C—rose from 40 percent to 60 percent from 2012 to 2022.

It is important to note that the developmental reading faculty agreed to address the compression of passing grades but initially thought it unwise to challenge the threshold between Cs and Ds, effectively deferring the stickier question of appropriate rates of course success and progression (only students receiving grades of A, B, or C pass the course and progress in the sequence). The faculty expressed concerns that if they challenged the C and D threshold, they might inadvertently engage in grade inflation by simply raising all grades to generate more A students or otherwise pass underprepared students to subsequent levels. Instead, they decided to investigate other aspects of their teaching practice to confront the grade compression challenge. They pursued three major strategies, some collaboratively as a unified team and others independently as improvements to their individual craft: (1) final exam and test alignment, (2) focus on learning versus results, and (3) classroom learning supports.

The MVCC developmental reading program consists of three pre-transfer levels, RDG 041, RDG 071, and RDG 091 (see Figure 1), with the lowest-level class (RDG 041) focused on the development of basic literacy skills for at-risk readers. Completion of RDG 091 opens the door to college-level coursework and is intended to provide a foundation for successful college reading.

Figure 1: MVCC Developmental Reading Sequence

Final Exam and Test Alignment

Perhaps the biggest departmental change involved a close look at the course finals. The initial grade compression data were used as a springboard to examine the point distribution on the course final exams. The faculty discovered that certain error types were weighted more heavily than others. After reflection, the faculty chose to redistribute points to avoid penalizing minor errors. This reflection included standardizing the grading on tests and redistributing points on the final exams. They also looked closely at the priorities within the program curriculum and aligned the final exam priorities accordingly. More points were awarded for correctly demonstrating a skill emphasized and practiced throughout the curriculum. The faculty next made efforts to norm the final exam grading process so that points were awarded more consistently on major skills. They developed norming sessions and rubrics for faculty to ensure consistent point awards to students. Ultimately, the norming of the exam grading positively impacted test scores. As one faculty member reflected, “There was greater emphasis placed on the strategies used with challenging texts; the process was weighted more heavily than the end product.”

Focus on Learning Versus Results

The emphasis on process versus end product was shared among the faculty. In activities ranging from exams to class assignments, they began to scrutinize the points awarded for answering questions correctly versus those awarded for working through the learning process. Instead of grading assignments strictly on point values, faculty looked at effort, improvement, and positive behaviors, such as resource-seeking and classroom engagement, to supplement their evaluations. A faculty member noted that they “needed to reevaluate how [they] were going to assess, grade, and support students in the process rather than [regarding] the end result of an assignment as being the bottom line.” These realizations and changes led to improved assignments and class activities with a more developmental and constructive application.

Classroom Learning Supports

One faculty member recognized that a large percentage of the final course grade rested on a single assignment that many students chose not to complete. As a result, she started to dedicate more time in class to the assignment, developed support materials, and substantially added to the structure of the assignment. The culminating assignment remained a major part of the overall grade, but the faculty member added more learning support to increase student success on the assignment and improve final grades. Another faculty member started to place more attention on helping students who missed assignments and leveraged technology tools to help students stay on track. By confronting the grade compression challenge and looking to improve grade distribution, the faculty members focused additional attention on improved teaching and classroom support practices.

Independently, all reading faculty members insisted that they never changed their grading practices; they did not pass students who did not earn passing grades. However, making changes to the final exam, awarding points for the learning process, and adding class learning supports resulted in increased student success rates. One faculty member noted, “Even though our original focus was the decompression of passing grades, it seems that all grades were impacted positively. I think students who were close to passing experienced greater opportunities for passing the course.” As the faculty began to envision a new process and look at grades differently, students were rewarded differently as well. This process also encouraged faculty to engage in more discussion and reflect on their colleagues’ suggestions and opinions as they moved along the journey.

Moving to Individual Grade Patterns and Hypothetical Discussions

Once the developmental reading faculty addressed the grade compression challenge, and in the process raised overall student pass rates by 50 percent relative to 2012 (from 40 to 60 percent), they also began to think about their individual overall retention rates and the subsequent success of their own passing students as those students progressed to the next level in the reading course sequence. To achieve this second level of analysis, the faculty collaborated with the department chair and dean to create a theoretical metric as a tool to ground themselves and establish a shared, consistent baseline for discussing and understanding student progression and success. The result was the hypothetical success rate (HSR), based on a composite formula (see Appendix A): Faculty multiplied their current overall average pass rate by the sum of the products of the average pass rate for each discrete passing grade and a hypothetical weighted indicator of future success corresponding to that passing grade. This hypothetical indicator awarded a current A student an 85 percent chance of succeeding at the next level, a B student a 75 percent chance, and a C student a 60 percent chance. Using the HSR, the faculty could predict how many students who entered their courses would not only pass the current class, but would also pass the subsequent sequenced course.

Broken down into its components, the HSR provides useful data for evaluation that can be run at the course, individual faculty member, or program level. The HSR formula starts with the average pass rate (APR) for the given level of review. The APR is simply the average rate at which students receive a passing grade over a given period of time, such as a semester. The APR is then broken into discrete pass rates for each of the passing grades. Just like the APR, the average A grades (AAG), average B grades (ABG), and average C grades (ACG) consist of the average rate at which students are awarded an A, B, or C grade over the same period as the APR. Each of these pass rates, the overall and the grade-specific, provides useful insight into the grade distribution and the overall average success of students in the course or program. However, the ongoing success of students beyond the current course or program into the sequentially related course or program is also vital.

To begin examining future success, the HSR next uses a probability-based hypothetical progression success calculation. The progression success is hypothetical both because future performance cannot be predicted with certainty and because it assigns a simplified success metric based on grades. The A-grade HSR (HSRa) is simply the AAG multiplied by the constant 0.85. The constant assumes that a student receiving an A grade has a high probability of success in the subsequent course or program, specifically an 85 percent chance. Likewise, a decreasing constant is applied to the ABG and ACG, assuming that with each decrease in grade award, a student has a decreased probability of success at the following level: the ABG is multiplied by the constant 0.75 for the B-grade HSR (HSRb), and the ACG is multiplied by the constant 0.60 for the C-grade HSR (HSRc). Individually, these values give the faculty a good picture of how many students may be successful in future courses based on the faculty member’s current instructional and grading practices.

By adding the discrete grade-award hypothetical success rates (HSRa, HSRb, and HSRc), the faculty members can calculate a composite progression success rate (PSR) based on the success probability assumptions used above. The PSR provides a quick and simplified data point on likely student success in the next sequential course. Faculty members, based on their current instructional and grading practices, calculate the hypothetical success of students as they progress along the sequence.

However, this forward-looking future success calculation (PSR) does not communicate the whole student success picture. By multiplying the PSR by the APR and dividing by 100, the faculty members calculate a ratio of students who are likely to be both successful in the current class and successful in passing the subsequent class. The composite HSR provides a simple and complete picture of likely success for students at both of the most important future points: success at the end of the semester (current class) and success at the end of the next semester (subsequent class).
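To make the arithmetic concrete, the sketch below works through the full HSR calculation in Python. It is illustrative only: the function name is invented here, the grade rates are assumed to be expressed as shares of passing students (consistent with the grade-distribution figures reported earlier), and the sample numbers reuse the program-level 2022 figures (a 60 percent pass rate with 18, 37, and 45 percent As, Bs, and Cs among passers). Appendix A remains the authoritative statement of the formula.

    # Minimal sketch of the HSR calculation described above (not the official
    # Appendix A implementation). Grade rates are assumed to be percentages of
    # passing students; the weights are the hypothetical next-level success
    # probabilities named in the text.
    WEIGHT_A, WEIGHT_B, WEIGHT_C = 0.85, 0.75, 0.60

    def composite_hsr(apr, aag, abg, acg):
        """Return (HSRa, HSRb, HSRc, PSR, composite HSR).

        apr           -- average pass rate: percent of enrolled students earning A, B, or C
        aag, abg, acg -- average A, B, and C grade rates, as percentages of passing students
        """
        hsr_a = aag * WEIGHT_A
        hsr_b = abg * WEIGHT_B
        hsr_c = acg * WEIGHT_C
        psr = hsr_a + hsr_b + hsr_c      # predicted success among students who pass the current course
        hsr = psr * apr / 100            # predicted success among all enrolled students
        return hsr_a, hsr_b, hsr_c, psr, hsr

    # Illustration with the program-level figures cited earlier:
    # 60 percent overall pass rate; 18 / 37 / 45 percent As, Bs, and Cs among passers.
    print(composite_hsr(apr=60, aag=18, abg=37, acg=45))
    # -> approximately (15.3, 27.75, 27.0, 70.05, 42.03)

In this illustration, roughly 70 percent of passing students would be predicted to pass the next course, but only about 42 percent of all entering students would be predicted to pass both the current and the subsequent course.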

While the HSR is unique to each faculty member, the consistent application of the calculation, combined with the mathematical constants, provides a valuable baseline for predicting student success and for evaluating the relationship between faculty grading practices and ongoing student success. Table 1 illustrates the HSR calculations for 11 developmental education faculty members (from Developmental Math, English, and Reading) over the course of seven years. As seen in Table 1, using only the passing grades to calculate the initial hypothetical success rates inflates the PSR and gives the impression that predicted success rates are higher than they actually are. However, the final calculation for the composite HSR recognizes that, in a course sequence, the probability of continued success decreases at each level. The final composite HSR produces a useful baseline for comparing and discussing predicted student success at the next course levels.

Developmental Reading Program Results

With the HSR established, the developmental reading program continues the self-evaluation phase of grade compression review. The three faculty authors became particularly involved in the self-evaluation phase and agreed to share their results and learning from the process.

  • Susan Godwin is a community college veteran. She has professional experience in tutoring, adult education, and developmental education. She has been teaching developmental reading in community college for 28 years and has taught across the reading sequence.
  • Rita Ferriter taught junior high and high school students, eventually moving to teaching reading and study skills at the community college. With over 20 years of teaching experience, she focuses on the highest developmental reading levels, but also has substantial experience with the lowest-level readers.
  • Jennifer Lee-Good began her teaching career with first-grade students before teaching college-level students as a part-time faculty member. She now teaches the middle and highest developmental reading levels full-time and serves as the Reading Program Coordinator.

HSR Results

Table 2 provides the pass rates and HSR results for the developmental reading faculty for the past five years. When confronted with the data in Table 2, the faculty members recognized many of the initial concerns. As Godwin noted, “This was a red flag for us all and an opportunity to determine the changes that would have the biggest impact.” Specifically, the faculty members observed that while average pass rates were within 10 percentage points of each other, there was great variability among average grade awards. This difference spoke to disparity, inequity, and inconsistency within the program. These discrepancies helped prompt the review of shared tools, such as the final exam, to build greater unity. According to Godwin, “Students’ grades did not always reflect their ability, and some structural problems leading to this outcome were baked in.”

Godwin’s observation can be seen in the HSR results for Lee-Good. In Table 2, Lee-Good’s HSR shows a noticeably lower predicted success rate (43.29 percent) on the hypothetical success calculation compared with the other faculty. Also, none of her students earned A grades during this time period. As would be expected from the calculation, the lack of A-grade students lowers the hypothetical success prediction by including only students with a lower success probability in the next course within the reading sequence. Lee-Good shared, “Seeing the absolute zero for given As compared to my colleagues was impactful.” She intuitively felt, and the data confirmed, that her students learned and did well at the next levels, despite the concentration of grades. The review of the hypothetical success rate helped illustrate the compression of grades. Additionally, Lee-Good’s alarm proved important because it prompted a desire for additional information and a look at actual student success rates.

The Rubber Hits the Road: Actual Future Success Rates

After the developmental reading faculty members examined their collective and hypothetical grade patterns, they asked to see more individualized success patterns. To achieve this third, deeper level of individualized perspective, the three faculty members worked with the dean to develop the actual future success rate (AFSR).

Like the HSR, the AFSR uses a composite calculation from the average pass rate, average grade assignment for each of the passing grade types, and a probability constant. However, unlike the HSR, which uses a generalized and hypothetical constant to assume success rates in the next level (0.85 for A, 0.75 for B, and 0.60 for C), the AFSR uses a constant, unique to each faculty member, based on the students’ actual successful completion rate in the next sequential course. To accomplish this, the program randomly selected and tracked 100 former students of each faculty member; the students had to have passed one level of reading and enrolled the following semester at the next level. With these individualized student data sets, all faculty members could see how their passing students performed at the next level of reading. The resulting actual next class pass rates became the mathematical constant to calculate the new AFSR (see Appendix B).
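A parallel sketch, under the same assumptions and naming conventions as the HSR sketch above, shows how the AFSR substitutes each faculty member's observed next-course pass rates for the fixed 0.85, 0.75, and 0.60 constants. The observed rates used below are invented for illustration; Appendix B holds the actual formula, and each faculty member's constants come from their own tracked cohort.

    # Minimal sketch of the AFSR described above (not the official Appendix B
    # implementation). The per-grade constants are the faculty member's observed
    # next-course pass rates from the tracked cohort, expressed as 0-1 proportions.
    def composite_afsr(apr, aag, abg, acg, actual_a, actual_b, actual_c):
        """Return (AFSRa, AFSRb, AFSRc, composite AFSR).

        apr                          -- average pass rate (percent of enrolled students)
        aag, abg, acg                -- A, B, and C grade rates among passing students (percent)
        actual_a, actual_b, actual_c -- observed next-course pass rates for this faculty
                                        member's A, B, and C students (0-1 proportions)
        """
        afsr_a = aag * actual_a
        afsr_b = abg * actual_b
        afsr_c = acg * actual_c
        composite = (afsr_a + afsr_b + afsr_c) * apr / 100
        return afsr_a, afsr_b, afsr_c, composite

    # Hypothetical example: a faculty member whose C students pass the next course
    # at a higher rate than the 0.60 constant used in the HSR would assume.
    print(composite_afsr(apr=60, aag=18, abg=37, acg=45,
                         actual_a=0.90, actual_b=0.80, actual_c=0.70))
    # -> approximately (16.2, 29.6, 31.5, 46.4)

Because the constants are grounded in actual progression data, the composite AFSR can land above or below the HSR for the same faculty member, which is exactly the comparison explored in Tables 3 and 4.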

With the actual successful student completion rates as the constant, the AFSR creates a new predictive tool that more accurately represents the likely future success of students progressing from discrete faculty members. When used in conjunction with the HSR, the AFSR also helps demonstrate the impacts of grade compression and the influences of grades on the perceived success of students as they progress.

Table 3 presents the first calculations for the AFSR for Godwin, Ferriter, and Lee-Good. Included in the results are the AFSRa, AFSRb, and AFSRc. While the results vary widely among faculty members, Ferriter commented, “I am satisfied with the numbers and feel they reflect the hard work of the students.” However, she also observed that the AFSRc showed much lower success rates than she would have expected from C-grade students. Likewise, the new metric and data prompted program dialogue about the definition and meaning of A grades, department policies, and equity—all excellent conversations around improvement and student success. As Lee-Good reflected, “I began to consider student efficacy and the effect of an A or B on students prior to moving on to their last course in the sequence.”

Finally, Table 4 provides the composite AFSR results alongside the comparative HSR from Table 2. This table provides a valuable comparison between the two predictive success-rate calculations. As previously mentioned, Lee-Good’s HSR showed particularly low predictive success in the hypothetical calculations. However, Table 4 shows a turnaround, with a noticeably higher AFSR of around 55 percent. The lack of A-grade students lowered the hypothetical success rates, but by using the actual student success rates, the predictive actual future success rate shows greater progress. This comparison helped clearly demonstrate the compression of grades, which continues to present opportunities for program discussion, as instructors realized their grading patterns may have been too stringent. As Lee-Good later shared, “I learned solid approaches and beliefs for grading, but they were very rigid and were all I knew. I can tell you that the process is ever evolving and required self-evaluation and critical thinking of all constructs and factors involved.”

Lasting Changes

The leadership and courage of the developmental reading faculty members to confront the program grades “set the groundwork,” said Ferriter, “for discussion and reflection about our practices. The metrics and data helped our discussion and influenced the changes that became the driving force for new success.” The HSR and AFSR metrics provided a consistent baseline for observing grade compression patterns and their predicted impact on success as students progressed through the reading curriculum. While strictly predictive in nature, the tools set the stage for larger program conversations and for ways to objectively reflect on individual practices. It is commendable that Ferriter, Godwin, and Lee-Good humbly questioned their craft and practices to address grade compression concerns. Further, when presented with the challenging data, the faculty members developed these measures, tested them, and returned to the classroom to implement changes. By focusing on the efficacy and predictive capabilities of developmental grades, the faculty members will be able to align their efforts with increasingly stringent student retention and promotion goals at the college.

The lasting changes to the program not only increased student efficacy and rewarded students more equitably for their learning, but also increased overall developmental reading student success rates. Based on this work, the entire Developmental Education department at MVCC will engage in a grade analysis. As the reading faculty members noted at the conclusion of the process, “Teaching is an art, and each student brings new dynamics that need to be molded, adjusted, worked with, and supported.” Artists need the correct tools, a critical eye, and a willingness to start over to truly make a lasting impression. The HSR and AFSR are two additional tools for reflective and analytic faculty members to employ in critically reviewing their courses and building better student success experiences.

References

Dolainski, S., & Griffin, S. E. (2011). Words to learn by: Expanding academic vocabulary. McGraw-Hill.

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Mather, P., & McCarthy, R. (2016). Reading and all that jazz (6th ed.). McGraw-Hill.

Storer, N. W. (1967). The hard sciences and the soft: Some sociological observations. Bulletin of the Medical Library Association, 55(1), 75–84.

Vacca, R. T. (1997). The benign neglect of adolescent literacy. Reading Today: A Bimonthly Newspaper of the International Reading Association, 14(4), 3.

Vacca, R. T., & Vacca, J. A. L. (2008). Content area reading: Literacy and learning across the curriculum (9th ed.). Pearson Education.

Grant J. Matthews is Associate Vice President, CTE and Workforce, at Lane Community College in Eugene, Oregon. Rita Ferriter, Ed.D., is Reading Faculty, Developmental Education; Susan Godwin is Reading Faculty, Developmental Education; Jennifer Lee-Good, Ed.D., is Coordinator, Reading Program; and Michael Morsches is Dean, Learning Enrichment and College Readiness, at Moraine Valley Community College in Palos Hills, Illinois.

Opinions expressed in Learning Abstracts are those of the author(s) and do not necessarily reflect those of the League for Innovation in the Community College.