Institutional Effectiveness: From Intuition to Evidence

Barbara Gellman-Danley and Eric V. Martin

Many will recall the term “community college movement,” which captures the force behind the birth of two-year colleges and the passion that continues to fuel the sector, including its laser focus on student success. The mantra “community is our middle name” further speaks to the centrality of community colleges, emphasizing the importance of partnerships and regional engagement in sustaining their unique mission. With this focus on students and community, a sector once seen as the only alternative for those who could not get into a four-year institution has become one of choice. The evolving purpose and growing importance of community colleges thus necessitate, now more than ever, measurable outcomes of success for learners and for the institutions themselves.

In light of this central importance, the public and government officials alike are increasingly asking institutions to demonstrate their effectiveness and to root that institutional effectiveness (IE) in evidence rather than mere intuition. Community colleges are working together to meet this demand, as they always have, despite the rising challenges facing American higher education. From the learner-centered college to Achieving the Dream (ATD), Guided Pathways, the Completion Agenda, and the Voluntary Framework of Accountability (VFA), the nation’s community colleges are taking the journey toward IE seriously and transforming themselves in the process. The complexities of this journey bear further analysis.

This article undertakes such analysis with several ambitious aims. Specifically, it will describe the rise of IE for regional accrediting agencies and community colleges alike, convey IE success stories and real-world challenges identified in a survey of the Higher Learning Commission’s community colleges, and offer recommendations to inform next steps on the road to institutional effectiveness.

From Quality Improvement to Institutional Effectiveness

While it may not have been named as such at the time, the goal of continuous improvement inspired early community college leaders and their successors. In recent years, a shift from quality improvement to institutional effectiveness on community college campuses has become evident as astute institutional leaders and their trustees—often leaders from industry and healthcare—are versed in theories of continuous quality improvement (CQI). These leaders seek to move beyond CQI (an often amorphous activity) to specific measures of effectiveness so that quality is demonstrably improved and open to external validation. This same movement from the broader concept of quality improvement to the specific activities of IE has been underway among the regional accreditors, evident both in the historical underpinnings of regional accreditation and in the accreditors’ current criteria and standards.

Historical Progression and Current Standards

The assurance of quality has been the focus of regional accreditation for more than a century. According to Ewell (2008), during the originating years of regional accreditation, the focus among elite institutions was to “recognize and distinguish colleges of quality” through a process of peer affirmation, as opposed to modern-day peer review (p. 21). This early approach defined regional accreditation for many years until it began to be replaced in 1913 by strict quantitative measures of quality in the form of the institution’s enrollment, number of departments, faculty qualifications, and number of volumes in the library (Ewell, 2008).

In 1934, following a series of what they called experiments, the North Central Association (NCA, now known as the Higher Learning Commission) adopted an entirely new approach to accreditation in response to the rise and increasing diversification of institutions of higher learning (Ewell, 2008). According to Davis (as cited in Ewell, 2008), through its new approach toward accreditation, the NCA recognized that “an institution should be judged in terms of the purposes it seeks to serve” (p. 31).

This emergence, as Ewell (2008) explains, created the first mission-based approach toward accreditation where the holistic judgment of institutional functions across varying institutional contexts replaced strict adherence to the previous quantitative measures of quality applied uniformly to all institutions. He summarizes, “From 1934 onward, in what would eventually become the dominant posture for the regionals, the process of accreditation was intended primarily to help colleges and universities improve” (Ewell, 2008, p. 31).

Today, the clearest indication of regional accreditors’ commitment to institutional quality and visible progression toward institutional effectiveness resides in their respective criteria and standards. A complete list of these indicators is beyond the scope of this article, but the following illustrate the point:

  • Higher Learning Commission (HLC), Subcomponent 5.D.2.: “The institution learns from its operational experience and applies that learning to improve its institutional effectiveness, capabilities, and sustainability, overall and in its component parts” (2014, para. 32).
  • Middle States Commission on Higher Education, Standard VI: “An accredited institution possesses and demonstrates the following attributes or activities…periodic assessment of the effectiveness of planning, resource allocation, institutional renewal processes, and availability of resources” (2015, p. 12).
  • New England Association of Schools and Colleges, Standard 2.2: “The institution systematically collects and uses data necessary to support its planning efforts and to enhance institutional effectiveness” (2016, para. 18).
  • Northwest Commission on Colleges and Universities, Standard 4.A.1.: “The institution engages in ongoing systematic collection and analysis of meaningful, assessable, and verifiable data . . . as the basis for evaluating the accomplishment of its core theme objectives” (2017, p. 35).
  • Southern Association of Colleges and Schools Commission on Colleges, section 7.1: “The institution engages in ongoing, comprehensive, and integrated research-based planning and evaluation processes that (a) focus on institutional quality and effectiveness and (b) incorporate a systematic review of institutional goals and outcomes consistent with its mission” (2017, p. 19).
  • Western Association of Schools and Colleges, Standard 4: “The institution considers the changing environment of higher education in envisioning its future. These activities inform both institutional planning and systematic evaluations of educational effectiveness” (2013, para. 1).

The requirements of the Accrediting Commission for Community and Junior Colleges (ACCJC) are particularly instructive as the only accreditor working exclusively with community and junior colleges. Its requirements related to quality improvement and institutional effectiveness are conveyed in Standard I.B.9.: “The institution engages in continuous, broad based, systematic evaluation and planning. [It] integrates program review, planning, and resource allocation into a comprehensive process that leads to accomplishment of its mission and improvement of institutional effectiveness and academic quality” (Accrediting Commission for Community and Junior Colleges, 2014, p. 3).

Later, in standard IV.B.1., ACCJC underscores the role of the CEO regarding institutional effectiveness when it states, “The CEO provides effective leadership in planning, organizing, budgeting, selecting and developing personnel, and assessing institutional effectiveness” (2014, p. 15).

HLC Survey of Community Colleges on Institutional Effectiveness

The similarity of the regional accreditors’ standards related to institutional effectiveness signals the very unity they are often said to lack. That unity is mirrored in the community college sector, where institutions share a commitment to improving and demonstrating their effectiveness; as with the accreditors, however, there is not necessarily one way to measure effectiveness. The question that remains is this: how, if at all, can institutional effectiveness be measured in ways that support the mission of community colleges and the increasing demands of the communities they serve? To explore this question, the Higher Learning Commission formulated a member survey in the summer of 2018. HLC emailed 858 representatives at two-year institutions within its 19-state region and received 115 completed surveys (a response rate of 13.4 percent) from presidents, provosts, academic deans, and those responsible for institutional effectiveness.


Survey respondents were first asked, “Has institutional effectiveness been appropriately defined in the higher education community?” That only 20 percent agreed there was such a shared understanding does not come as a surprise. A resounding 41.7 percent said “no,” and the institutional representatives provided several reasons, referenced below. Another 28.7 percent responded “maybe,” and the remaining 9.6 percent stated they simply did not know. As with accreditors, community college personnel know quality assurance when they see it, and they are quick to point out when it is failing. The same applies to institutional effectiveness. One explanation for the lack of a clear, shared definition is given by Seymour and Bourgeois (2018) in the Institutional Effectiveness Fieldbook:

The challenge of defining institutional effectiveness is largely due to its not being a smaller, more circumscribed idea. Instead, IE is a boundary-buster that infringes on organizational turf and defies the pigeonholing that makes organization life easier for many individuals. But that more robust, cross-functional aspect is also what makes it a powerful force for creating coherence and driving positive change. (p. 11)

External perceptions and political challenges add further complication. For example, the nation’s community colleges are frequently challenged on completion rates as a means of demonstrating institutional effectiveness. Yet it is difficult not to be defensive when sitting before a congressional committee whose members quote a completion rate as low as 10 percent for community college students, despite the fact that context and limited data might tell a very different story.

Bailey, Jaggars, and Jenkins (2015) depict the challenges well, stating that “while these colleges have helped educate millions, it is also true that many, probably a majority, of students who enter higher education through a community college do not achieve their long-term educational objectives” (p. vi). They point out that despite this, they found

no deficiencies in the enthusiasm, dedication, or skill of the faculty and staff of community colleges, but rather observe problems in the structure of the colleges and of the overall system of higher education – a structure that may have served this country well in the 1960s and 1970s when community colleges were a core part of the nation’s effort to dramatically expand access to higher education, but which is not well suited to the needs and challenges they now confront. (p. vii)

Accepting these challenges, HLC’s survey asked respondents for their own definition of IE, ways to measure it, and best practices. Examples of these definitions and responses follow:

  • “The extent to which an institution has vision, processes, procedures, and outcomes to ensure student success and continuous improvement” (Arapahoe Community College, CO).
  • “Using data and non-data methods to explore an institution’s strengths and weaknesses and then develop strategies to improve. Consistent evaluation and use of results for improvement define institutional effectiveness” (Clovis Community College, NM).
  • “Institutional effectiveness supports a culture of inquiry, evidence, accountability, and continuous improvement related to all college functions and activities” (Gateway Technical College, WI).
  • “Institutional effectiveness is the degree to which the organization is achieving its mission-based objectives” (Jackson College, MI).
  • “An organization’s effectiveness is a product of its collective self-awareness, its propensity for self-evaluation and self-reflection, and its obsession with achieving excellent results. For an institution to be truly effective, every single employee needs to be committed to actively improving performance” (Kankakee Community College, IL).
  • “A systemic approach and organizational structure which provides meaningful information to the right stakeholders at the right time, guiding strategic direction and resource allocation to achieve high-quality institutional outcomes” (Rio Salado College, Maricopa County Community College District, AZ).
  • “IE escapes definition until the institution type is clearly defined. Generally, IE is defined as meeting the tenets for why the entity was created in the first place. IE measures the success of aligning the institution’s mission and goals in respect to the founding tenets” (Northeast Community College, NE).
  • “Institutional effectiveness is the result of the college reaching or exceeding its measurable goals and objectives as defined and informed by the college’s mission and vision and external legal and accreditation expectations” (Northwest Arkansas Community College, AR).
  • “With decreased state funding of community colleges, as is seen in Illinois, an effective institution would be one that just stays open” (Sauk Valley Community College, IL).
  • “We believe that institutional effectiveness is a journey, a very exciting, and very rewarding journey” (Southeast Technical Institute, SD).

One respondent to HLC’s survey perhaps best conveyed the need for a clear definition and commonly understood framework for IE: “I run an IR/IE department, and I can’t explain it in a way that the intelligent layman would understand.” Yet, even against this stark backdrop, widespread belief in the central importance of IE and commitment to its success, particularly among institutional leaders, will likely ensure its long-term viability. Other research reveals the necessary groundwork taking shape.

The commitment of institutional leaders to the success of IE has been documented by Bartolini, Knight, Carrigan, Fujidea, LaFleur, and Lyddon (2016) in a study in which they interviewed twelve leaders, including current presidents, presidents emeriti, a chancellor, and a provost, all of whom represented a variety of institutional types (p. 4). Although the interviewees varied in terms of the adoption of IE on their respective campuses, all agreed on its importance. Among their results, Bartolini et al. (2016) found,

Participants identified advantages of the IE model as improved effectiveness and efficiency of decision making; improved institutional accountability and ability to establish priorities; the ability to carry out benchmarking and identify best practices; greater timeliness, accuracy and richness of evidence; durability of decision support processes; better connection of people and systems; heightened ability to focus on student success; and the potential to influence policy. The only disadvantage cited was the difficulty in identifying candidates for the chief institutional effectiveness officer position who possess the necessary skill set. (p. 4)

As seen in the earlier list, respondents to HLC’s survey voiced similar support for IE. At the same time, they provided additional insight related to the confusion about IE, its location at a college, and required skill sets:

  • “I meet a lot of people with IE in their titles, and we all do a wide variety of different tasks, which leads me to believe that IE is more of a catch-all function than a clearly-defined specialty” (Gateway Technical College, WI).
  • “I think this term means many things to many different groups. There is no cohesive meaning across all higher education sectors/institutions” (Hutchinson Community College, KS).
  • “It’s a great term, but it seems to have different meanings for different people. Some measure effectiveness by processes, and the extent to which those processes are mapped. Some measure effectiveness by student achievement in a particular course or program” (Iowa Lakes Community College, IA).
  • “It seems that each institution has come up with its own definition” (Pikes Peak Community College, CO).
  • “There are many measures to evaluate the effectiveness of a college or university. While scholars have pondered the question, a unified definition seems to continue to escape us” (University of Cincinnati, Clermont College, OH).

Even with such core considerations unresolved, the community colleges that responded to the survey described a variety of current practices and ideas firmly rooted in IE. The responses fell into five categories: (1) the critical role of leadership in advancing IE and the organizational culture; (2) the integration of IE into strategic planning priorities and mission; (3) the impact of accreditation and the resultant alignment of quality assurance with IE; (4) organizational structure, including reengineering the community college to create a centralized office of IE and to ensure participation across the college; and (5) externally initiated programs that support and influence institutional effectiveness programs and measures.

Leadership and Organizational Culture

Several survey responses focused on the essential role of leadership in moving toward and sustaining IE and improvement activities on a campus:

  • “We have started the concept of a working strategic plan that included specific targets and key performance indicators that are measurable. The plan is discussed annually with the leadership team, and data are presented for each item. This allows for a structured, data-driven approach to institutional planning. It helps to focus the leaders of the institution on the strategic goals and allows for progress toward these goals to be examined annually” (Connors State College, OK).
  • “The new president brought CQI principles and created KPIs and dashboards. Things expanded from there with metrics in strategic planning, development of a Strategic Enrollment Management (SEM) plan, and program review processes. Many of these were being done for reporting purposes, but the institution did not have the focus” (McHenry County College, IL).
  • “One element of our institutional effectiveness involves student completion. To that end, we have established 90/80/70 goals: 90 percent student fall-winter persistence, 80 percent student fall-fall persistence, and 70 percent student achievement of a credential of market value, transfer, or employment. The idea was a collective one emanating from the president, board, and leadership council that resulted in our ‘Total Commitment to Student Success’ (TCS2), which governs all of our work” (Jackson College, MI).
  • “As a newly appointed president, I was concerned about the college’s graduation and retention rates. So, we began a college-wide goal to improve both categories. An institutional effectiveness team was formed that is comprised of faculty, administrators, professional support, classified staff, and the director of institutional research. The board of trustees is updated monthly on the progress. We benchmarked the data and began to identify issues and barriers that currently existed. Our graduation rate in 2012-13 was 22 percent and in 2015-2016 increased to 51 percent” (Spoon River College, IL).

Strategic Planning and Mission

Many survey responses linked planning and mission to IE:

  • “When reviewing our last strategic planning cycle in the spring of 2014, we determined that based on our assessment cycle we needed to establish and measure and ultimately improve our institutional KPIs that flowed down to the department, program, class, and individual levels” (Carl Sandburg College, IL).
  • “In 2011, as part of the strategic planning process, the institution recognized an opportunity to improve academic advising services. The college turned the opportunity into a strategic priority” (Pikes Peak Community College, CO).
  • “The effectiveness of an institution is based upon its ability to execute its mission. The implementation and monitoring has [sic] taken place in the measurement of the key outcomes defined by the mission” (Dunwoody College of Technology, MN).
  • “Faculty training, focus groups, and team meetings were part of the implementation process to ensure understanding, feedback, and buy-in. The power of the Strategic Improvement Process (SIP) indicators was the linkage to key measures on the strategic plan, making the connection between performance at the classroom level to the college level very clear and important to the entire college community and ultimately to individual performance. The SIP plans are key elements of Fox Valley’s Strategic Alignment Model” (Fox Valley Technical College, WI).

The Impact of Accreditation and Quality Assurance

Institutional responses highlighted linkages between their IE efforts and accreditation activities, including HLC’s elective programs (the Assessment Academy and the Persistence and Completion Academy, now known as the Student Success Academy) as well as the Academic Quality Improvement Program (AQIP)—an accreditation pathway that will soon sunset after nearly two decades of fostering institutional success. In all cases, a “culture of evidence” validates effectiveness:

  • “Based on HLC visiting team recommendations, we are changing from an Institutional Research model to an Institutional Effectiveness model. We modified our Operational Plans to include ways of measuring our success from simply reporting the results to rating each element's effectiveness. These ratings culminate in scores which can be compared to find areas of weakness upon which to improve” (Labette Community College, KS).
  • “HLC’s Assessment Academy brought us together with other schools facing the same challenges – this is the real support – making connections with other schools and developing relationships where we support one another” (Lac Courte Oreilles Ojibwa Community College, WI).
  • “The work evolved out of accreditation requirements, the IR function, and the need for integrated planning and alignment between the executive leadership at the college and the functional areas of the college. Our institutional definition encompasses IR, assessment, compliance, accreditation, grants, analytics, and development. We worked with all deans and leadership faculty. The president has supported the direction and funded these roles consistently” (Rio Salado College, Maricopa County Community College District, AZ).
  • “To address issues of retention and completion, our school established a Center for Student Success in 2014. All outreach and services to students experiencing difficulties are coordinated. Each student is assigned a Student Success Coach. Data are used to ascertain the effectiveness of individual strategies. Implementation was driven by our involvement in the HLC Persistence & Completion Academy. Our retention increased by 7 percentage points in one year to an all-time high of 82 percent” (Mitchell Technical Institute, SD).
  • “In 2009, one of Moraine Valley’s AQIP Action Project teams developed and implemented the college’s Continuous Improvement Objective Results (CIOR) process designed to connect planning, budgeting, measuring effectiveness, and reporting on how results were used to continuously improve processes. In 2016, a second AQIP Action Project team developed and implemented a new planning process entitled Plan, Evaluate, and Improve (PIE) to replace the CIOR process. This is the team that developed the college’s definition of institutional effectiveness. In January 2017, the college also formed a team charged with defining Strategic Priority Indicators (SPIs). These institutional-level SPIs serve as a means of tracking the college’s goal of becoming a more data-informed decision-making entity. The PIE process has now been institutionalized and is again being used for FY19 planning, and departments were encouraged to use SPIs data, as appropriate, when developing their FY19 PIEs” (Moraine Valley Community College, IL).

Organizational Reengineering

In this category, respondents commented on the organizational and structural changes stemming from the infusion of IE processes and activities. Many noted the importance of reengineering to assure success:

  • “Gateway created an integrated Program Effectiveness (PE) function within our Institutional Effectiveness division in 2013-2014. In response to faculty confusion over the requirements to measure student learning, maintain up-to-date curriculum, and participate in program review, we organized a team of staff who worked on those functions and designed a single process with integrated deadlines and common forms to clarify faculty's responsibilities. Each program also has a faculty PE chair with assigned hours to gather data and communicate with our office. Since we've started the process, we've maintained 100 percent participation in assessment of student learning” (Gateway Technical College, WI).
  • “An Institutional Effectiveness Office has been designated to include institutional research, accreditation, and assessment for student learning. More recently assessment in non-academic programs is being implemented. This office is now part of the VP for Student Affairs administrative unit and reports directly to the president. This approach has worked effectively for creating an awareness of institutional effectiveness and a focus on improvement” (North Dakota State College of Science, ND).
  • “About four years ago, our institution decided that we needed to create an official office for Institutional Effectiveness so that increased attention could be directed to the ever-increasing data needs including federal and state reporting. The Associate Vice President for Institutional Effectiveness is now also the AVP for Academic Affairs and has a full-time staff member to assist with data analysis. Locating the office under Academic Affairs allows for more visibility and collaboration” (Rose State College, OK).
  • “The integrated and functional approach of IE professionals is a powerful agency for change at any college. Starting with an organizational structure to advance strategic outcomes has been a great learning experience for our college” (Rio Salado College, Maricopa County Community College District, AZ).

Externally Initiated Programs

A range of responses touched upon other externally initiated programs such as Achieving the Dream (ATD), Guided Pathways, the College Completion Agenda, Complete College America (CCA), or simply placing IE functions on a campus under the umbrella of performance-based funding in states. Achieving the Dream will be discussed first, followed by survey responses.

In 2004, the Lumina Foundation launched Achieving the Dream: Community Colleges Count. This initiative was explicitly designed to improve outcomes, including helping academically underprepared students succeed in college-level work, increasing semester-to-semester persistence, and improving rates of degree completion (Bailey, Jaggars, and Jenkins, 2015, p. 7). Bailey et al. (2015) explain,

The experience of the ATD initiative during its first five to seven years provides a good picture of the state of community college reform in the first decade of the twenty-first century, just as the completion agenda took hold. The initiative was explicitly designed to increase the academic success of community college students by building a “culture of evidence” in which administrators, faculty, and others would use data to identify barriers to student success and develop reform strategies to overcome those identified barriers. The initiative assigned experienced administrators – usually retired community college presidents – as coaches for the participating colleges. (p. 8)

Accreditors require a culture of evidence rather than the old “I know quality when I see it” intuitive approach. Guided Pathways is another means of improving student success rates. The Guided Pathways approach to redesign starts with students’ end goals in mind, and then rethinks and redesigns programs and support services to enable students to achieve those goals (Bailey, Jaggars, and Jenkins, 2015, p. 16). HLC survey respondents noted these approaches and other external support ideas available to them:

  • “We carefully evaluate all student success programs. This began out of Achieving the Dream work and has continued in our Pathways and other student success programs. Success programs have an evaluation plan developed by the program leaders in collaboration with the Office of Institutional Effectiveness (OIE), and OIE provides templates to develop the plan and metrics to support the evaluation. We had a great deal of initial support in developing this from our ATD data coach” (Columbus State Community College, OH).
  • “We took a great deal of time to collaboratively develop a Strategic Enrollment Management (SEM) plan in our Continuous Quality Improvement process. We have followed Complete College America strategies and have developed a SEM plan with assistance from excellent consultants” (Central Wyoming College, WY).
  • “The Completion agenda has changed the measure for IE. Ohio’s 100 percent performance-based funding provides IE metrics” (Clark State Community College, OH).
  • “Our student success data (retention, graduation, transfer) was not getting better so the president formed a student success task force. That ultimately led to participation in the AACC Pathways program and implementation of several Pathways initiatives” (Front Range Community College, CO).
  • “We used our student outcomes assessment system as a guide, making our purpose statements into outcomes to be assessed. We then unified all the data gathering methods (internal, external, direct measures, indirect measures) into purpose statements – putting it all together in a dashboard. Folks are using the same data as we are – IPEDS, Noel-Levitz, community college benchmarking project, the Kansas study, etc.” (Neosho County Community College, KS).
  • “We adopted the Degree Qualifications Profile (DQP) in 2011 and have been working to incorporate its outcomes into all of our courses and programs. We have been collecting assessment data on these efforts and can point to attainment of the DQP’s outcomes as a measure of effectiveness” (North Central Michigan College, MI).
  • “Our effort to improve academic advising was successful as the 2015 CCSSE survey results indicated a statistically significant increase in usage of and satisfaction with academic advising” (Pikes Peak Community College, CO).

The above examples reflect multi-state collaboration among community colleges, often provided with external financial support. Many of the programs encourage a cohort approach, establishing important networking and shared goals, a great asset to the participating community colleges and those watching the results.

The final question of the survey asked participants whether they were members of yet another nationwide initiative, the Voluntary Framework of Accountability (VFA), designed by the American Association of Community Colleges (AACC). Nearly one-third of respondents were members. Benefits of participation include the development of pathways, the revision of advising systems, and benchmarking against other institutions. The VFA’s completion and transfer data yield more comprehensive and reliable figures than some federal databases.

Institutions participating in the VFA quickly learn that it tells the full story of their organization, offering substantive insight into both its successes and its challenges. Regarding the latter, institutions are then able to identify the core areas where quality improvement initiatives must focus.

In spite of its success, the number of institutions participating in the VFA remains low. Ideally, IE efforts across community college campuses will boost participation in the VFA, which in turn will maximize available data and deepen the growing portfolio of information to benefit other institutions. As is discussed in the next section, regional accrediting agencies are trying to assist in the overarching effort to amass useful data through collaborative research of their own that is focused on graduation rates and student success.

A Survey of Institutions by Regional Accrediting Agencies

While HLC’s survey of member institutions provided examples of the changing winds of higher education and the obstacles that stand in the way of measuring institutional effectiveness, it did not address specific challenges related to student success and graduation rates. Suffice it to say, both are of central importance to regional accreditors and the institutions they recognize.

For this reason, the Council of Regional Accrediting Commissions (C-RAC) mobilized in 2018 to study the graduation rates of a cross-section of institutions and, especially important, the efforts of the institutions to increase their graduation rates even if, in some cases, those rates were already high. C-RAC’s report, A One-Year Review of the C-RAC’s Graduation Rate Information Project, bears mention in this article because it can both align with and further inform the work being completed in elective programs such as those referenced above. Moreover, C-RAC’s study will be of interest to IE leaders on individual campuses who are taking on the challenge of low graduation rates.

Although the detailed findings of the C-RAC study are beyond the scope of this article, its key findings merit summary here. The C-RAC study (2018) found that all institutions are seeking to improve graduation rates by creating policies and procedures to address key issues such as time to degree and stackable credentials; building programs, structures, and personnel to support student success through mandatory advising, summer bridge programs, and one-stop student support offices; and using software and technology to establish early warning systems for at-risk students (p. 26). Most will agree that the success of these efforts at any institution will correlate strongly with the success of the institution’s overall IE activity.

Observations and Recommendations

As seen throughout this article, many strong initiatives are underway that are intended to help institutions improve their own effectiveness with the common goal of improving student success. Both the HLC Survey and the C-RAC graduation rate study are modest means toward this same end. A particularly telling response to the HLC survey noted that,

Each year we spend more time and money on measures and I’m not certain measuring what we currently measure changes much. In the meantime, many students do great in spite of us, but other students leave worse than when they came. (Lamar Community College, CO)

While many respondents note a lack of clarity about what student success means and how institutions must innovate to improve their effectiveness in this area, the commitment to success at community colleges remains unquestionable. Based upon the wide range of inputs amassed for this narrative, here are six recommendations to help community colleges become more effective in meeting their goals:

  1. Definition: Work collaboratively across the institution to agree upon a definition of institutional effectiveness and how it will be measured. Be certain all levels of the college and its stakeholders are aware of this definition and that all departments have an IE plan with clearly defined metrics. Include staff from academic, student support, administrative, and all other units of the college in plan creation. Report annually on the outcomes established, including plans for continuous improvement.
  2. Leadership and Organizational Culture: Be certain that institutional effectiveness is a priority for the CEO and governing board. Make IE part of the presidential review as well as the board’s annual self-assessment. Demonstrate administrative and board support and intentional leadership in advancing the IE agenda.
  3. Strategic Planning and Mission: Integrate IE into all strategic plans, including those of the institution, each department, and for individual employees. Tie the goals to metrics, including the budget. Assure that all goals align to the college’s mission.
  4. The Impact of Accreditation: Educate everyone at the institution about accreditation standards, criteria, and expectations, both regional and specialized, when appropriate. Shift thinking about accreditation from a mandatory requirement to a source of quality assurance and thought leadership. Work with accreditors to align IE initiatives. Take advantage of learning opportunities accreditors offer, including annual conferences, training, and service as peer reviewers. Peer review is a win-win for the college: participants give back to the higher education community while learning a great deal for their own institutions.
  5. Organizational Reengineering: Identify organizational structures to elevate IE as a priority. Hire individuals with training, experience, and skills in IE. Carefully consider the reporting structure to position IE as an institutional priority.
  6. Externally Initiated Programs: Participate in national initiatives and research available to strengthen IE for community colleges. Create a team of IE ambassadors to share knowledge from those programs as they are integrated across the college. Share success stories that might be leveraged into new initiatives across the country.

Community colleges are facing many challenges, including the complex issue of IE and moving from intuition about success to a culture of evidence. While there is great variance across colleges, much can be learned by working collaboratively. Taking ownership of the IE agenda and outcomes is critical; if colleges do not do this well, external stakeholders will impose their own standards. Embrace the excellent examples and success stories available across the sector, recognizing that the community college system in America is a unique and critically important part of higher education. It is, indeed, a journey.


References

Accrediting Commission for Community and Junior Colleges. (2014). Accreditation standards.

Bailey, T. R., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America’s community college: A clear path to student success. Cambridge, MA: Harvard University Press.

Bartolini, B., Knight, W., Carrigan, S., Fujieda, E., LaFleur, M., & Lyddon, J. (2016). Presidential perspectives on advancing the institutional effectiveness model.

Council of Regional Accrediting Commissions. (2018). A one-year review of the C-RAC’s graduation rate information project.

Ewell, P. (2008). U.S. accreditation and the future of quality assurance. Washington, DC: Council for Higher Education Accreditation.

Higher Learning Commission. (2014). Criteria for accreditation.

Middle States Commission on Higher Education. (2015). Standards for accreditation and requirements of affiliation.

New England Association of Schools and Colleges. (2016). Standards for accreditation.

Northwest Commission on Colleges and Universities. (2017). Accreditation handbook.

Seymour, D., & Bourgeois, M. (2018). Institutional effectiveness fieldbook: Creating coherence in colleges and universities. Santa Barbara, CA: Olive Press Publishing.

Southern Association of Colleges and Schools Commission on Colleges. (2017). The principles of accreditation: Foundations for quality enhancement.

Western Association of Schools and Colleges. (2013). 2013 handbook of accreditation revised.

This issue of Insights is a chapter from 13 Ideas that Are Transforming the Community College World, edited by Terry U. O’Banion. This book was published by Rowman & Littlefield and the American Association of Community Colleges in March 2019. All rights reserved. When ordering the book from Rowman & Littlefield, mention the special code “RLEGEN18” and receive a 20% discount. Copies can also be ordered from Amazon, Barnes & Noble, and other booksellers.

Barbara Gellman-Danley is President and Eric V. Martin is Vice President and Chief of Staff, Higher Learning Commission, Chicago, Illinois.

Opinions expressed in Insights are those of the author(s) and do not necessarily reflect those of the League for Innovation in the Community College.