July 2008, Volume 9, Number 7
League Services Announces New Topics
Top 14 Strategies for Evaluating Teaching. A virtual smorgasbord of data sources awaits you in this session. Student ratings are a necessary, but insufficient, source for measuring teaching effectiveness. As a professor or administrator, how many other sources can you name? How many are being used in your department? Well, this is your lucky day. This state-of-the-art session is a fun-filled romp through 14 potential sources of evidence described in the faculty evaluation literature: (1) student ratings, (2) peer ratings, (3) external expert ratings, (4) self-ratings, (5) alumni ratings, (6) employer ratings, (7) administrator ratings, (8) videos, (9) student interviews, (10) mentor’s advice, (11) teaching scholarship, (12) teaching awards, (13) teaching portfolios, and (14) learning outcome measures. These sources are presented in the context of the 360-degree multisource assessment models used in management and industry for more than 40 years (a.k.a. the “whirling dervish” approach to faculty evaluation) and, most recently, in medicine and healthcare. They can also serve as models for accreditation self-study. Multiple sources of evidence provide a more accurate, reliable, fair, and equitable base than any single source for formative (teaching improvement) and summative (annual contract renewal, merit pay, promotion, and tenure) decisions. This triangulation of sources is recommended in view of the complexity of measuring the act of teaching and the fallibility of all tools currently in use. This topic is available as a 1–1.5-hour keynote or 3-hour workshop.
Designing Rating Scales to Evaluate Teaching Effectiveness. What is the quality of the instruments you now use at your college to evaluate teaching? If you’re not sure, you’re not alone. The problem is that flawed, inappropriate, and insensitive items, and incorrectly structured scales for measuring an instructor’s classroom behaviors, are all too common in academia. They can result in poor and biased ratings of faculty and in unfair and inequitable decisions about contract renewal, merit pay, and promotion. Faculty careers are on the line. Whether you need to select, adapt, critique, or write items for a rating scale, you should know the criteria for quality items. This workshop covers (1) the step-by-step procedures for constructing rating scales; (2) the most common mistakes in item writing; (3) applying item-writing rules to the scales you are now using; (4) the different anchor structures and the rules for determining the number and format of anchors; (5) applying anchor options to your scales; (6) the steps for assembling a scale into a form ready for administration; and (7) applying those steps to your scales so they are ready to blast off. The scales brought into the workshop should leave significantly improved, ready to be brought before the entire faculty. Further, workshop participants gain the scale construction skills necessary to spearhead other evaluation projects, such as peer observation, self-ratings, alumni ratings, and student interviews. Time is also devoted to technical issues, including reliability, validity, and scale score interpretation. This is a workshop you can’t afford to miss. This topic is available as a 3-hour workshop.
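For readers curious about the reliability issue mentioned above, one widely used internal-consistency estimate for a multi-item rating scale is Cronbach’s alpha. The sketch below is illustrative only and is not drawn from the workshop materials; the function name and the sample ratings (four respondents rating an instructor on three 5-point items) are hypothetical.

```python
# Illustrative sketch: Cronbach's alpha, a common internal-consistency
# reliability estimate for multi-item rating scales.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)

def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of item ratings."""
    k = len(scores[0])                       # number of items
    def var(xs):                             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 4 respondents x 3 items, each rated 1-5.
ratings = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
]
print(round(cronbach_alpha(ratings), 2))  # prints 0.98
```

Values near 1.0 indicate that the items vary together, which is one piece of evidence (alongside validity checks) that a scale’s items measure a common construct.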
Paper-Based Versus Online Administration of Student Scales. Online administration of student rating forms has been considered and often rejected by institutions of higher education because of faculty’s preconceived notions of decreased response rates, increased rating bias, and lower ratings than paper-based administration. Research and practice over the past five years have addressed these concerns and other deterrents to adoption of online administration. This workshop critically compares the two modalities according to 15 key factors. Special attention is devoted to online issues, such as response rates, administration time, standardization, accessibility, convenience, turnaround time, anonymity and confidentiality, and cost. The comparability of paper-based and online ratings is also examined in terms of the threats of response and nonresponse biases and the structured and unstructured item formats. Participants are able to assess the feasibility of addressing these issues at their institution. A variety of available software packages, such as WebCT, TestPilot, and Snap, is also reviewed. After weighing all of the pluses and minuses, conclusions are discussed regarding your institution’s possible conversion to an online system for administering student ratings. This topic is available as a 1-hour workshop.
To find out more, email Ed Leach or call (480) 705-8200, x233.
Copyright © 1995 - League for Innovation in the Community College. All rights reserved.