Research Staff

Sunday, June 24, 2001

 

Facilitator: Bob Barr

Recorder: Chris Barkley

 

Outcomes

 

·         New ideas for how researchers can contribute to and lead in creating the learning college.

·         Identify ways to collaborate to achieve these outcomes

·         Ideas for changes to our roles/offices so that we can contribute better

 

4 questions we are to address:

 

1)     Of the major issues discussed this morning, which are the most important for us to address in this session?

2)     What strategies can we as researchers share or create to help in building a learning community?

3)     How are we doing as facilitators of learning in our roles as researchers?

4)     How do our roles need to change so that we can better lead and help the college become more learning-centered?

 

Key our discussion to the morning’s critical issues:

·         Learning outcomes (defining learning outcomes)

·         Technology

·         Recruiting faculty and staff

·         Engaging students in learning-centered education (student goals, orientations, systems/practices)

·         Organizational Culture (theory—which comes first: change behaviors or beliefs)

 

Our rank order

·         Defining learning outcomes=#1, Technology (are the two linked?)

·         Least important: recruiting=#5, creating culture, engaging students

·         Interconnection between engaging students and creating a culture

 

Other considerations

·         A problem-solving organization is a single-loop organization; a learning organization is double-loop: it solves problems and also evaluates how it solved them

·         It is not just our end goals but also our process that makes us a learning college

·         Identify our role as broader than gathering data; everyone should be asking “How do we know?”, so this should focus on the research function itself

 

Roles and strategies:

·         more proactive involvement in cross-functional teams; define issues and goals all along; connect calendar obligations with the goals of the college (required student information, external reporting: too much time is spent on this)

·         form partnerships with faculty in defining outcomes

·         need to share information more with others to establish priorities

·         coordinate requests for surveys

·         inform the community of what is going on; this needs to be connected to the decision-making process

·         our changing role should be more proactive on teaching/programmatic issues

·         educating the campus community

·         Is information the least important aspect of decision-making?  Self-interest, what people believe, and what they value take precedence.  We should use information more

·         improve decision-making by making information/data more readily available

·         make information generated for external audiences available and readable for internal audiences

·         Data request form to help prioritize issues; we need to know how to ask questions

·         We need to answer ad hoc requests and also consult with people to see what they really want.  How can we do both with existing staff/resources?

·         New ways to deal with ad hoc requests: an online request form, or make more information available via the internet.  Set up an interface to allow queries of our data.  Spend our time helping others analyze the data (see the query sketch after this list).

·         Focus more on relational databases.

·         Two kinds of information we can provide: general data, and process-control data (a practice from business; we do not have to do this, but perhaps we should be)

·         Our role shifts from providing analyses of data to providing accessible databases that we help people query so they can create their own analyses

·         Measurement system fundamental to organizational culture

·         Create a set of benchmarks that help us see how we are doing compared to others

·         Help program “administrators” to create new research designs and new data collection processes.

·         Use faculty on release time as consultants on how to do “program” research and evaluation
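A minimal sketch of the self-service query idea above, in Python with an in-memory SQLite database. The student_term table, its columns, and the sample counts are hypothetical placeholders, not an actual campus schema; the point is only the shape of a canned, parameterized query a campus user could run without IR help.

    import sqlite3

    # Stand-in for a shared relational database of student records.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE student_term (term TEXT, program TEXT, headcount INTEGER)")
    conn.executemany(
        "INSERT INTO student_term VALUES (?, ?, ?)",
        [("Fall 2000", "Nursing", 212), ("Fall 2000", "Business", 540),
         ("Fall 2001", "Nursing", 198), ("Fall 2001", "Business", 575)],
    )

    def headcount_by_program(term):
        # One canned, parameterized query exposed behind a web form or request page.
        rows = conn.execute(
            "SELECT program, headcount FROM student_term WHERE term = ? ORDER BY program",
            (term,),
        ).fetchall()
        return dict(rows)

    print(headcount_by_program("Fall 2001"))  # {'Business': 575, 'Nursing': 198}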

 

Problems

·         Too much focus on external forces

·         Gap between decision-making and research data

·         IR presence on all governance teams

 

Roles of Research Staff

·         Devil’s advocate: why do we need research people?  Do away with the function and see what we need to bring back.

·         What is the role of research under the instruction paradigm? External reporting, program review, enrollments, retention

·         What is the role of research under the learning paradigm?

·         What strategies can we as researchers share or create to help in building a learning community?

·         2 problems to solve: how can researchers contribute more to the shift from the instruction paradigm to the learning paradigm, and what shifts in our own roles would help make that change?

·         At the institutional level, researchers are taking on tasks that we haven’t done before (while continuing to do what we have always done)

·         Before, data collection was linear: we knew who wanted it, what it was for, and how it would be used; now it is more collaborative, more people are involved, and it is more interactive.  In a learning-centered environment, we are letting go of the data.

·         We are key to change in the institution based on what we provide for decision-making.  Example: a cohort study showing that 50% of entering freshmen do not persist to spring, and only 30% remain by the next fall.  Giving new information can change perceptions.  The numbers alone are not significant, but knowing “why?” might help me make significant changes.  People with different values would react differently to the same statistics.  We need more than one kind of information.

·         We need benchmarks so that our statistics have meaning (“it takes our students 7 years to graduate” doesn’t mean much until we know that the state average is 4 years, so we take almost twice as long); a worked toy example of this arithmetic follows this list

·         Problem: if retention is the goal, just give everyone A’s; learning should be the goal, with retention as the result.  So we need to be careful about what our goals are.

·         Changing our role: we think we are some of the best policy analysts, so we should get more involved.

·         Encourage and reward faculty for doing outcomes assessment research.  Faculty need more help consulting with IR people.  We don’t want to design every survey but would like to review and help others design. 

·         Move to a culture where research occurs throughout the campus (get faculty involved; put a lot of data on the web so individual instructors can draw their own conclusions)

·         How can we contribute to a shift in thinking from “we are doing a great job” to “how can we do better?”  Under the new way of looking at things, graduation rates become important to look at, and we ask how we can improve them.  How can we do better in terms of learning versus in terms of getting more people to come to the school?

·         We need a shift in thinking so that student success becomes everyone’s responsibility; we all own it.

·         What counts as results? Bringing in more students, faculty remaining current, new programs responding to community needs: all part of the instructional paradigm. With learning, all the same questions apply, plus how can we get graduation rates up?

·         Funding by performance measures changes how we look at things (the first thing is to change accounting measures, such as how we count graduation rates).  Marketing is external, not focused on the students we have.  We should share these perceptions as often as possible.

·         Performance-based funding: one of the problems is that it is complex, and the state wants it simple, so all they wanted to look at was graduation rates. We couldn’t “benchmark” transfer data consistently across the whole US. Graduation rates should point to other measures of success we could identify.

·         Question should be: what are our students’ goals, and how are they meeting them?  They do change their goals, but if 20% of our students say they want to graduate and 20% graduate, then we are doing well.  See information within context.

·         The most important context is comparison to what we have done before, not comparison to other institutions, because situations differ between community colleges.
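A worked toy example (Python) of the persistence and benchmark arithmetic mentioned above. The cohort size and counts are invented for illustration only; the percentages and years-to-degree figures mirror the ones quoted in the discussion.

    # Hypothetical entering cohort; only the percentages match the discussion above.
    entering_fall = 1000
    enrolled_spring = 500        # 50% persist from fall to spring
    enrolled_next_fall = 300     # only 30% remain by the next fall

    print(f"Fall-to-spring persistence: {enrolled_spring / entering_fall:.0%}")    # 50%
    print(f"Fall-to-fall persistence:   {enrolled_next_fall / entering_fall:.0%}") # 30%

    # The raw figure "7 years to graduate" only gains meaning against a benchmark.
    our_years_to_degree = 7
    state_average_years = 4
    print(f"Time to degree vs. state average: {our_years_to_degree / state_average_years:.1f}x")  # 1.8x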

 

Research Staff

Monday, June 25, 2001

 

Facilitator: Bob Barr

Recorder: Chris Barkley

 

More Roles and Strategies

 

Key Question: What is the difference between assessment, measurement, and evaluation, and what is our role in determining these?

·         Evaluation requires analysis and results in action.

·         We use measurement as part of assessment to focus on outcomes; it is mechanical.

·         Assessment is a review of data, most similar to evaluation.

·         More negative connotations to assessment than to evaluation.

·         Can we measure personality traits, like motivation?

·         Assessment is feedback opportunity, whereas evaluation can lead to a grade.

·         Assessment is a process, a way of using information, whereas evaluation is a discrete question of what we will consider.

·         Assessment of the student is for the purpose of the student and teacher discovering where the student is; a second level is assessment of student learning for the sake of feedback to the instructor or to the institution.

·         FTE is a measurement, not an evaluation

 

Redefined roles and strategies of IR within the learning college:

·         Being able to measure learning outcomes is one way to assess institutional effectiveness. One role of IR is to help us answer two questions: are we measuring well what we are teaching, and are we teaching what we should at the institutional level? Make the connection between measuring outcomes for students and for the institution. We need to determine the causes or predictors of student success.

·         How do we frame the questions?  We assume the same structure (classrooms and faculty interacting face-to-face with students) but should we? This leads us to determine only what faculty members can do in the classroom, but we should acknowledge that this isn’t the only place where we can affect student learning.

·         In IR we need to look at the whole student, not just specific competencies in a particular class. Institutional effectiveness can now be determined across the institution and over time. Each unit is to determine measurable outcomes along with processes. Assess units to identify their outcomes and ways to measure those outcomes, to assist in improving institutional effectiveness.

·         The IR office does not provide answers but gets groups to think about the issue again; it should report what the score is but not say whether this is good or bad (one would need to know the previous score and the subsequent score)

·         What are outcomes? (Not hiring another person.)  They should be related to the purpose of the unit itself. Define outcomes that are relevant and essential to our purpose (unlike environmental issues, which we need to address but which are not part of student success).

·         Our outcomes are not just the end product (like the food in the fast-food-company metaphor) but also human-resource outcomes (such as employee satisfaction), another part of the goal even though not connected to sales.

·         Should we be rolling all outcome measures into a single effectiveness score? See effectiveness as a whole, but diminishing returns suggest that not all improvements will significantly increase our effectiveness; plan strategically where to put our efforts.

·         We are either satisfying external mandates or evaluating institutional efficiencies.  What remains constant is our role and mission, so we should be putting more emphasis on this; but accountability measures may change often, and this requires IR to present different data.

·         Get very efficient at routine external reporting so that you can free up time to work on internal effectiveness. Use database tracking systems to predict standard information like fall enrollments (see the projection sketch after this list). Use data warehouses.

·         Collect SSN on survey data

·         Get the state IR association to stop the state from asking for data they don’t use.  Consortia can be effective in effecting change at the state level.

·         We should avoid redundancy across campuses.  We should avoid wasting money.

·         Facilitate reducing the fear of disaggregating college success data (and tell people when they are going too far).

·         Put definitions (if needed), a bulleted list of findings, and some implications on page 1, then back it up with pages of tables, spreadsheets, etc. (a one-page executive summary)

·         Make sure things get looked at by the way they are presented.

·         Is there a process or group of people who can identify what to look at in order to lead to action? What if findings suggest a problem but no one is recognizing it or doing anything about it? How do we make sure these kinds of things related to student success become important in department goals (they aren’t asking the right questions)?

·         Warts: for example, financial aid, where student satisfaction is going down and there are fewer applications.

·         Sometimes the most effective way to have an impact is to revert to the old paradigm of dollars per student FTE rather than focusing on student success.

·         Test data on a small group.  Create a “data screening” committee (for the institutional studies database, asking whether there are problems in the way we are analyzing the data).

·         Tie work of IR to institutional strategic planning.

·         We seem to be doing better in institutional effectiveness evaluation than in outcomes measures.

·         We should incorporate core objectives and outcomes assessment into all Course Outlines.  We need institutional support.  (Got some funding from the county.)

·         When we do evaluation, make it risk-free for faculty, not connected at all to faculty evaluations. It is best for it to be across the campus, not just volunteers.

·         Have faculty report learning assessment projects to the Board

·         Through participation on the AACC research committee, we are trying to get them to set assessment of learning outcomes as a goal.

·         Help bring to campus a model of questioning, doing a study, and using the study to improve success.

·         Our role will change from responding to ad hoc requests for specific data to providing a warehouse of data for larger studies.

·         What are we doing on our campuses in response to these roles?
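A minimal sketch (Python) of the fall-enrollment projection idea mentioned in the list above: extend the average year-over-year change in headcounts pulled from a tracking database. The historical figures below are hypothetical placeholders, not actual campus data, and a real projection would use a richer model.

    # Hypothetical fall headcounts pulled from a tracking database.
    fall_headcounts = {1997: 8200, 1998: 8450, 1999: 8700, 2000: 8950}

    def project_next_fall(history):
        # Project the next fall by extending the average year-over-year change.
        years = sorted(history)
        changes = [history[b] - history[a] for a, b in zip(years, years[1:])]
        return history[years[-1]] + sum(changes) / len(changes)

    print(round(project_next_fall(fall_headcounts)))  # 9200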