Higher Learning Commission

2015 Collection of Papers

Self-Rating: A Case Study in Indirect Assessment Measures

John Patterson

There are many ways to teach and learn, and both are best evaluated from the perspective of the end user: the individual student. One of the most important responsibilities of any teacher is to identify optimal teaching approaches for each student and then to use multiple assessment measures to demonstrate successful learning. Purposeful use of both direct and indirect assessment measures supports the target outcome of aligning assignments, course objectives and individual student goals.

This case study demonstrates the successful implementation of an end-of-course, self-rating indirect assessment exercise in which students rate their ability to successfully complete, or perform, each course objective after finishing the course. The self-rating uses a Likert scale, and the responses are compiled to represent student input for an entire course. The compiled data can then be compared across courses and across professors. This indirect assessment measure not only complements the direct assessment measures used but also provides greater insight into the overall student experience.

Self-Rating: A Case Study in Indirect Assessment Measures

Assessment of student learning has evolved over the years in higher education. The most common method of assessing student learning has been requiring students to take an examination at the end of a course. If the student completed the examination with a passing score, he or she was deemed to have learned the course material. Incorporating both direct and indirect assessment measures into overall student learning assessment has become a more preferred approach because it maximizes the ability to align assignments, course objectives and individual student goals.

Case Study

The self-rating exercise used in every course in Bellevue University’s Master of Science, Organizational Performance (MSOP) Program is an indirect assessment measure. After completing the course, students self-rate their ability to successfully complete, or perform, each course objective.

The Exercise

After completing a course, students are given an assignment to self-rate their ability to better accomplish each course objective. For example, the third course objective for the Coaching and Mentoring for High Performance course (MSOP 610) is "Develop methods that increase the quality of feedback in organizations." For this course objective, the assignment question would read:

  • To what degree are you better able to develop methods that increase the quality of feedback in organizations?

–      1. Much More

–      2. More

–      3. No Change

–      4. Less

–      5. Much Less

Students answer this question for every course objective, and data is compiled on students’ collective answers. To continue the example, the initial findings for the MSOP 610 course objective commonly look like this:

  • MSOP 610, Course Objective #3

–      1. Much More  50%

–      2. More  45%

–      3. No Change  5%

–      4. Less  0%

–      5. Much Less  0%
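The compilation step described above can be sketched in code. The following is a minimal illustration, not part of the original exercise; the function name and the sample responses are hypothetical, chosen to reproduce the percentages shown for MSOP 610, Course Objective #3.

```python
from collections import Counter

# Likert scale used in the self-rating exercise (1 = Much More ... 5 = Much Less)
SCALE = {1: "Much More", 2: "More", 3: "No Change", 4: "Less", 5: "Much Less"}

def compile_ratings(responses):
    """Summarize one course objective's self-ratings as whole percentages."""
    counts = Counter(responses)
    total = len(responses)
    return {SCALE[v]: round(100 * counts.get(v, 0) / total) for v in SCALE}

# Hypothetical cohort of 20 students: 10 "Much More", 9 "More", 1 "No Change"
responses = [1] * 10 + [2] * 9 + [3]
print(compile_ratings(responses))
# {'Much More': 50, 'More': 45, 'No Change': 5, 'Less': 0, 'Much Less': 0}
```

Answering the same question for every course objective yields one such summary per objective, which is the form of data used in the tables below.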

The Results

Based on the example above, the data shows that after completing the course, 95 percent of the students indicated they were either "much more" or "more" able to develop methods that increase the quality of feedback in organizations. In this context, that percentage is very encouraging, but the remaining 5 percent ("No Change") points to learning opportunities: improving the assignments aimed specifically at that course objective, improving the readings and research aimed specifically at that course objective, and improving our teaching methods toward that course objective.

Examples in Using This Data

Example 1

In this example, several professors teach the same course, and all of them receive lower scores on the same course objective (Objective #2). Data on the students' responses are shown in Table 1:

Table 1

MSOP 610   Obj. #1   Obj. #2   Obj. #3   Obj. #4
Jane       96%       78%       97%       92%
Tom        92%       68%       95%       91%
Ed         93%       80%       92%       90%
Sally      96%       72%       99%       88%
Bill       90%       75%       94%       89%


Learning opportunities or administrative adjustments that might be made based on this data include ruling out professors as a problem area, rewriting the assignments aimed at the course objective and updating the assigned reading and research aimed at that course objective.

Example 2

In the second example, one professor receives lower scores than his or her peers on all course objectives. Data on the students' responses are shown in Table 2:

Table 2

MSOP 610   Obj. #1   Obj. #2   Obj. #3   Obj. #4
Jane       96%       92%       94%       89%
Tom        72%       70%       68%       74%
Ed         92%       93%       90%       91%
Sally      89%       97%       93%       93%
Bill       93%       99%       93%       89%


Learning opportunities or administrative adjustments that might be made based on this data include mentoring or coaching the professor to perform better or removing the professor entirely.
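The two patterns above can be detected programmatically. The sketch below is illustrative rather than part of the program's actual process; the function name and the 85 percent threshold are assumptions chosen for the example, using the data from Tables 1 and 2.

```python
def flag_low_scores(scores, threshold=85):
    """Flag objectives that are low across all professors (Example 1 pattern)
    and professors who are low across all objectives (Example 2 pattern).

    scores: dict mapping professor name -> list of percentage scores per objective.
    threshold: hypothetical cutoff below which a score counts as low.
    """
    professors = list(scores)
    n_objectives = len(next(iter(scores.values())))

    # Example 1 pattern: every professor scores low on the same objective
    low_objectives = [
        i + 1 for i in range(n_objectives)
        if all(scores[p][i] < threshold for p in professors)
    ]
    # Example 2 pattern: one professor scores low on every objective
    low_professors = [p for p in professors if all(s < threshold for s in scores[p])]
    return low_objectives, low_professors

# Data from Table 1 (MSOP 610): Objective #2 is low for everyone
table1 = {"Jane": [96, 78, 97, 92], "Tom": [92, 68, 95, 91],
          "Ed": [93, 80, 92, 90], "Sally": [96, 72, 99, 88],
          "Bill": [90, 75, 94, 89]}
# Data from Table 2 (MSOP 610): Tom is low on every objective
table2 = {"Jane": [96, 92, 94, 89], "Tom": [72, 70, 68, 74],
          "Ed": [92, 93, 90, 91], "Sally": [89, 97, 93, 93],
          "Bill": [93, 99, 93, 89]}
print(flag_low_scores(table1))  # ([2], [])
print(flag_low_scores(table2))  # ([], ['Tom'])
```

The same comparison extends naturally to the course level: treating each course's objective scores as a row, as in Table 3, flags a course that is low across all of its objectives.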

Example 3

In this example, one course consistently receives lower scores on all course objectives than the other courses in the program over time. Data on the students' responses are shown in Table 3:

Table 3

           Obj. #1   Obj. #2   Obj. #3   Obj. #4
MSOP 602   96%       92%       94%       89%
MSOP 606   72%       70%       68%       74%
MSOP 610   92%       93%       90%       91%
MSOP 614   89%       97%       93%       93%
MSOP 618   93%       99%       93%       89%


Learning opportunities or administrative adjustments that might be made based on this data include rewriting all assignments, updating all assigned readings and research, getting different professors to teach the course or eliminating the course entirely.

The Value of the Assessment

This assessment has generated three distinct value-added outcomes. First, it provides data and insight that other assessment methods do not. Second, by complementing other assessment methods, it makes the overall assessment stronger and more valid. Finally, it links students' experiences and perceptions more closely to their performance levels.

Conclusion

The use of this assessment has prompted the following performance improvements:

  • Percentage scores in subsequent cohort offerings have increased.
  • Professors’ evaluations have improved.
  • Students’ performance has improved.
  • Assignments, course objectives and individual student goals have become more aligned.

About the Author

John Patterson is Professor, College of Business, at Bellevue University in Bellevue, Nebraska.

Copyright © 2019 - Higher Learning Commission

NOTE: The papers included in this collection offer the viewpoints of their authors. HLC recommends them for study and for the advice they contain, but they do not represent official HLC directions, rules or policies.
