Higher Learning Commission

Assessment Made Meaningful: Removing Obstacles to Participation

Gina Kamwithi

No matter where one falls on the philosophical divide over assessment, the concept of evidence-based approaches to teaching and learning will not go away anytime soon. Despite the hopeful yearnings of many faculty members, assessment has proven to be a non-fad. Or, if it is a fad, it is one of the most enduring fads in higher education (Gilbert 2010). Ignoring this reality comes close to the shortsightedness displayed in the following urban legend, which has traveled the Internet for more than a decade. The story is purported to be a transcript of an actual radio transmission between a U.S. naval ship and Canadian authorities off the coast of Newfoundland in October 1995:

US Ship: Please divert your course 0.5 degrees to the south to avoid a collision.
CND Reply: Recommend you divert your course 15 degrees to the south to avoid a collision.
US Ship: This is the Captain of a US Navy Ship. I say again, divert your course.
CND Reply: No. I say again, divert YOUR course!
CND Reply: This is a lighthouse. It’s your call. [United States Navy 2009]

In terms of higher education, assessment is “a lighthouse.” This paper explores obstacles to change in relation to assessment and the concrete actions that can be taken to move toward the goal of making assessment meaningful.

As a professor attempting to teach students the concepts behind behavior change, I often asked, “How do you think SeaWorld selects the orcas that are the central attraction at its parks? Do they sail out to deep water, extend a long rod over the side of the boat and wait for a rare orca to leap over it? Obviously, no, they do not.” Despite the many ethical issues surrounding SeaWorld and its use of operant conditioning on wild animals, the illustration is useful. The technique is the basis for all foundational learning. Over many millennia, parents have used operant conditioning to teach their children to walk. Think of your own children. Their first steps were cause for celebration: clapping, hugging, kissing and smiles, all present and all precious to a little one learning something new. Operant conditioning consists of the positive reinforcement of gradual approximations of a desired behavior. Why is it, then, that we, administrators who often still teach in the classroom, find it nearly impossible to teach faculty members the fundamentals of assessment? Time and again, I have seen assessment processes that are analogous to a parent holding a stopwatch in front of a toddler, expecting a four-minute mile and punishing the child for wobbling and falling. Following are three components I have consistently seen in the institutional deployment of assessment that contribute to a stalled or failed assessment program:
  • Confusing jargon
  • Unrealistic expectations
  • Difficulty in accountability

Confusing Jargon

Assessment is complex. Knowing what students have truly learned is not easy and can never be considered an exact science. Why, then, do we muddle the process further with jumbled jargon? Review your assessment material: are you using different terminology for the same concept in different places (your website, catalog, training material, or your data-gathering software)?

At North Central State College (NCSC), assessment meetings have dissolved into utter confusion because of the inconsistent application of naming conventions. What eventually became known as “college-wide outcomes” were variously called “core learning outcomes,” “college outcomes,” “student outcomes,” and “college core outcomes.” To confuse matters more, the college’s general education curriculum was called the “core.” In addition, for several years program outcomes were known as “student learning outcomes” in various settings.   

One of the first steps in re-invigorating the college’s assessment program was the creation of an intuitive set of naming conventions and strict adherence to their usage across all media, discussion, and training. The system was simplified into two data-gathering objectives: “College-Wide Outcomes,” assessed with AAC&U VALUE Rubrics, and “Program Outcomes,” assessed with a variety of department-approved rubrics. This simplicity helped us focus on the important issues, chief among them teaching the fundamentals of assessment, which leads to the second point of failure for many assessment programs.

Unrealistic Expectations

The analogy used earlier, of a parent facing a toddler with a stopwatch, is an apt description of how many institutions approach assessment. We often forget that until a complex concept is deeply understood, widespread participation in activities surrounding that concept will rarely be achieved. Complexity can be explored only when it is built on a solid understanding of fundamentals. To play the cello, for example, a cellist must practice the fundamentals endlessly, so that the nuances of bow pressure, speed, and fluid movement eventually become so embedded in muscle memory that artistry of expression can flow without distracting thoughts such as “How is my bow pressure? Am I moving the bow too fast?” This is where simplicity in the design of the assessment program, and positive reinforcement of approximations of the desired behavior, are critical. Faculty members are content experts; only a rare few have spent time on the scholarship of teaching and learning as it relates to assessment. Thus, faculty members must be given space and time to practice the fundamentals of assessment. Great assessment programs are years in the making, not months.

Difficulty in Accountability

The last point of failure is lack of accountability. This is tricky. If you are a first-line supervisor of faculty members and have experienced both the confusion of jumbled jargon and your own frustration with the assessment team’s unrealistic expectations, it may be difficult for you to hold faculty members accountable for assessment activities. To hold anyone accountable, several conditions must exist: you must believe that the required activity is valid, that the target is stable, and that the outcomes are meaningful. Following are the concrete steps NCSC took to address the three points of failure discussed above and achieve widespread participation.

Action Steps


At NCSC, we agreed on one term or phrase to describe the six outcomes that our faculty members believed all graduates should achieve. We use the phrase “College-Wide Outcomes” consistently and do not use any similar phrase for other activities. We also agreed on one phrase, “Program Outcomes,” to describe the outcomes for any given program. Again, we use this phrase consistently and do not use any similar naming convention or phrase for other activities. In addition, we ensure that no material—handbooks, the website, white papers, yearly training, and so on—refers to these concepts with an incorrect naming convention or phrase.


We focused on one goal for one academic year. We placed a moratorium on program outcome reports for two years while we practiced the deployment and use of college-wide outcomes. The goal was 100 percent faculty participation, which was eventually achieved. All faculty members were required to deploy one assessment, using one VALUE Rubric, within the learning management system during the fall 2013 semester. The second year, faculty members were required to deploy two assessments each semester, using two VALUE Rubrics. By the third year, every time a faculty member taught a course in which a VALUE Rubric was indicated for a college-wide outcome on the syllabus, he or she was required to deploy the assessment. In addition, we conducted in-service meetings where faculty members brought assignments, gathered in groups under posters of the outcomes, and discussed how they interpreted each component of the VALUE Rubric related to that outcome. In essence, we created an atmosphere in which learning could happen. We gave faculty members time and space to practice the fundamentals of college-wide assessment and made the software used to gather the data as easy to use as possible.


Accountability is inextricably linked to good design. Just as an ethical salesperson cannot bring him- or herself to convince a buyer to purchase a faulty product, no supervisor should be forced to convince faculty members to participate in a poorly constructed assessment process. This does not mean the process must be perfect; a perfect process is most likely unattainable. As administrators who manage assessment activities, however, we must not indulge in rationalization or console ourselves with the fallacy that lack of participation is always due to faculty stubbornness. Sometimes it is due to our own lack of attention to detail or an unwillingness to make the process “easy” for faculty members. If we have made the process simple, clear, focused and doable, accountability will be much easier to sustain.


Ultimately, assessment in higher education can be the “lighthouse” we use for guidance, or we can ignore it and find we have foundered on the shores of accountability.

Note: This paper is one in a series of brief papers exploring the components of successful assessment deployment in higher education. An earlier paper, “Facing Our Worst Fear: Assessment, Success Funding and Selling Out,” which appeared in the Higher Learning Commission’s 2015 Collection of Papers on Self-Study and Institutional Improvement, addresses the philosophical battles in relation to assessment.


References

Association of American Colleges and Universities. n.d. VALUE Rubrics. http://www.aacu.org/value-rubrics.

Baker, F., and A. Holm. 2004. Engaging faculty and students in classroom learning assessment. New Directions for Community Colleges 2004 (126): 29–42.

Gilbert, G. 2010. Making faculty count in higher education assessment. Academe 96 (5): 25–27.

Hutchings, P. 2010. Opening doors to faculty involvement in assessment. Champaign, IL: National Institute for Learning Outcomes Assessment.

Kamwithi, G. 2015. Facing our worst fear: Assessment, success funding and selling out. In A collection of papers on self-study and institutional improvement, 2015. Chicago: The Higher Learning Commission. http://cop.hlcommission.org/Assessment/kamwithi2015.html.

Klein, S. 2010. The Lumina Longitudinal Study: Summary of procedures and findings. New York: Council for Aid to Education. http://cae.org/images/uploads/pdf/12_CLA_Lumina_Longitudinal_Study_Summary_Findings.pdf.

Kramer, P. I. 2009. The art of making assessment anti-venom: Injecting assessment in small doses to create a faculty culture of assessment. Assessment Update 21 (6): 8–10.

Schlitz, S. A., M. O’Connor, Y. Pang, D. Stryker, S. Markell, E. Krupp, C. Byers, S. D. Jones, and A. K. Redfern. 2009. Developing a culture of assessment through a faculty learning community: A case study. International Journal of Teaching and Learning in Higher Education 21 (1): 133–147.
United States Navy. 2009. The lighthouse joke. http://www.navy.mil/navydata/nav_legacy.asp?id=174.


About the Author

Gina Kamwithi is Academic Services Director at North Central State College in Mansfield, Ohio.

Copyright © 2017 - Higher Learning Commission

NOTE: The papers included in this collection offer the viewpoints of their authors. HLC recommends them for study and for the advice they contain, but they do not represent official HLC directions, rules or policies.
