Higher Learning Commission

Nonacademic Assessment: Finding the “Start Line”

Tricia A. Kujawa and Lesley Frederick

Background

The Higher Learning Commission’s (HLC) Criterion 4 requires that “the institution demonstrates responsibility for the quality of its educational programs, learning environments and support services, and it evaluates their effectiveness for student learning through processes designed to promote continuous improvement” (Higher Learning Commission 2015, 10). In response, institutions of higher learning strive to maintain active assessment processes that (a) continuously examine their programs and (b) inform needed improvement. Institutions invest time and personnel in ensuring these assessment processes exist for academic programming, but the Criterion is clear that this expectation is not limited to faculty and an institution’s curriculum. Quality programming in the academic support units is also an expectation.

As an Academic Quality Improvement Program (AQIP) institution, Lincoln Land Community College’s (LLCC) strategic goals and initiatives call for a focus on operational excellence and continuous quality improvement. This commitment to continuous quality improvement requires a shared dedication to quality. As part of this commitment, in 2007 all LLCC departments developed mission statements and goals to guide improvement of programs, operations and services. Thus the foundational documents for an effective assessment process have been in place for almost a decade. Yet when writing the college’s initial Systems Portfolio, it became clear that many departments in the administrative and academic support units were haphazard in their assessment efforts. The feedback on the college’s 2013 Systems Portfolio reflected this status.

The Systems Appraisal team commented on the state of assessment in the academic support units and the need to examine current processes: “To move to a more proactive service approach and better empower staff to make positive changes the College might consider looking at specific processing metrics (labor time, steps, errors, delays, handoffs, processing time), making the information available to staff, and reviewing it regularly to determine progress and needed actions” (AQIP 2013, 24). LLCC, cognizant of this gap in services, created an Institutional Effectiveness (IE) department as part of a divisional restructure the following year. IE was assigned oversight of nonacademic assessment and began to develop structures to support the process. The initial effort included an institutional effectiveness inventory.

The Instrument and Findings

Each Student Services program completed an institutional effectiveness (IE) inventory during the 2014–2015 academic year. Most of the inventories were completed via one-on-one interviews with the program’s director. The instrument, which was a modification of Ronco and Brown’s (2002) Institutional Effectiveness Checklist for Administrative and Academic Support Units, consisted of three parts: the existence of foundational documents, current use of evaluation measures and other information related to improvements and resources.

Foundational Documents

The foundational pieces of assessment included three items: a formal statement of purpose (i.e., mission statement) for the program, explicit goals that support achievement of the mission and procedures to evaluate the extent to which the mission is being achieved. At LLCC, all programs reported having mission statements as well as stated goals that supported the program’s purpose. This was an expected finding because the mission statement and goals are part of the college’s annual planning and budgeting process. Some areas did note that their mission statements and goals were in need of revision.

Half of the programs reported having procedures to evaluate the extent to which their goals were being achieved. Of these programs, two shared that they would benefit from refining their approach in a few areas. At this point in the inventory, leaders also indicated whether their data collection was guided by written outcome statements. Just one program reported specific, measurable outcomes that mapped directly to one of its stated department goals. This emerged as the largest and most immediate need for programs to address.

Evaluation Measures

A program can utilize a variety of measurements to determine whether it is achieving its stated outcomes. Eleven such measurements were presented in the second section of the inventory (see Figure 1 for a complete listing). For each, the program leader categorized the measurement’s use as one of three statuses: A = currently being used; B = not currently being used but interested in using; and C = not applicable. For six of the measures, the program leader was also asked to provide the data associated with any item rated “A” (i.e., currently being used). The responses provided further evidence of whether the leader truly understood each measure, as well as the maturity of the leader’s current process. Later, these responses became the basis for writing the program’s initial assessment plan.

As noted in Figure 1, the most common evaluation or assessment measure at LLCC was simple awareness of volume of activity (Currently Use = 100%). Here, leaders shared examples that commonly involved counting the number of students served or the number of students who participated in a department’s programming. Measures of efficiency were the least used (Currently Use = 11%). This finding was not a surprise because efficiency measures are more commonplace in the service industry than in higher education. The 100% stacked bar chart (see Figure 1) was an effective way to convey the measurement findings to interested audiences.

[Figure 1. Use of evaluation measures by LLCC Student Services programs, shown as a 100% stacked bar chart.]

This measurement section helped identify growth areas for each program’s assessment efforts. Specifically, the B category (B = not currently being used but interested in using) identified areas in which there was at least some interest for expanding assessment efforts. In LLCC’s Student Services unit, the largest growth areas included developing efficiency measures (Interested = 33%), establishing comparisons with peer institutions (Interested = 33%) and using internal benchmarks (Interested = 67%).
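To illustrate how inventory responses of this kind can be summarized, the following minimal sketch tabulates hypothetical A/B/C ratings by measure and renders them as a 100% stacked bar chart similar in form to Figure 1. It assumes pandas and matplotlib are available; the program names, measures and ratings are illustrative placeholders, not LLCC’s actual data or tooling.

# A minimal sketch (not LLCC's actual process) of tabulating inventory
# responses and drawing a 100% stacked bar chart. All data are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

# One row per (program, measure) rating: A = currently being used,
# B = not currently used but interested, C = not applicable.
responses = pd.DataFrame(
    [
        ("Program 1", "Volume of activity", "A"),
        ("Program 1", "Efficiency measures", "B"),
        ("Program 1", "Internal benchmarks", "B"),
        ("Program 2", "Volume of activity", "A"),
        ("Program 2", "Efficiency measures", "C"),
        ("Program 2", "Internal benchmarks", "A"),
        ("Program 3", "Volume of activity", "A"),
        ("Program 3", "Efficiency measures", "C"),
        ("Program 3", "Internal benchmarks", "B"),
    ],
    columns=["program", "measure", "status"],
)

# Cross-tabulate counts of each status by measure, then convert each row to
# percentages so every bar totals 100% regardless of how many programs rated it.
counts = pd.crosstab(responses["measure"], responses["status"])
percentages = counts.div(counts.sum(axis=1), axis=0) * 100

# Horizontal 100% stacked bars, one per evaluation measure.
ax = percentages.plot(kind="barh", stacked=True, figsize=(8, 4))
ax.set_xlabel("Percentage of programs")
ax.set_ylabel("Evaluation measure")
ax.legend(title="Status (A = use, B = interested, C = n/a)")
plt.tight_layout()
plt.savefig("figure1_stacked_bar.png")

Converting counts to row percentages keeps each bar at 100%, so measures rated by different numbers of programs can still be compared at a glance.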

Other Information

Part three of the inventory asked departments to document any service or operational improvement over the past year that was associated with a measure listed in Figure 1. Here, many departments reported improvements, but most were not directly tied to the assessment results they presented. This suggested that improvements were not necessarily informed by their assessment findings. Department leaders were also asked what, if any, resources were needed to develop program outcomes, implement assessment measures and improve service quality or effectiveness. These identified resources were grouped into themes and shared with the vice president of Student Services for consideration in the next budgeting and planning cycle.

Implications for Practice

Ronco and Brown (2002) had the departments categorize measurement activities as one of three statuses: A = currently being used; B = not currently being used but interested in using; and C = not applicable. After completing a few of these interviews, it became obvious that four categories were needed to effectively capture the current state of assessment efforts. Future use of the inventory will include breaking category A into two statuses: “currently using and satisfied with our approach” and “currently using but our approach is in need of refinement.” This will be helpful because many of the leaders were not confident in their current use of some measures, and it was difficult to capture these improvement opportunities with Ronco and Brown’s (2002) three categories.
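As a rough sketch of the revised rating scale described above, the four statuses could be encoded as follows; the labels paraphrase the text, and the names are our own rather than part of Ronco and Brown’s instrument or LLCC’s published materials.

# A minimal sketch of the revised four-status rating scale described above.
# Labels paraphrase the text; the enum and its names are illustrative only.
from enum import Enum

class MeasureStatus(Enum):
    USING_SATISFIED = "Currently using and satisfied with our approach"
    USING_NEEDS_REFINEMENT = "Currently using, but our approach needs refinement"
    INTERESTED = "Not currently being used, but interested in using"
    NOT_APPLICABLE = "Not applicable"

# Example: recording one leader's rating for a single measure.
rating = MeasureStatus.USING_NEEDS_REFINEMENT
print(rating.name, "->", rating.value)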

The college found that the institutional effectiveness inventory is best administered face to face. In practice, it functions similarly to a structured interview protocol. The inventory was detailed enough to be completed individually by the program leader with the responses later forwarded to IE for review. But the conversations naturally emerging from the interviews added to the richness of the data. The interview format allowed for follow-up questions, offering the interviewer the opportunity to informally gauge each department’s general comfort level with structured assessment processes. In the end, not as much was known about the departments that chose to complete the instrument on their own.

Finally, some program leaders did not understand all the measurements listed in the instrument. They did not know, for example, how a peer benchmark is created and used. This presented a barrier to responding to the inventory items, and some were hesitant to ask for clarification. To counter this, examples of each measurement were provided during the interview. Providing contextual cues specific to the unit in fact stimulated ideas about possible measurements for future assessment efforts.

Conclusion

As Magruder, McManis and Young (1997) found in their work at Truman State, successful assessment is not just a collection of techniques tied to outcomes. Rather, it is a cultural issue that affects how the institution defines its responsibility to students. With a “start line” established, the work needed to transform the information collected via the inventories into a successful assessment culture was defined. The institutional effectiveness inventory provided a picture of both current practice (e.g., status of the foundational documents, what assessment is currently taking place) and a road map for moving the unit forward (e.g., what assessment departments would like to conduct, the needed resources and each department’s perceived challenges). The IE department has been using this information to design professional development, institutional structures and resources for the academic support units. From this perspective, the instrument provided the basis for (a) creating outcomes-based assessment plans and (b) documenting the results along with any associated improvements for each nonacademic department.

References

Academic Quality Improvement Program. 2013. Systems appraisal feedback report in response to the Systems Portfolio of Lincoln Land Community College. Chicago: The Higher Learning Commission. http://www.llcc.edu/wp-content/uploads/2014/10/Systems-Appraisal-Feedback-Report-2013.pdf.

Higher Learning Commission. 2015. Higher Learning Commission 2015 resource guide. Chicago: The Higher Learning Commission. http://download.hlcommission.org/ResourceGuide_2015_INF.pdf.

Magruder, J., M. A. McManis, and C. C. Young. 1997. The right idea at the right time: Development of a transformational assessment culture. New Directions for Higher Education 100: 17–29.

Ronco, S. L., and S. G. Brown. 2002. Finding the “start line” with an institutional effectiveness inventory. AIR Professional File 84: 1–12.

 

About the Authors

Tricia A. Kujawa is Director of Institutional Effectiveness and Lesley Frederick is Vice President of Student Services at Lincoln Land Community College in Springfield, Illinois.


Copyright © 2017 - Higher Learning Commission

NOTE: The papers included in this collection offer the viewpoints of their authors. HLC recommends them for study and for the advice they contain, but they do not represent official HLC directions, rules or policies.

