Higher Learning Commission

2015 Collection of Papers

Aligning Assessment: Developing Performance Tasks to Assess GE Outcomes

Joan Hawthorne, Melissa Gjellstad and Ryan Zerr

Assessing learning outcomes for general education programs is a perennial challenge for institutions striving to take such programs seriously. General education (GE) courses are typically completed by lower-division students, but learning related to program outcomes is intended to continue through courses in the major and other elements of the undergraduate academic experience. So how and when should the assessment occur? If the aim is to evaluate outcome-level performance, then one significant challenge is the need to ensure that students nearing graduation generate appropriate work products that demonstrate the outcomes in question. Students must be motivated to take the work seriously so that the resulting products accurately demonstrate their learning (Banta and Palomba 2015). There is also a need to manage the logistics of collecting work from seniors across multiple majors. And this still leaves the matter of how to achieve consistency, both in the nature of the students’ work and in the way it is assessed. Even though scoring can be done with a common rubric, faculty members may experience difficulties when reading and assessing a variety of work products from fields dramatically removed from their own areas of expertise. But if only faculty members in the major score student work, it is difficult to ensure trust in the consistency of definitions and standards.

At the University of North Dakota, our earliest efforts to assess general education learning outcomes relied on the collection of artifacts from students enrolled in capstone courses taught within the major but calibrated to the university’s GE program. Although this generated variety in student work products, the approach functioned reasonably well in our initial efforts, which focused on learning outcomes that are core to most academic disciplines, such as critical thinking, information literacy, and written communication. However, other intended outcomes, like diversity, were addressed in a smaller percentage of the capstones, with the result that campus assessment activities relied on work completed in lower-division GE courses. The rationale for this seemed obvious: we wanted to avoid skewing the sample toward students in programs in which diversity was a core focus of the major and thus an element reasonable to expect as part of work completed in a capstone project. But the decision did not come without a cost; we lost the ability to collect and score work products demonstrating outcome-level performance, because such lower-division GE courses are typically taken early in students’ programs of study.

Our assessment cycle next required us to focus on two GE outcomes that would prove similarly difficult to assess—quantitative reasoning (QR) and oral communication (OC). QR presented a challenge similar to that faced in assessing achievement of the diversity outcome: we wanted to look at the learning senior students could demonstrate, but seniors in English or fine arts capstones were very unlikely to produce work products that could be assessed with our QR rubric. And seniors in physics or math capstones were very unlikely to produce work products readable by faculty members outside the STEM fields. With OC, the difficulty anticipated was in artifact collection. Although many seniors are expected to make oral presentations in their capstone courses, those presentations are rarely captured in any form that would allow them to be viewed and scored at a later date by multiple faculty members. Yet, if we chose not to collect work products from courses, how could we ensure that the products we assessed represented students’ “best work”? And how could we collect work products demonstrating outcomes of interest (in this case, oral communication and quantitative reasoning) from a random selection of senior students?

Performance Tasks as an Assessment Strategy

Our response to these questions was to reconceptualize our GE outcomes assessment process. We moved away from student work completed in courses and began using work generated by senior students in response to an interdisciplinary task independent of the course and major. Thus, all students would complete the same task—essentially a “performance task”—regardless of whether their degree program was in fine arts, finance, or physics. The performance task concept, familiar to users of the Collegiate Learning Assessment (CLA) or participants in the CLA Academy, involves real-world scenarios in which an issue of some sort is identified and students are asked to assume a role through which they articulate a solution or response to the issue (Council for Aid to Education 2014).

Performance tasks provided us with an alternative approach to the collection of student work products. Whereas artifacts generated in capstone courses can reliably represent students’ “best work” because of the motivational impact of grades, performance tasks must be designed in ways that make them intrinsically engaging and, therefore, also capable of bringing out the best from students. Because many faculty members from our institution had previously participated in a CLA Academy training during which they had learned how to develop and incorporate performance tasks into their teaching, the concept of designing and deploying a performance task was not unfamiliar. However, our use of performance tasks for institution-wide assessment of GE outcomes would be new to campus.

We developed our plan for using performance tasks in partnership with capstone faculty members who were already invested in GE assessment. Instead of submitting student work products, capstone faculty members would encourage their senior students to participate in an out-of-class assessment process that would occur during a newly established Assessment Week (held in February). Faculty members would recruit students and motivate participation using whatever approach was most appropriate in their own context. Some chose to offer bonus points. Some “required” students to show up for the assessment (and asked us to track attendance). Some used moral suasion, made possible because senior students had often formed close ties with their major faculty and proved willing to participate when the capstone instructor asked.

Students who signed up for Assessment Week were randomly assigned to either the QR or the OC assessment, a process that allowed us to recruit a sufficiently large (and diverse) sample of participants for both assessments in that year’s cycle. While students were being recruited to participate in the assessment process, faculty members were invited to help create tasks that would serve as effective measures of those two critical outcomes, as defined within our own GE program.

Carrying Out the Plan

Recruiting faculty for performance task development proved surprisingly straightforward. Some faculty members have an intrinsic interest in any given GE outcome, while others have a special commitment to the capstone or to the GE program as a whole. Those individuals with a special interest in QR or OC were quick to volunteer. Many who had attended the CLA Academy were willing to help in some capacity. Furthermore, faculty members who had been involved in our earlier efforts at GE outcomes assessment were curious about this new approach. Those faculty members had generally been pleasantly surprised by how interesting an outcome assessment process could be. Each scoring session had generated extensive discussion about the meaning and nature of our learning outcomes, the rubrics used to score student work products, the learning actually demonstrated by participating students, and the adequacy (or inadequacy) of our GE curriculum in helping students develop needed skills. Faculty members who had participated in those kinds of experiences proved quite willing to be involved in the new process.

Teams of faculty members were organized around each of the two target outcomes, and, following initial brainstorming discussions, those teams appointed small working groups of faculty members to generate the tasks themselves. The most pressing challenge for each task development team was twofold: to ensure that the work products generated would be genuinely aligned with the rubric, and to create a task engaging, realistic, and motivating enough that students would willingly invest 90 minutes of their “best effort” in it, with no grade and no direct personal benefit at stake. Both teams ultimately developed their final tasks around challenges, actual or simulated, facing students as graduating seniors.

For QR, the final task asked students to choose among job opportunities, using a set of quantitative documents provided to inform the decision. They considered cost and quality information about transportation, rental housing, home-buying, medical care, climate, and other potential factors of interest. The task was structured around the idea that each student’s parents had examined the information and offered an opinion on which employment opportunity the student should accept. The student needed to determine whether that parental recommendation aligned with his or her own analysis and, on that basis, articulate an argument either accepting or rejecting the parental advice.

For OC, students were presented with a scenario involving tuition rates and asked to pitch their candidacy for a tuition task force. Because tuition rates were an ongoing subject of debate in the state (and a perennial subject of interest for our students, most of whom graduate with debt), the topic was likely to elicit engagement. The scenario suggested that our state board of higher education intended to develop policies around this issue and that our institution’s president had been asked to appoint a student to be part of the task force addressing tuition rates and policies. Interested senior students were invited to apply for that task force slot by submitting brief videos (up to five minutes long) of themselves making the case for their appointment to the board, and students were told to assume that they were eager to be that appointee. Each student was provided with financial and tuition data from multiple perspectives, along with information about the selection process and criteria, and each was asked to record a presentation that would serve as an application.

Staff and faculty members from a number of different areas of campus helped facilitate the actual Assessment Week. Instructional Technology staff made sure we had computer classrooms available or provided computers to classrooms where Assessment Week sessions would occur. They configured our learning management system to serve as the venue for hosting the performance tasks and for collecting and storing the work products for a subsequent scoring session. Institutional Research office staff coordinated student sign-ups and managed room assignments and communications with students. The assessment director recruited and prepared faculty to serve as proctors, drawing on faculty from both the GE and assessment committees, as well as capstone faculty members and those who had been involved with the task development teams. The GE director recruited faculty members for scoring and took responsibility for ensuring that findings would be compiled and reported back to campus.

Lessons Learned

We have completed a first cycle of Assessment Week activities, using locally developed performance tasks, and we are gearing up for a second. Looking back, it is possible to see decisions and processes that proved important to our initial success, such as enlisting the help of faculty members who were already invested in our GE program—whether through past outcomes assessment activities, the teaching of GE capstones, or their interest in our QR or OC goals. This breadth of faculty involvement also ensured broad representation of graduating students from a randomly generated, diverse cross section of campus academic disciplines. The choice of assessment strategy was also critical. Developing performance tasks focused faculty on the elements of our GE learning outcomes that genuinely cut across majors and should be achieved by all students, a key criterion for effective assessment of institution-wide outcomes (Banta and Palomba 2015). The tasks themselves turned out to be very engaging for students, and, as noted by both proctors and scorers, most students took them very seriously despite the lack of a grade incentive. As a result, faculty members who scored the work products felt confident that they were seeing an accurate picture of our students’ learning as they neared graduation.

In sum, embarking on this new process to assess GE learning outcomes on our campus has enabled us to address concerns about consistency in both student work products and faculty scoring, which has ultimately resulted in a more meaningful, and thus more actionable, measure of students’ composite performance of these critical skills.

REFERENCES

Banta, T. W., and C. A. Palomba. 2015. Assessment essentials: Planning, implementing, and improving assessment in higher education, 2nd ed. San Francisco: Jossey-Bass.

Council for Aid to Education. 2014. CLA+ practice performance task. New York: Author. http://cae.org/images/uploads/pdf/CLA_Plus_Practice_PT.pdf.

About the Authors

Joan Hawthorne is Director of Assessment and Accreditation, Melissa Gjellstad is Associate Professor of Languages–Norwegian, and Ryan Zerr is Professor at the University of North Dakota in Grand Forks.

Copyright © 2017 - Higher Learning Commission

NOTE: The papers included in this collection offer the viewpoints of their authors. HLC recommends them for study and for the advice they contain, but they do not represent official HLC directions, rules or policies.

