2015 Collection of Papers

Better Together: The Multi-State Collaborative to Advance Learning Outcomes Assessment

Julie Carnahan and Terrel Rhodes

The Multi-State Collaborative to Advance Learning Outcomes Assessment (MSC) is an initiative designed to provide meaningful evidence about how well students are achieving important learning outcomes. Sponsored by the State Higher Education Executive Officers Association (SHEEO) and the Association of American Colleges and Universities (AAC&U), the initiative foregrounds a distinctly different form of assessment from the traditional standardized test. Instead of producing reports about average scores on tests, the project pilots the application of common rubrics by teams of faculty members to students’ authentic college work—including projects, papers, and research. The MSC is designed to produce valid data summarizing faculty judgments of students’ own work and also seeks to aggregate results in a way that allows for benchmarking across institutions and states. The initiative’s primary goal is to provide data that will allow faculty members and institution leaders to assess—and improve—the levels of student achievement on a set of cross-cutting outcomes that are important for all disciplines.

In its initial phase of work, the project is evaluating student achievement of three of the most important outcomes of a college education—written communication, quantitative reasoning, and critical thinking. In its first year, the project is examining student work from 68 colleges, community colleges, and universities in nine states. The project builds on efforts in Massachusetts (as part of its Vision Project) and on AAC&U’s LEAP (Liberal Education and America’s Promise) initiative, through which AAC&U developed a common set of rubrics to assess the LEAP Essential Learning Outcomes.

MSC Guiding Principles

  • Any system of assessment should help build and support a culture of student learning in which assessment results are used by each campus and by larger public systems to improve student learning and programs.

  • Any statewide or campus plan for assessment should be based on authentic student work and allow for the use of multiple measures of student learning—indirect, direct, and embedded—without a single mandated statewide test.

  • A common framework is needed for any credible statewide system of assessment and accountability. The AAC&U LEAP Essential Learning Outcomes and VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics, designed to assess the Essential Learning Outcomes, together provide a useful framework given their broad adoption nationally and their endorsement both within and outside of higher education institutions and systems.

  • Assessment approaches should involve an iterative process and, as such, be viewed as works in progress.

  • Assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.

Who Is Leading and Participating in the MSC? How Will It Work?

Nine states have agreed to pilot test the VALUE approach to student learning assessment. These states’ higher education executives have designated a cohort of campuses—including two-year and four-year institutions—to participate in the first phase of the project. Sixty-eight institutions are currently participating.

The MSC is being led by a steering committee of individuals from SHEEO, AAC&U, and each of the nine participating states. Each campus has designated a campus leader, and in its first year the project is engaging hundreds of faculty members in local activities to build capacity to assess and improve student learning. Over time, the project will expand on these early efforts and seek to create ongoing opportunities for faculty members to learn from and share information with colleagues across departments, institutions, and states about assignment design, assessment techniques, and teaching and learning approaches that increase student success.

About the MSC Pilot Study

Designed to test the feasibility, validity, reliability, and sustainability of the MSC assessment model, the pilot study is process-oriented. Data generated through the application of the VALUE rubrics will be analyzed to test the efficacy of aggregating and presenting information about student achievement of key outcomes—for state-level comparison and benchmarking against similar institutions. The pilot study is designed to continue testing the validity and reliability of the VALUE rubrics, and it is also a proof-of-concept study intended to determine:

  • The ability to generate data that faculty members can use to improve student learning and to meet public calls for accountability

  • The ability to collect representative samples of student work with a sufficient degree of randomization to draw valid conclusions

  • The appropriate level of resources needed to build state, institution, and faculty support for this type of assessment

  • Ways to improve the model and sustain the work

Student work will be collected in fall 2014 and scored in spring 2015 by faculty members from participating institutions.

About VALUE

The MSC initiative builds on AAC&U’s VALUE initiative, launched in 2007 as part of LEAP. VALUE is a campus-based assessment initiative originally supported with grants from the State Farm Companies Foundation and the Fund for the Improvement of Postsecondary Education. VALUE provides tools to campuses of all types to assess authentic student work and determine whether and how well students are progressing toward graduation-level achievement in learning outcomes considered essential by employers and educators.

VALUE was developed and tested on hundreds of college, community college, and university campuses using 16 rubrics aligned with the LEAP Essential Learning Outcomes. Those rubrics address such outcomes as critical thinking, written communication, oral communication, ethical reasoning, global learning, and quantitative reasoning. Each of the VALUE rubrics has undergone multiple rounds of testing on campuses using actual student work and has been revised based on feedback from faculty members.

Timeline

The MSC initiative was formally launched in fall 2013. Campuses and institutions are collecting student work products during 2014 and early 2015 and will begin the process of scoring student work with teams of faculty from all types of colleges and universities in February 2015. Scoring will be completed by May 1, 2015. Data analysis will be completed by June 2015, and the final report on the pilot study will be completed by September 2015.

MSC Resources

SHEEO hosts a project website with resources for project participants, including sample news releases and other communications documents; information about sampling procedures, assignment design, and uploading student work to the Taskstream database; frequently asked questions; recorded webinars; a list of participating institutions; and a roster of the steering committee members.

What Will You Learn from This Session?

The presenters and representatives from the Multi-State Collaborative states will discuss what has been learned about the initiative since it was launched in fall 2013, including the challenges of coordinating a project of this size across nine states and 68 institutions, one that depends entirely on the work and buy-in of faculty, staff, and students at those institutions. In addition to these challenges, the presenters will discuss the long-term vision of the MSC and plans to expand to more states and institutions.


About the Authors

Julie Carnahan is Senior Associate at the State Higher Education Executive Officers Association, and Terrel Rhodes is Vice President at the Association of American Colleges and Universities.