The growth of online education mandates that universities develop effective and efficient strategies to monitor and support best practices in the online classroom. Although traditional peer review and mentoring are effective for supporting quality teaching, the time-intensive nature of these approaches limits their value for continuous, consistent monitoring and support. As a supplement to these strategies, the integration of teaching analytics within a holistic model of faculty support provides an efficient means of fostering high-quality online instruction.
Unique to the online classroom, teaching in the virtual environment leaves a digital footprint of instructional activities and interactions. Learning management systems (LMS) have the potential to record a host of instructional behaviors that provide insight into faculty effectiveness, such as:
- Frequency of logins to the online course environment
- Time spent logged in to the online classroom
- Number and timing of instructor posts to the asynchronous discussion board
- Timeliness of instructor feedback on student assignments
The recording and monitoring of these digital instructional footprints is limited only by the analytic capabilities of the LMS. As these systems become more complex, so do the metrics that they can provide. The key to utilizing teaching analytics in a meaningful fashion lies in tying LMS metrics to institutional standards for online teaching.
Best practices in online education highlight a number of key competencies that underlie a quality teaching-learning experience. These competencies, in turn, can be operationalized into observable, measurable instructional behaviors that serve as the baseline for teaching expectations at the institution. For example, online teaching best practices highlight “active engagement” as an essential instructional competency. Reflecting on common LMS teaching analytic data, there are a number of behavioral indicators that may provide insight as to whether an instructor is actively engaged in the online classroom, such as (1) the time an instructor spends logged into the online classroom, (2) the timeliness of student feedback, or (3) the number of instructor posts to the asynchronous discussion board. From the available behavioral indicators, institutions can establish clear, precise expectations for faculty members; in this case, faculty expectations may include such standards as (1) faculty members must log in to the online classroom a minimum of four days per week; (2) faculty members must not be absent from the online classroom for more than 48 hours; (3) faculty members must post meaningful interactions in the asynchronous discussion forum a minimum of four days per week; or (4) faculty members must provide feedback to students on written assignments within seven days of assignment submission.
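The expectations above translate directly into checks that can be run against LMS activity logs. The following sketch is illustrative only: the login timestamps are hypothetical, and the two rules implemented (a minimum of four distinct login days and no absence longer than 48 hours) are the standards named above, not the output of any particular LMS.

```python
from datetime import datetime, timedelta

# Hypothetical login timestamps, as might be exported from an LMS activity log.
logins = [
    datetime(2024, 3, 4, 9, 0),
    datetime(2024, 3, 5, 14, 30),
    datetime(2024, 3, 8, 10, 15),  # roughly 68 hours after the previous login
    datetime(2024, 3, 9, 8, 45),
]

def distinct_login_days(logins):
    """Number of distinct calendar days with at least one login."""
    return len({t.date() for t in logins})

def longest_absence(logins):
    """Longest gap between consecutive logins."""
    ordered = sorted(logins)
    gaps = [later - earlier for earlier, later in zip(ordered, ordered[1:])]
    return max(gaps, default=timedelta(0))

# Check the week's activity against the institutional standards.
meets_login_rule = distinct_login_days(logins) >= 4           # four days per week
meets_absence_rule = longest_absence(logins) <= timedelta(hours=48)  # 48-hour rule
```

In this sample week the instructor satisfies the four-day login expectation but, because of the gap between March 5 and March 8, fails the 48-hour absence rule — illustrating how the two metrics flag different aspects of engagement.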
While no single teaching artifact can be used in isolation to measure active engagement or instructional effectiveness, these artifacts can provide a baseline metric to identify instructors who fail to meet institutional expectations for quality teaching. For example, an instructor may log in to the online classroom and leave the application open while doing other things; in this case, login time would not accurately assess engagement. However, an instructor who does not log in to the online classroom for a number of consecutive days clearly fails to meet expectations for active engagement. Likewise, while LMS analytics cannot differentiate between high-quality instructional discussion posts and filler discussion comments (i.e., “well done” or “good point”), analytics can quickly and easily pinpoint instructors who do not post at all in a discussion thread. Thus, while teaching analytics are not a replacement for peer review or mentoring, the integration of analytic data provides a low-cost, efficient means by which institutions can provide ongoing, continuous monitoring and support to promote quality online teaching.
To be effective, teaching analytics need to be used within the scope of what they can actually convey. A single LMS artifact cannot differentiate between an effective and an ineffective instructor, but teaching analytics are an efficient indicator of instructors who may be struggling. Immediate identification of teachers who fail to meet institutional expectations for online instruction allows institutions to intervene in a timely manner and address challenges within the confines of an active class. Rather than waiting for summative teaching evaluations to pinpoint instructional difficulties, teaching analytics immediately identify instructors who are struggling in the online classroom so that limited faculty development time and energy can be invested in those who need it most.
Grand Canyon University (GCU) uses teaching analytics on both micro and macro levels. At the micro level, GCU uses data points from the LMS to encourage faculty members to post in the classroom on a regular basis to build a highly engaging, discussion-driven environment. These data points reflect the last time an instructor posted in the discussion forum; integrated technologies then email the instructor should he or she fail to meet institutional expectations regarding regular classroom engagement. The technologies also integrate with the LMS to email the instructor when assignment feedback is not provided to students in a timely manner. In both cases, the function of the monitoring and notification is not punitive but rather provides a reminder to the instructor along with additional information to assist him or her in being more effective on the target dimension.
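The feedback-timeliness reminder described above can be sketched as a simple threshold check over gradebook records. Everything here is hypothetical — the record structure, names, and message wording are illustrative, not GCU's actual system — but the seven-day window is the institutional standard stated earlier in the article.

```python
from datetime import date, timedelta

FEEDBACK_WINDOW = timedelta(days=7)  # institutional standard: feedback within seven days

# Hypothetical gradebook records; feedback is None when none has been posted yet.
submissions = [
    {"student": "A", "submitted": date(2024, 3, 1), "feedback": date(2024, 3, 5)},
    {"student": "B", "submitted": date(2024, 3, 1), "feedback": None},
]

def overdue_feedback(submissions, today):
    """Students whose feedback is still missing past the seven-day window."""
    return [
        s["student"]
        for s in submissions
        if s["feedback"] is None and today - s["submitted"] > FEEDBACK_WINDOW
    ]

def reminder_message(instructor, students):
    # In production this text would be handed to the campus mail system;
    # the tone is a non-punitive reminder, per the approach described above.
    return (f"Dear {instructor}, assignment feedback is still pending for: "
            + ", ".join(students))
```

Run on March 10, the check flags student B (nine days without feedback) while student A, whose feedback was posted within the window, is never flagged.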
Beyond baseline identification of struggling instructors, teaching analytics can be used in conjunction with other indicators of quality to provide more holistic instructor support. In addition to teaching analytics, institutions typically have data from student evaluations, peer evaluations and/or program assessment. Rather than examining these indicators in isolation, combinations of various analytics may provide insight into specific aspects of instructional effectiveness. For example, active engagement in the online classroom may be analyzed by combining analytic data on the number of instructor posts to the discussion board with student rating data highlighting students’ perception of the value of instructor posts to promoting learning. Similarly, a combination of analytic data on number of instructor posts and number of student posts in an asynchronous discussion thread may provide insight into the ability of the instructor to foster students’ active engagement and ongoing dialogue.
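One way to combine the indicators just described is a simple blended score. The weights and scales below are purely illustrative assumptions, not an institutional formula: the sketch blends post quantity (relative to an expected count) with mean student ratings of post value, and separately computes a dialogue ratio from instructor and student post counts.

```python
# Hypothetical combination of two quality indicators: LMS post counts and
# mean student ratings of instructor posts (assumed 1-5 scale). The 50/50
# weighting is an illustrative choice, not a validated formula.

def engagement_score(instructor_posts, expected_posts, mean_rating):
    """Blend quantity (posts vs. expectation) with perceived quality."""
    quantity = min(instructor_posts / expected_posts, 1.0)  # cap at 100%
    quality = (mean_rating - 1) / 4                         # rescale 1-5 to 0-1
    return round(0.5 * quantity + 0.5 * quality, 2)

def dialogue_ratio(student_posts, instructor_posts):
    """Student posts elicited per instructor post; a higher ratio suggests
    the instructor is fostering ongoing dialogue rather than monologue."""
    return student_posts / instructor_posts if instructor_posts else 0.0
```

For example, an instructor with 8 of 10 expected posts and a mean rating of 4.2 scores 0.8, while 60 student posts against 12 instructor posts yields a dialogue ratio of 5.0 — two complementary views of the same discussion forum.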
Examining the utilization of instructional artifacts at the macro level, GCU uses teaching analytics to discover trends regarding instructor behavior, course effectiveness and the success of programmatic outcomes. The information collected through student evaluation surveys is used in combination with various classroom data points to provide insight into specific instructors and their effectiveness as a function of student outcomes. Likewise, predictive teaching analytics are utilized at the course or program levels to drive curriculum change. This application of analytics provides GCU with a glimpse into trends reflecting the effectiveness of assignments, topics, objectives and rubric validity to allow for continuous monitoring and adjustment to better serve students.
As LMS metrics continue to advance and become more sophisticated, institutions must look toward the future of teaching analytics. Predictive modeling built on analytic data has the potential to support data-driven decisions that simultaneously benefit the student, instructor and institution. For example, as thematic analysis becomes more sophisticated, analytic-driven assessments can move beyond quantitative indicators of instructional behavior to provide qualitative analysis of instructor-student interactions. Or, the current reliance on standardized evaluation and development processes may be adapted to allow for individual customization. Predictive dashboards have the potential to integrate data on teaching experience, online experience, disciplinary background, past teaching evaluations, instructional behaviors, and so on, to create customized faculty development programming that more efficiently targets instructional resources to support faculty members who need it the most. Likewise, evaluation schedules and strategies can be tailored in response to the needs of each faculty member, course or discipline.
The value of teaching analytics lies in utilizing existing data to efficiently monitor and support institutional standards for online teaching. Through the integration of teaching analytics, institutions can embrace data-driven decision making that maximizes institutional resources by targeting faculty members who are most in need of additional support and providing feedback on online instruction standards in a timely manner that allows faculty members to immediately adjust teaching behaviors. As highlighted by Wagner (2014), “Simply knowing who might be at risk really isn’t enough. If you don’t know what to do to mitigate that risk or to respond to the needs of that student you are really only halfway there.” Integration of teaching analytics in isolation is not sufficient to foster effective teaching, but analytics provides a cost-effective, manageable strategy for identifying instructors in need of additional support and assistance. With a targeted direction, institutions can streamline faculty support services to maximize the learning experience for students in the online classroom.
Wagner, E. 2014. 3 questions to ask before implementing predictive analytics for online student success [audio podcast]. Higher Ed Impact (blog). Academic Impressions, August 28. http://www.academicimpressions.com/news/3-questions-ask-implementing-predictive-analytics-online-student-success.
About the Authors
B. Jean Mandernach is Executive Director, Center for Innovation in Research and Teaching, and Kelly Palese is Vice President, Office of the Chief Academic Officer, at Grand Canyon University in Phoenix, Arizona.