Finding Value in the Mid-semester Review of Teaching: Insights from Faculty

Anthony Blash, Belmont University

Beverly Schneller, Kentucky State University





Many faculty members find value in receiving feedback on teaching effectiveness from their students during the semester. Since 1992, Belmont University’s Teaching Center has offered faculty members the opportunity to have a Small Group Instructional Diagnosis (SGID) completed as a mid-semester review of teaching. Using survey data and responses from participants as both reviewers and review subjects, we advocate for a Mid-semester Student Review of Teaching (MSRT) that includes the SGID as part of a more inclusive process.


Changing the Culture of Evaluation


End-of-course student evaluations remain one of the most frequently used systems for gaining feedback on teaching effectiveness. Unfortunately, a shared perception among many faculty and students is that the end-of-course evaluation is remote and nearly meaningless from the moment the instrument is opened, either in class or online (on whether online forms improve response rates, see Estelami, 2015, and O’Neal-Hixson et al., 2017). In fact, Stark (2013) claimed that comparing average scores among faculty across schools and programs using “omnibus questions . . . should be avoided entirely.” He continued, “Moreover, we argue that student evaluations of teaching should be only a piece of a much richer assessment of teaching, rather than a focal point.” Weimer (2016) said, “We also need to rewrite the end of course ratings story,” which will require faculty to lead the way in actively re-evaluating how their main work as professors is assessed by their peers and by the stakeholders who depend upon them. As the author of a Queens College (CUNY) blog posting observed, midterm course evaluations are “by you, for you” (2012): in that college’s model, they allow faculty members to create their own survey questions, to use the results collaboratively to improve students’ perceptions, if not their actual learning experiences, before the term is out, and to “get feedback before it is too late for you to do anything about it.”


360 Evaluations as a Proposed New Norm


As a starting point for increasing the value of faculty performance reviews, we suggest considering elements of the 360-degree evaluation system commonly used in business and industry. In human resources, the 360-degree evaluation is a comprehensive performance review system in which evaluation of job performance includes detailed input from employees across the spectrum of the business, or from a leader’s direct (and, at times, indirect) reports. For faculty members, a 360-degree performance review may include structured evaluative components from colleagues, direct reports, students, and stakeholders. The point is to collect data from multiple voices that reflect different sites of engagement with the subject.


Because the results of peer and student evaluations of faculty often determine merit, promotion, tenure, reemployment, and perhaps even the ability to apply for grants or other enrichment opportunities, we asked ourselves whether faculty would prefer to have the SGID process and its resulting evaluation incorporated formally into the review process on our campus. Based on a survey of campus faculty who have participated in the MSRT, we believe that the investment of time in this process is worthwhile and would become a valued way to address faculty concerns and augment data from standard student evaluations. Given resistance among both faculty and students to the current student evaluation of teaching (SET) process, the MSRT would be a way to bring more parity and more useful data into conversations about the continuous improvement of teaching.


SoTL Findings


We conducted a Scholarship of Teaching and Learning (SoTL) project to determine whether faculty feel the MSRT should be incorporated into a 360-informed review of teaching. Such a review would formalize the relationship among peer review, the SET, and the mid-semester review of teaching.

The scope of our IRB-approved research conducted in fall 2017 centered on the participating faculty’s perceptions of, and satisfaction with, the current MSRT process as coordinated by our campus’ Teaching Center.


Faculty recruited by the Teaching Center served as participants in the study; for inclusion in our survey, each had met at least one of the following two criteria within the past five years: (1) received an MSRT or (2) performed the MSRT for another faculty member. To preserve anonymity, Teaching Center staff sent a group email with a link to a Qualtrics® survey to all faculty who had participated in an MSRT during the years of the study, 2011-2016. Faculty members could respond separately as a reviewer, as a reviewee, or as both. Three reminders to participate were emailed, and the survey remained open for seven weeks.


The Qualtrics survey consisted primarily of qualitative items. We developed the questions based on team discussions and feedback from other faculty and students. Through this survey, we wanted to know:


1. Whether faculty felt that the MSRT had any value to them as part of the continuous improvement of teaching (CIT) process.

2. Whether rank and tenure had any effect on how faculty perceived and valued the MSRT.

3. Whether longevity within the profession had any effect on how faculty perceived and valued the MSRT.

4. When faculty decided to engage in the review process, whether as a subject or as a peer reviewer.

5. How satisfied faculty were, overall, with the MSRT.


Data Analysis and Results


A total of 27 faculty responded to the survey: 17 responses from reviewers and 10 from faculty who had been reviewed. The survey results were compiled using the Qualtrics® system and software designed for qualitative data analysis (NVivo®). Potential themes (nodes) were identified with the NVivo® application, supported by a word-cloud word count (WordArt.com) for each of the survey questions. Quantitative data were analyzed using the Statistical Package for the Social Sciences (SPSS®) software.


To test whether faculty felt that the MSRT had any value to them as part of the CIT process, we examined Question 3 (for reviewers only: to what extent were your expectations met?), Question 6 (on what do you believe the MSRT had an impact?), and Question 16 (satisfaction with the MSRT process, rated on a scale of 1 to 5, with 5 being most satisfied). The data from qualitative Questions 3 and 6 were analyzed using the Qualtrics® system. Quotations from reviewers who answered Question 3, most of them positive, can be found in the table below:




Conclusion and Recommendations


Our research into the reception and perceived value of the MSRT demonstrated that faculty found it useful and transformative in the traditional instructional setting. The MSRT provided insights in graduate and undergraduate courses and was not limited in functionality to a specific field or type of course.


However, peer review of teaching has been affected by COVID-19, as many faculty found fully online teaching challenging, with mixed results in student learning outcomes (Lau et al., 2020). This creates an opportunity to migrate the MSRT to online platforms and to expand the usefulness of peer review beyond individual instructor evaluations into program review, comparative pedagogies, and assessment of instructional effectiveness across content delivery methods.



Discussion Questions:


1. How are student evaluations of teaching conducted at your college or university? How do students and faculty feel about that process?

2. Does your institution currently offer a mid-semester review of teaching? If so, how is it perceived by faculty at your institution? If not, how do you think it would be received by faculty at your institution?


3. How might evaluative feedback from students differ from (and resemble) that of faculty?



References


Estelami, H. (2015). The effects of survey timing on student evaluation of teaching measures obtained using online surveys. Journal of Marketing Education, 37(1), 54-64.


Lau, P. N., Chua, Y. T., Teow, Y., & Xue, X. (2020). Implementing alternative assessment strategies in chemistry amidst COVID-19: Tensions and reflections. Education Sciences, 10(11), 323.


O’Neal-Hixson, K., Long, J., & Brock, M. (2017). The eSGID process: How to improve teaching and learning in online graduate courses. Journal of Effective Teaching, 17(2), 45-57.


A mid-semester COURSE EVALUATION: By you, for you. (2012, October 11). Retrieved March 8, 2021, from https://teachlearn.commons.gc.cuny.edu/?p=141

Stark, P. (2013, October 14). Do student evaluations measure teaching effectiveness? Retrieved March 8, 2021, from www.blogs.berkeley.edu/2013/10/14/do-student-evaluations-measure-teaching-effectiveness/