Online Peer Feedback

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., . . . Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412-433.

Ertmer et al. constructed a comprehensive study to examine students’ perceptions of the value of giving and receiving peer feedback on discussion postings in an online course. The authors deemed this study important because, in the mid-2000s, limited research had been conducted to examine either the impact of feedback in online learning or the impact of using peer feedback to shape the quality of discourse in an online course. Ertmer et al. posited the following research questions:

  • What is the impact of peer feedback on the quality of students’ postings in an online environment?
  • Can the quality of discourse/learning be maintained and/or increased through the use of peer feedback?
  • What are students’ perceptions of the value of receiving peer feedback?
  • How do these perceptions compare to the perceived value of receiving instructor feedback?
  • What are students’ perceptions of the value of giving peer feedback?

To support their research questions, the authors performed a detailed literature review organized around the key themes that shaped their study: the role of feedback in instruction, the role of feedback in online environments, the advantages of using peer feedback, and the challenges of using peer feedback. Ertmer et al. used a case study framework to conduct their research in a semester-long graduate course. The authors used both descriptive and evaluative approaches to examine fifteen participants’ perceptions of the value of the peer feedback process and to evaluate the impact of the process on the quality of students’ postings (p. 416). For this study, a numerical score was assigned to each posting. Bloom’s taxonomy was selected as the means for determining posting quality because the students were already familiar with it. In addition, a rubric was constructed to serve as a concrete tool for both student and instructor scoring of weekly postings. Once completed, all peer feedback was channeled through the instructors prior to being distributed (p. 418). Operating in this manner gave the instructors the opportunity to review the student feedback, handle any problematic feedback, and ensure anonymity. Quantitative and qualitative data were collected via student interviews, rubric ratings of weekly discussion posts, and responses to entry and exit surveys. The study findings indicated that although participants’ perceptions of the importance of feedback in an online course increased significantly from the beginning of the course to the end, the participants continued to believe that instructor feedback was more important than peer feedback (p. 425).

This study has significant merit for the current and future landscape of peer feedback as a value-added instructional strategy for online courses. To better understand that value, however, it is important to step back and review what makes peer review, as a process, so important instructionally. Peer review is closely related to self-assessment and encourages students to take an active, reflective role in learning, which promotes advanced critical thinking and higher-order cognitive skills (Liu & Carless, 2006; Topping, 1998). Furthermore, as peer reviewers, students develop problem-solving skills because they must analyze, clarify, and correct each other’s work by identifying areas needing improvement and providing constructive recommendations (Dochy et al., 1999; Somerville, 1993). From the opposite lens, as peer reviewees, students must digest a diversity of viewpoints, which helps them clarify their grasp of the content and enhances their ability to select ‘good evidence’ (Biggs & Tang, 2007, p. 187). It is through these meaningful interactions with peers that students can develop an impressive array of skills that affect not only their classroom learning but also their future workplace environment. Thus, the “why” of peer feedback rests on solid research footing. The “how” becomes a critical question that future researchers must address soon in order to change the mindset of online learners and their uncertain perceptions of the effectiveness of peer feedback.

From an online educator’s perspective, this study serves as an important reminder of the need to plan and prepare fully before implementing this type of feedback in a digital environment. Not only is the instructor workload affected, but other critical components of the online peer feedback process also need to be addressed up front. These components include developing a peer review protocol and rubric, providing beginning-of-course training for students on the process and tools, selecting a software or web-based tool that can capture all postings and feedback efficiently and anonymously, and determining the role grading will play. This study, along with other current online peer review studies, suggests that the educational impacts can be quite positive for students: gaining diverse perspectives on the content being studied, receiving timely and frequent feedback, developing a deeper level of content understanding, and growing 21st-century skills that are valued in the workplace. In the end, several studies have shown that peers are capable of providing reliable feedback that is of equal value to that provided by the instructor (Cho et al., 2006; Gielen et al., 2010). The online peer review process is complex and merits further study to more fully understand the interplay among students, instructors, the learning, the class work, the grades, and the technology tools needed to move this value-added pedagogical practice forward.

Works Cited

  • Biggs, J., & Tang, C. (2007). Teaching for Quality Learning at University. Berkshire: Open University Press.
  • Cho, K., Schunn, C. D., & Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment in writing from instructor and student perspectives. Journal of Educational Psychology, 98(4), 891-901.
  • Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331-350.
  • Gielen, S., Tops, L., Dochy, F., Onghena, P., & Smeets, S. (2010). A comparative study of peer and teacher feedback and of various peer feedback forms in a secondary school writing curriculum. British Educational Research Journal, 36(1), 143-162.
  • Liu, N. F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher Education, 11(3), 279-290.
  • Somerville, H. (1993). Issues in assessment, enterprise and higher education: The case of self-, peer and collaborative assessment. Assessment & Evaluation in Higher Education, 18(3), 221-233.
  • Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249-276.