e-Learning Ecologies MOOC’s Updates
Peer Reviews for Recursive Feedback
The shift of agency in new-age learning has distributed the teacher's load among multiple other actors, including machine tests, machine-aided human assessments, and peer assessments. In this essay, I explore peer assessment as a source of recursive feedback, along with some pertinent debates around it.
The History
Peer review has its roots in academic reviews for journals. The process dates back to the early 1700s, when committees were formed to censor papers received by elite journals [1]. The philosophy was to reduce the load on a journal's editors: peers handled the first level of review and feedback to authors, essentially checking facts, reports, and the like.
With time, technology, and context, the role of peer review has also changed. In today's e-learning ecologies, with the affordances they provide, peer reviews are a critical link in assessment and thus in formative learning outcomes. Here is what Peter Norvig, one of the pioneers of the MOOC concept, had to say about the transformation of classrooms and peers.
Peer review and recursive feedback
Recursive feedback refers to the formative feedback that continues throughout the duration of a course, with the aim of providing assessment for learning. Peer reviews are now widely used in online learning ecologies both to grade peers and to give learners a platform for creating knowledge artifacts. Because peers have been in situations similar to our own, their feedback tends to be constructive, actionable, and direct, which improves the learning experience. In a study by Katia Muck [2] and team on the impact of the review process on both participant and reviewer learning, most participants reported that the feedback they received from their reviewers was helpful, and that they enjoyed providing feedback to their peers in turn.
Challenges with peer reviews
In recent years, challenges with peer reviews have also gained attention. Here is a short video of the 'types of peer reviewers' we encounter.
Hoi Suen, in a 2014 article [3], observes the current pitfalls of peer review systems in MOOCs: the different cultural backgrounds and contexts that learners come from have produced varied experiences, some encouraging and some discouraging. The three main sources of error, according to him, are basic errors of judgment due to insufficient knowledge (inaccuracy), random judgmental errors due to idiosyncratic situational factors at the time of judgment (inconsistency), and the inability to maintain a constant level of accuracy from context to context (intransferability). He proposes developing a credibility index for each participant that accounts for all these errors within the peer reviewer, supporting better formative assessment inputs as well as learning outcomes. In another article, the authors point out that "learner training on critical reading" is central to overcoming the existing challenges within peer review systems [4].
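To make the credibility-index idea concrete, here is a minimal Python sketch of how such an index might be used to aggregate peer grades. Suen's article proposes the index but does not prescribe a formula, so the weighted-average scheme, function name, and example numbers below are purely illustrative assumptions, not his actual method.

```python
# Hypothetical sketch: combine peer scores, weighting each by its
# reviewer's credibility. The weighting scheme is illustrative only;
# Suen [3] proposes a credibility index without specifying a formula.

def credibility_weighted_grade(scores, credibility):
    """Return a weighted average of peer scores.

    scores      -- numeric grades given by peer reviewers
    credibility -- weights in [0, 1], one per reviewer, intended to
                   discount inaccuracy, inconsistency, and
                   intransferability in that reviewer's judgments
    """
    if len(scores) != len(credibility):
        raise ValueError("one credibility weight per score is required")
    total_weight = sum(credibility)
    if total_weight == 0:
        raise ValueError("at least one reviewer needs nonzero credibility")
    return sum(s * w for s, w in zip(scores, credibility)) / total_weight

# Example: three reviewers grade the same submission out of 10.
# The low-credibility reviewer's outlier score of 2 is discounted.
grade = credibility_weighted_grade([8, 7, 2], [0.9, 0.8, 0.2])
```

In this toy example the low-credibility outlier pulls the grade down far less than a plain average would, which is the intuition behind weighting reviewers by a credibility index.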
Concluding remarks
There is no doubt that peer reviews have the potential to provide formative assessments that are prospective and future-oriented: true assessment 'for' learning. However, reviewer training and well-designed rubrics are critical to creating an effective peer review mechanism.
References
[1] CDIP Community Commons. (n.d.). Retrieved February 26, 2017, from http://teachingcommons.cdl.edu/cdip/facultyresearch/Historyofpeerreview.html
[2] Muck, K. (2015). The Role of Recursive Feedback: A Case Study of e-Learning in Emergency Operations. The International Journal of Adult, Community and Professional Learning, 23(1).
[3] Suen, H. K. (2014). Peer Assessment for Massive Open Online Courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(3).
[4] Rahman, A., Salih, A., & Samad, M. A. (n.d.). Enhancing Peer Review Through Critical Reading: A learner. Retrieved from http://scholink.org/ojs/index.php/selt/article/viewFile/196/189
That would probably help, although I do think it's hard to say anything about motivation levels via adaptive technologies. And because the topics are so different, how does tracking knowledge help? I think you need at least some skills to be able to recognize a good argumentative update/assignment. But I would also like some feedback on the content of an assignment, for advanced learning.
I appreciate the discussion of the challenges with peer reviews, especially in the MOOC environment. I wonder if adaptive technologies that track student knowledge, motivation levels, and preferences could be used to better control for accuracy, consistency, and transferability in online peer review?
Thanks for your comment, Jeanet.
I agree! Without specific criteria, review will not deliver what is expected of it. I'm wondering: is peer review in a MOOC the same as peer review in a class? And is the relational aspect of reviewing also important? Can we neglect it in a MOOC?