Jason Berg’s Updates

Crowdsource Peer-to-Peer Assessment in MOOCs?

Student assessment occupies much of the space in educational policy and learning philosophy discussions these days. The issue poses a particularly tough challenge for the various massive open online course (MOOC) initiatives. The major platforms are expanding into course areas where check-the-box, right-or-wrong answers don't provide meaningful feedback, and the search is now on for models that can do a good job of providing feedback and ratings on student work. Anne Margulies of Harvard University and edX said recently that assessment for non-technical courses "is probably the hottest topic on campus right now."

The issue isn't limited to humanities courses, either. Rita van Haren, Curriculum Director at Common Ground Publishing, has observed that the problem of knowing what students have learned is acute in every domain. High scores do not necessarily demonstrate that students have achieved the course objectives, and even courses with quantifiably right or wrong answers tend to focus on scores despite the validity and reliability issues of multiple-choice tests.

The question, then, is this: what structure ensures that all students reach mastery of specified learning objectives? Millie Davis at the National Council of Teachers of English says the research on effective writing interventions is clear; the challenge is the implementation. Establishing a pedagogically appropriate environment in any class is hard, and the difficulty is magnified online, where relationships are no longer clearly established. Peer-to-peer feedback, metacognition of success criteria, and targeted instruction are all well-established strategies, but we must return to the question of implementation: how do we scale them to the MOOC level?

Peer-to-peer feedback shows promise because it scales, assuming the infrastructure allows it. Several platforms, most notably Coursera, are implementing forms of peer assessment, and the best of these include some level of scaffolding that prepares students to provide educationally appropriate, constructive feedback. Research at the University of Illinois suggests that feedback is least helpful when it consists of non-specific, positive statements such as "good job." In these environments, constructing appropriate peer review criteria becomes critically important.
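To make that concrete, here is a minimal sketch, in Python, of what explicit review criteria with a descriptor for each rating level might look like. The criterion names, level descriptors, and the RUBRIC structure are invented for illustration; they are not drawn from Coursera, Scholar, or any other platform.

# A hypothetical rubric for peer review of an essay. Every criterion
# pairs each rating level with a concrete descriptor, so reviewers
# match work against the descriptor instead of guessing at a number.
RUBRIC = {
    "thesis": {
        1: "No identifiable claim.",
        2: "A claim is present but vague or unsupported.",
        3: "A clear claim, partially supported by evidence.",
        4: "A clear, focused claim supported throughout the piece.",
    },
    "evidence": {
        1: "No sources or examples cited.",
        2: "Sources cited but not connected to the argument.",
        3: "Relevant sources with uneven integration.",
        4: "Relevant sources woven into the argument.",
    },
}

def descriptor(criterion: str, level: int) -> str:
    """Return the descriptor a reviewer must match before assigning a level."""
    return RUBRIC[criterion][level]

The per-level descriptors are the scaffolding described above: a reviewer choosing between a 2 and a 3 is pushed toward a specific, criterion-referenced comment rather than a bare "good job."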

Students are capable of giving relevant feedback against well-thought-out review criteria, and with guidance and practice they become better at both providing and using it. When students are given sustained opportunities to give and respond to feedback, they engage in powerful processes that foster learning, such as metacognitive awareness and collaborative cognition. Writing for peers can also increase student motivation, which improves results. And with clearly specified review criteria for each rating level, the reliability of student scores increases as well; hence the possibility of crowdsourcing assessment.
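As a toy illustration of that last point, the sketch below aggregates hypothetical ratings from five peers by taking the median for each rubric criterion. The ratings, criteria, and aggregation rule are all assumptions made for this example; the source does not describe how any platform actually combines peer scores.

from statistics import median

# Hypothetical ratings (on a 1-4 scale) given to one submission by
# five peer reviewers, keyed by rubric criterion.
peer_ratings = {
    "thesis":   [3, 4, 3, 3, 2],
    "evidence": [2, 3, 3, 2, 3],
}

# The median per criterion damps the effect of a single overly harsh
# or overly generous reviewer, one simple reason crowdsourced scores
# can grow more reliable as the number of reviewers grows.
crowd_score = {criterion: median(ratings)
               for criterion, ratings in peer_ratings.items()}

print(crowd_score)  # {'thesis': 3, 'evidence': 3}

The median is used rather than the mean because it is robust to a single outlier; a production system might instead calibrate reviewers or weight their scores, techniques well beyond this sketch.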

Bill Cope and Mary Kalantzis of the College of Education at the University of Illinois have written two relevant white papers exploring the theoretical basis for online learning environments such as Scholar:

e-Affordances: Or, How Can Learning Be Different in a ‘Social Knowledge’ Space? Link: http://goo.gl/K4oKYI

Towards a New Learning: The ‘Scholar’ Social Knowledge Workspace, in Theory and Practice. Link: http://goo.gl/yjpOua