Improving peer assessment modeling of teacher's grades: the case of OpenAnswer
Open-ended questions are rarely used as e-learning assessment tools because of the high grading workload they impose on the teacher/tutor, a burden that can be mitigated through peer assessment. In OpenAnswer we modeled peer assessment as a Bayesian network connecting the sub-network representing each participating student to those representing the answers of the peers she graded. The model has shown a good ability to predict, from the peer grades alone and without further input from the teacher, the exact grade the teacher would assign (the ground truth), and a very good ability to predict it within one mark of the correct one. In this paper we explore changes to the OpenAnswer model aimed at improving its predictions. Experimental results, obtained by simulating the teacher's grading on real datasets, show that the modified model predicts better.
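The full OpenAnswer network, its node set and its conditional probability tables are defined in the paper itself; the following is only a minimal sketch of the connection idea described above (each student's sub-network linked to the answers she graded), under assumptions of our own: the pgmpy library, the binary nodes K (knowledge), C (answer correctness), G (peer grade) and all probability values are illustrative placeholders, not the actual OpenAnswer model.

```python
# Sketch: two students, each submitting one answer and peer-grading the other's.
# All node names and probabilities below are illustrative assumptions.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# K_i: student i's knowledge, C_i: correctness of student i's answer,
# G_ij: grade that student i assigns to student j's answer.
model = BayesianNetwork([
    ("K1", "C1"), ("K2", "C2"),      # knowledge drives answer correctness
    ("K1", "G12"), ("C2", "G12"),    # a grade depends on the grader and on the graded answer
    ("K2", "G21"), ("C1", "G21"),
])

def knowledge_prior(k):
    return TabularCPD(k, 2, [[0.5], [0.5]])

def correctness_cpd(c, k):
    # P(answer correctness | student's knowledge)
    return TabularCPD(c, 2, [[0.8, 0.3], [0.2, 0.7]],
                      evidence=[k], evidence_card=[2])

def grade_cpd(g, k, c):
    # P(peer grade | grader's knowledge, answer correctness):
    # knowledgeable graders are assumed to grade more reliably.
    return TabularCPD(g, 2, [[0.80, 0.40, 0.95, 0.10],
                             [0.20, 0.60, 0.05, 0.90]],
                      evidence=[k, c], evidence_card=[2, 2])

model.add_cpds(knowledge_prior("K1"), knowledge_prior("K2"),
               correctness_cpd("C1", "K1"), correctness_cpd("C2", "K2"),
               grade_cpd("G12", "K1", "C2"), grade_cpd("G21", "K2", "C1"))
assert model.check_model()

# Infer the hidden correctness of answer 1 from the observed peer grade alone,
# i.e. without asking the teacher to grade it.
posterior = VariableElimination(model).query(variables=["C1"], evidence={"G21": 1})
print(posterior)
```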