Teacher-supported peer evaluation through OpenAnswer: A study of some factors

04 Publication in conference proceedings
De Marsico M., Sterbini A., Temperini M.
ISSN: 1865-0929

The OpenAnswer system computes grades for the answers that a class of students gives to open-ended questions, based on the students’ peer evaluation and on the teacher’s grading of a subset of the answers. Here we analyze the system’s performance, expressed as its capability to infer correct grades from a limited amount of grading work by the teacher. Since performance may well depend on how several aspects of the system are defined and configured, we analyze the alternative choices in order to identify those that yield better system behavior. The factors we investigate relate to the Bayesian framework underpinning OpenAnswer: the possible definitions of the probability distributions of key variables, of the conditional probability tables, and of the methods used to map the statistical variables onto usable grades. Moreover, we analyze the relationship between the two main variables expressing the knowledge possessed by a student and her/his peer-assessing skill. By exploring alternative configurations of the system’s parameters we conclude that Knowledge is in general more difficult than Assessment. The way this (unsurprising) conclusion is reached also provides quantitative evidence for Bloom’s ranking.
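To make the Bayesian machinery mentioned in the abstract concrete, the sketch below shows a toy update of a student's Knowledge variable (K) given a single peer grade, marginalizing over the peer's assessing skill (J) and then mapping the resulting distribution onto a usable grade. It is only an illustration under assumed choices (grade scale, uniform priors, a distance-based conditional probability table, an expected-value mapping); it is not the model actually used in OpenAnswer.

import math

# Minimal illustrative sketch, NOT the OpenAnswer model itself: a toy Bayesian
# update of a student's Knowledge (K) given one peer grade, marginalizing over
# the peer's assessing skill (J). Grade scale, priors and the distance-based
# conditional probability table (CPT) are assumptions made for illustration.

GRADES = [1, 2, 3, 4, 5, 6]                       # assumed discrete grade scale
UNIFORM = {g: 1.0 / len(GRADES) for g in GRADES}  # assumed prior for K and J


def peer_grade_cpt(observed, true_k, peer_j):
    """P(peer assigns `observed` | author's knowledge `true_k`, peer skill `peer_j`).
    Assumption: a skilled peer grades close to the true level; a weak peer
    spreads probability mass more widely (simple distance-based noise)."""
    spread = 0.3 + 0.6 * (1.0 - peer_j / max(GRADES))  # less spread for skilled peers
    weights = {g: math.exp(-abs(g - true_k) / spread) for g in GRADES}
    return weights[observed] / sum(weights.values())


def posterior_knowledge(observed_grade, prior_k=UNIFORM, prior_j=UNIFORM):
    """P(K | one observed peer grade), marginalizing the unknown peer skill J."""
    unnorm = {
        k: prior_k[k] * sum(peer_grade_cpt(observed_grade, k, j) * prior_j[j]
                            for j in GRADES)
        for k in GRADES
    }
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}


def to_grade(distribution):
    """Map a distribution over K onto a single usable grade (expected value here,
    one possible choice among the mapping methods the paper compares)."""
    return round(sum(k * p for k, p in distribution.items()))


if __name__ == "__main__":
    post = posterior_knowledge(observed_grade=5)
    print(post, "->", to_grade(post))

In the full system, the teacher's grades on a subset of the answers act as further evidence, constraining both the Knowledge and the Assessment-skill variables across the network of peer assessments.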
