Frontiers in Education (Jan 2019)

Automated Feedback Can Improve Hypothesis Quality

  • Karel A. Kroeze,
  • Stéphanie M. van den Berg,
  • Ard W. Lazonder,
  • Bernard P. Veldkamp,
  • Ton de Jong

DOI
https://doi.org/10.3389/feduc.2018.00116
Journal volume & issue
Vol. 3

Abstract


Stating a hypothesis is one of the central processes in inquiry learning, and often forms the starting point of the inquiry process. We designed, implemented, and evaluated an automated parsing and feedback system that informed students about the quality of hypotheses they had created in an online tool, the hypothesis scratchpad. In two pilot studies in different domains (“supply and demand” from economics and “electrical circuits” from physics) we determined the parser's accuracy by comparing its judgments with those of human experts. The parser reached satisfactory to high accuracy. In the main study (in the “electrical circuits” domain), students were assigned to one of two conditions: no feedback (control) and automated feedback. We found that the subset of students in the experimental condition who asked for automated feedback on their hypotheses were much more likely to create a syntactically correct hypothesis than students in either condition who did not ask for feedback.
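The abstract does not state which agreement statistic was used to compare the parser's judgments with those of the human experts. As a rough illustration only, the Python sketch below computes observed agreement and Cohen's kappa between two hypothetical sets of categorical quality labels; the label names and data are invented for the example and are not taken from the study.

```python
from collections import Counter

def agreement_stats(parser_labels, expert_labels):
    """Compare automated parser judgments with expert judgments.

    Both inputs are equal-length lists of categorical quality labels
    (hypothetical categories, e.g. "correct", "incomplete", "incorrect").
    Returns observed agreement and Cohen's kappa, which corrects the
    raw agreement for agreement expected by chance.
    """
    assert len(parser_labels) == len(expert_labels)
    n = len(parser_labels)

    # Observed agreement: proportion of hypotheses rated identically.
    observed = sum(p == e for p, e in zip(parser_labels, expert_labels)) / n

    # Expected (chance) agreement from the marginal label frequencies.
    parser_freq = Counter(parser_labels)
    expert_freq = Counter(expert_labels)
    expected = sum(
        (parser_freq[label] / n) * (expert_freq[label] / n)
        for label in set(parser_labels) | set(expert_labels)
    )

    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa

# Hypothetical example: labels for five student hypotheses.
parser = ["correct", "incorrect", "correct", "incomplete", "correct"]
expert = ["correct", "incorrect", "incomplete", "incomplete", "correct"]
print(agreement_stats(parser, expert))  # (0.8, kappa ≈ 0.69)
```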

Keywords