IEEE Access (Jan 2019)

Knowledge Enhanced Quality Estimation for Crowdsourcing

  • Shaofei Wang,
  • Depeng Dang,
  • Zixian Guo,
  • Chuangxia Chen,
  • Wenhui Yu

DOI
https://doi.org/10.1109/ACCESS.2019.2932149
Journal volume & issue
Vol. 7
pp. 106694–106704

Abstract

Estimating the quality of answers is one of the central challenges in crowdsourcing. Previous methods focus on quality estimation for objective tasks, whereas subjective tasks, a common type of crowdsourcing task, have not been well studied. In this paper, we focus on quality estimation for subjective crowdsourcing tasks. Considering the high uncertainty of answers to subjective tasks, we propose a background-knowledge-enhanced quality estimation method. Specifically, we first learn distributed knowledge representations from knowledge graphs and text corpora using a multi-task learning framework. Then, we construct a pseudo-gold answer set for each task. Next, by comparing each provided answer with the derived pseudo-gold answer set, we calculate two scores for the answer: 1) a symbolic score, which measures symbolic similarity, and 2) an embedding score, which measures embedding similarity. Finally, we obtain the final score for each answer by combining these two scores. Extensive experiments on both universal and domain-specific crowdsourcing tasks show that our method outperforms the baselines.
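The two-score scheme described in the abstract can be sketched in a few lines. This is an illustrative stand-in, not the paper's actual formulation: the paper's symbolic and embedding similarity measures and its score-combination rule are not given in this abstract, so the Jaccard overlap, cosine similarity, and linear weighting `alpha` below are all assumptions for illustration.

```python
import math

def symbolic_score(answer, pseudo_gold_answers):
    """Symbolic similarity to the closest pseudo-gold answer.
    Token-level Jaccard overlap is used here as an illustrative
    stand-in for the paper's symbolic measure."""
    tokens = set(answer.lower().split())
    return max(
        len(tokens & set(g.lower().split())) / len(tokens | set(g.lower().split()))
        for g in pseudo_gold_answers
    )

def embedding_score(answer_vec, pseudo_gold_vecs):
    """Embedding similarity to the closest pseudo-gold answer,
    measured as maximum cosine similarity over the set."""
    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        norm_u = math.sqrt(sum(x * x for x in u))
        norm_v = math.sqrt(sum(y * y for y in v))
        return dot / (norm_u * norm_v)
    return max(cosine(answer_vec, g) for g in pseudo_gold_vecs)

def final_score(answer, answer_vec, gold_answers, gold_vecs, alpha=0.5):
    """Combine the two scores; a simple linear interpolation with
    weight alpha is an assumption, not the paper's combination rule."""
    return (alpha * symbolic_score(answer, gold_answers)
            + (1 - alpha) * embedding_score(answer_vec, gold_vecs))
```

In practice the embeddings would come from the jointly learned knowledge representations (knowledge graph plus text corpus), so an answer that is lexically different from every pseudo-gold answer can still receive a high embedding score.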

Keywords