Algorithms (Mar 2023)

Learning Distributed Representations and Deep Embedded Clustering of Texts

  • Shuang Wang,
  • Amin Beheshti,
  • Yufei Wang,
  • Jianchao Lu,
  • Quan Z. Sheng,
  • Stephen Elbourn,
  • Hamid Alinejad-Rokny

DOI
https://doi.org/10.3390/a16030158
Journal volume & issue
Vol. 16, no. 3
p. 158

Abstract

Instructors face significant time and effort constraints when grading students’ assessments at scale. Clustering similar assessments is an effective technique that can substantially reduce instructors’ workload in online and large-scale learning environments: by grouping similar assessments together, the mark given to one assessment in a cluster can be propagated to the other assessments in that cluster, streamlining the grading process. This paper focuses on text assessments and proposes a method for reducing instructors’ workload by clustering similar assessments. The proposed method uses distributed representations to transform texts into vectors, and contrastive learning to improve the representations so that differences among similar texts are preserved. The paper presents a general framework for clustering similar texts that includes label representation, K-means, and self-organizing map algorithms, with the objective of improving clustering performance as measured by Accuracy (ACC) and Normalized Mutual Information (NMI). The proposed framework is evaluated experimentally on two real datasets. The results show that the self-organizing map and K-means algorithms with pre-trained language models outperform the label representation algorithm across the datasets.
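The pipeline the abstract describes (embed texts as vectors, cluster them, then score the clustering with ACC and NMI) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a TF-IDF vectorizer stands in for the pre-trained language model embeddings, plain K-means stands in for the full framework, and the tiny corpus and its labels are invented for the example. ACC is computed the standard way, by finding the best one-to-one mapping between predicted cluster IDs and true labels via the Hungarian algorithm.

```python
# Sketch of the embed -> cluster -> evaluate pipeline from the abstract.
# TF-IDF is a lightweight stand-in for pre-trained language model vectors,
# and the four example "assessments" below are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one mapping of cluster IDs to true labels
    (Hungarian algorithm), then the fraction of correctly placed items."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    # cost[p, t] counts items with predicted cluster p and true label t
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    rows, cols = linear_sum_assignment(-cost)  # maximize matched counts
    return cost[rows, cols].sum() / len(y_true)

texts = [
    "the integral converges by the ratio test",
    "apply the ratio test to show convergence",
    "the loop iterates over the array indices",
    "iterate through each index of the array",
]
labels = [0, 0, 1, 1]  # ground-truth groups of similar answers

vectors = TfidfVectorizer().fit_transform(texts)  # text -> vector
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

acc = clustering_accuracy(labels, pred)
nmi = normalized_mutual_info_score(labels, pred)
```

Both metrics lie in [0, 1], and because ACC optimizes over label permutations it is invariant to how K-means happens to number its clusters; swapping TF-IDF for sentence embeddings from a pre-trained language model changes only the `vectors` line.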

Keywords