PLoS ONE (Jan 2023)

Accelerating L1-penalized expectation maximization algorithm for latent variable selection in multidimensional two-parameter logistic models.

  • Laixu Shang,
  • Ping-Feng Xu,
  • Na Shan,
  • Man-Lai Tang,
  • George To-Sum Ho

DOI
https://doi.org/10.1371/journal.pone.0279918
Journal volume & issue
Vol. 18, no. 1
p. e0279918

Abstract


One of the main concerns in multidimensional item response theory (MIRT) is detecting the relationship between observed items and latent traits, which is typically addressed by exploratory analysis and factor rotation techniques. Recently, an EM-based L1-penalized log-likelihood method (EML1) was proposed as a vital alternative to factor rotation. Based on the observed test response data, EML1 can yield a sparse and interpretable estimate of the loading matrix. However, EML1 suffers from a high computational burden. In this paper, we use the coordinate descent algorithm to optimize a new weighted log-likelihood, and consequently propose an improved EML1 (IEML1) that is more than 30 times faster than EML1. The performance of IEML1 is evaluated through simulation studies, and an application to a real data set related to the Eysenck Personality Questionnaire is used to demonstrate our methodology.
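
The abstract mentions using coordinate descent to optimize an L1-penalized weighted log-likelihood. The sketch below is not the authors' IEML1 implementation; it is a minimal, generic illustration (assuming a weighted least-squares surrogate for the smooth part of the objective, as in standard penalized-likelihood coordinate descent) of the soft-thresholding coordinate update that underlies this class of methods. All function and variable names here are illustrative.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator arising from the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def cd_l1_weighted(X, y, w, lam, n_iter=100, tol=1e-6):
    """Coordinate descent for a weighted least-squares loss with an L1 penalty:

        (1 / 2n) * sum_i w_i * (y_i - x_i^T beta)^2 + lam * ||beta||_1

    This stands in for the inner M-step optimization of a penalized
    weighted objective; it is not the paper's exact algorithm.
    """
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - X @ beta  # current residuals
    for _ in range(n_iter):
        beta_old = beta.copy()
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = resid + X[:, j] * beta[j]
            rho = np.sum(w * X[:, j] * r_j) / n
            z = np.sum(w * X[:, j] ** 2) / n
            beta_j_new = soft_threshold(rho, lam) / z
            # update residuals to reflect the new coordinate value
            resid = r_j - X[:, j] * beta_j_new
            beta[j] = beta_j_new
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta
```

With a moderate penalty lam, many coordinates of beta are driven exactly to zero, which is the mechanism by which an L1-penalized EM can produce a sparse, interpretable loading matrix.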