BMC Medical Informatics and Decision Making (Jan 2022)

Comparison of different feature extraction methods for applicable automated ICD coding

  • Zhao Shuai,
  • Diao Xiaolin,
  • Yuan Jing,
  • Huo Yanni,
  • Cui Meng,
  • Wang Yuxin,
  • Zhao Wei

DOI
https://doi.org/10.1186/s12911-022-01753-5
Journal volume & issue
Vol. 22, no. 1
pp. 1–15

Abstract


Background: Automated ICD coding of medical texts via machine learning has been a hot topic. Related studies from the medical field rely heavily on the conventional bag-of-words (BoW) feature extraction method and rarely use more sophisticated methods, such as word2vec (W2V) and large pretrained models like BERT. This study aimed to identify the most effective feature extraction methods for coding models by comparing BoW, W2V and BERT variants.

Methods: We experimented with a Chinese dataset from Fuwai Hospital, which contains 6947 records and 1532 unique ICD codes, and a public Spanish dataset, which contains 1000 records and 2557 unique ICD codes. We designed coding tasks with different code frequency thresholds (denoted as $$f_s$$), where a lower threshold indicates a more complex task. Using traditional classifiers, we compared BoW, W2V and BERT variants on these coding tasks.

Results: When $$f_s$$ was equal to or greater than 140 for the Fuwai dataset and 60 for the Spanish dataset, fine-tuning the whole network of the BERT variants was the best method, yielding a Micro-F1 of 93.9% for the Fuwai dataset when $$f_s=200$$ and a Micro-F1 of 85.41% for the Spanish dataset when $$f_s=180$$. When $$f_s$$ fell below 140 for the Fuwai dataset and 60 for the Spanish dataset, BoW turned out to be the best, yielding a Micro-F1 of 83% for the Fuwai dataset when $$f_s=20$$ and a Micro-F1 of 39.1% for the Spanish dataset when $$f_s=20$$. Our experiments also showed that both the BERT variants and BoW possessed good interpretability, which is important for medical applications of coding models.

Conclusions: This study shed light on building promising machine learning models for automated ICD coding by revealing the most effective feature extraction methods. Concretely, our results indicated that fine-tuning the whole network of the BERT variants was the optimal method for tasks covering only frequent codes, especially codes that represented unspecified diseases, while BoW was the best for tasks involving both frequent and infrequent codes. The frequency threshold at which the best-performing method changed differed between datasets due to factors such as language and code set.
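To make the experimental setup in the Methods section more concrete, the sketch below shows one way such a comparison could be set up: BoW features, a code frequency threshold $$f_s$$ that restricts the label set to codes occurring at least $$f_s$$ times, a traditional one-vs-rest classifier, and Micro-F1 evaluation. This is a minimal illustration, not the authors' code; the toy records, the choice of scikit-learn, and the logistic-regression classifier are assumptions, and the paper's Chinese data would additionally require word segmentation or character n-grams.

```python
# Minimal sketch (not the paper's implementation): BoW + traditional classifier
# for multi-label ICD coding, with a code-frequency threshold f_s.
# The records and ICD codes below are toy, hypothetical examples.
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy free-text notes with their assigned ICD codes.
texts = [
    "chest pain and shortness of breath, suspected angina",
    "type 2 diabetes mellitus without complications",
    "essential hypertension, stable on medication",
    "chest pain, hypertension, follow-up visit",
]
codes = [["I20.9"], ["E11.9"], ["I10"], ["I20.9", "I10"]]

# Task design: keep only codes that occur at least f_s times,
# mirroring the frequency-threshold idea in the abstract.
f_s = 2
counts = Counter(c for record in codes for c in record)
kept = {c for c, n in counts.items() if n >= f_s}
filtered_codes = [[c for c in record if c in kept] for record in codes]

# BoW features (word-level counts).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

# Binary indicator matrix for the multi-label targets.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(filtered_codes)

# "Traditional classifier": one-vs-rest logistic regression over BoW features.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

# Micro-F1 on the training data, only to show the evaluation call;
# a real comparison would use a held-out test split.
print("Micro-F1:", f1_score(Y, clf.predict(X), average="micro"))
```

Under this setup, swapping the feature extractor (BoW counts for W2V averages or BERT embeddings) while holding the classifier and the $$f_s$$-defined label set fixed is what would allow the kind of controlled comparison the abstract describes.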

Keywords