Scientific Reports (Nov 2022)

Contrast phase recognition in liver computer tomography using deep learning

  • Bruno Aragão Rocha,
  • Lorena Carneiro Ferreira,
  • Luis Gustavo Rocha Vianna,
  • Luma Gallacio Gomes Ferreira,
  • Ana Claudia Martins Ciconelle,
  • Alex Da Silva Noronha,
  • João Martins Cortez Filho,
  • Lucas Salume Lima Nogueira,
  • Jean Michel Rocha Sampaio Leite,
  • Maurício Ricardo Moreira da Silva Filho,
  • Claudia da Costa Leite,
  • Marcelo de Maria Felix,
  • Marco Antônio Gutierrez,
  • Cesar Higa Nomura,
  • Giovanni Guido Cerri,
  • Flair José Carrilho,
  • Suzane Kioko Ono

DOI
https://doi.org/10.1038/s41598-022-24485-y
Journal volume & issue
Vol. 12, no. 1
pp. 1 – 12

Abstract

Hepatocellular carcinoma (HCC) has become the 4th leading cause of cancer-related deaths, with high social, economic and health implications. Imaging techniques such as multiphase computed tomography (CT) have been used successfully for the diagnosis of liver tumors such as HCC in a feasible and accurate way, and their interpretation relies mainly on comparing the appearance of lesions across the different contrast phases of the exam. Recently, some researchers have been dedicated to developing tools based on machine learning (ML) algorithms, especially deep learning techniques, to improve the diagnosis of liver lesions in imaging exams. However, the lack of standardization in the naming of CT contrast phases in the DICOM metadata is a problem for real-life deployment of machine learning tools. It is therefore important to identify the exam phase based only on the image, not on the exam metadata, which is unreliable. Motivated by this problem, we created an annotation platform and implemented a convolutional neural network (CNN) to automatically identify CT scan phases in the HCFMUSP database in the city of São Paulo, Brazil. We improved this algorithm with hyperparameter tuning and evaluated it with cross-validation. Comparing its predictions with the radiologists' annotations, it achieved accuracies of 94.6%, 98% and 100% on the testing dataset at the slice, volume and exam levels, respectively.
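The abstract reports accuracy at three granularities (slice, volume and exam), which implies that per-slice CNN predictions are aggregated up to coarser levels. The paper's exact aggregation rule is not stated here; the sketch below illustrates one common approach, a majority vote over slice-level labels, with illustrative phase names that are assumptions, not the authors' label set.

```python
from collections import Counter

# Illustrative contrast-phase labels (assumed, not taken from the paper).
PHASES = ["non-contrast", "arterial", "portal-venous", "delayed"]

def aggregate_votes(slice_predictions):
    """Majority vote over per-slice phase labels for one volume.

    A hypothetical aggregation step: the per-slice CNN outputs are
    reduced to a single volume-level label by taking the most frequent
    prediction. Exam-level labels could be derived the same way from
    volume-level labels.
    """
    if not slice_predictions:
        raise ValueError("no slice predictions to aggregate")
    counts = Counter(slice_predictions)
    # most_common(1) returns [(label, count)] for the most frequent label
    return counts.most_common(1)[0][0]

# Example: per-slice CNN outputs for one volume (made-up values)
slices = ["arterial", "arterial", "portal-venous", "arterial"]
print(aggregate_votes(slices))  # -> arterial
```

Because a volume contains many slices, occasional slice-level errors are outvoted, which is consistent with the higher accuracy the authors report at the volume and exam levels than at the slice level.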