Towards pixel-to-pixel deep nucleus detection in microscopy images

BMC Bioinformatics. 2019;20(1):1-16. DOI: 10.1186/s12859-019-3037-5

 


Journal Title: BMC Bioinformatics

ISSN: 1471-2105 (Online)

Publisher: BMC

LCC Subject Category: Medicine: Medicine (General): Computer applications to medicine. Medical informatics | Science: Biology (General)

Country of publisher: United Kingdom

Language of fulltext: English

Full-text formats available: PDF, HTML

 

AUTHORS

Fuyong Xing (Department of Biostatistics and Informatics, and the Data Science to Patient Value initiative, University of Colorado Anschutz Medical Campus)
Yuanpu Xie (J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida)
Xiaoshuang Shi (J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida)
Pingjun Chen (J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida)
Zizhao Zhang (Department of Computer and Information Science and Engineering, University of Florida)
Lin Yang (J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida)

EDITORIAL INFORMATION

Blind peer review


Time From Submission to Publication: 19 weeks

 

ABSTRACT

Background: Nucleus detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed.

Results: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets that cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that, for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can appear visually similar because of the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desirable results and would require fine-tuning to be on a par with models trained on the target data. We also observe that training with a mixture of target and non-target data does not always yield higher nucleus detection accuracy; proper data manipulation during model training may be needed to achieve good performance.

Conclusions: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which might not have been reported in previous studies. The model performance analysis and observations should be helpful for nucleus detection in microscopy images.
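The core technique named in the abstract is a pixel-to-pixel fully convolutional regression network: the model maps a microscopy image to a same-sized per-pixel proximity map, and nucleus centers are read off as local maxima of that map. Below is a minimal PyTorch sketch of such a network, not the authors' exact model: the encoder-decoder layout, layer widths, the mean-squared-error loss, and the FCNRegressor name are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact model): a
# pixel-to-pixel fully convolutional regression network that maps an RGB
# microscopy image to a per-pixel proximity map of the same spatial size.
import torch
import torch.nn as nn


class FCNRegressor(nn.Module):
    """Encoder-decoder FCN producing one regression value per input pixel."""

    def __init__(self, in_channels=3, base=32):
        super().__init__()
        self.enc1 = self._block(in_channels, base)
        self.enc2 = self._block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = self._block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = self._block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = self._block(base * 2, base)
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # per-pixel regression output

    @staticmethod
    def _block(cin, cout):
        # Two 3x3 convolutions with batch norm and ReLU.
        return nn.Sequential(
            nn.Conv2d(cin, cout, kernel_size=3, padding=1),
            nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, kernel_size=3, padding=1),
            nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)                  # full resolution features
        e2 = self.enc2(self.pool(e1))      # 1/2 resolution
        b = self.bottleneck(self.pool(e2)) # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)               # proximity map, same size as input


if __name__ == "__main__":
    model = FCNRegressor()
    image = torch.randn(1, 3, 256, 256)   # one RGB patch
    target = torch.rand(1, 1, 256, 256)   # synthetic proximity map, for illustration only
    pred = model(image)
    loss = nn.MSELoss()(pred, target)     # pixel-wise regression loss
    loss.backward()
    print(pred.shape)                     # torch.Size([1, 1, 256, 256])
```

In practice, the regression target would be derived from annotated nucleus centers (for example, by placing a small Gaussian or inverse-distance kernel at each center), and detections would be obtained by non-maximum suppression or local-maxima search on the predicted map; that pre- and post-processing is omitted from the sketch above.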