
Patient-specific and global convolutional neural networks for robust automatic liver tumor delineation in follow-up CT studies

Overview of attention for article published in Medical & Biological Engineering & Computing, March 2018

Mentioned by
2 X users

Citations
38 (Dimensions)

Readers
76 (Mendeley)
DOI 10.1007/s11517-018-1803-6
Authors

Refael Vivanti, Leo Joskowicz, Naama Lev-Cohain, Ariel Ephrat, Jacob Sosna

Abstract

Radiological longitudinal follow-up of tumors in CT scans is essential for disease assessment and liver tumor therapy. Currently, most tumor size measurements follow the RECIST guidelines, which can be off by as much as 50%. True volumetric measurements are more accurate but require manual delineation, which is time-consuming and user-dependent. We present a convolutional neural network (CNN)-based method for robust automatic liver tumor delineation in longitudinal CT studies that uses both global and patient-specific CNNs trained on a small database of delineated images. The inputs are the baseline scan and its tumor delineation, a follow-up scan, and a global liver tumor CNN voxel classifier built from radiologist-validated liver tumor delineations. The output is the tumor delineation in the follow-up CT scan. The baseline tumor delineation serves as a high-quality prior for tumor characterization in the follow-up scans. It is used to evaluate the global CNN's performance on the new case and to reliably predict its failures on the follow-up scan. High-scoring cases are segmented with the global CNN; low-scoring cases, which are predicted to be global CNN failures, are segmented with a patient-specific CNN built from the baseline scan. Our experimental results on 222 tumors from 31 patients yield an average overlap error of 17% (std = 11.2) and an average surface distance of 2.1 mm (std = 1.8), far better than stand-alone segmentation. Importantly, the robustness of our method improved from 67% for stand-alone global CNN segmentation to 100%. Unlike other deep learning approaches in medical imaging, which require large annotated training datasets, our method exploits the follow-up framework to yield accurate tumor tracking and failure detection and correction with a small training dataset.

Graphical abstract: Flow diagram of the proposed method. In the offline mode (orange), a global CNN is trained as a voxel classifier to segment liver tumors, as in [31]. The online mode (blue) is used for each new case. The inputs are the baseline scan with its delineation and the follow-up CT scan to be segmented. The main novelty is the ability to predict failures by trying the system on the baseline scan and to correct them using the patient-specific CNN.
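The predict-failure-and-fall-back logic described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN models are stand-ins passed in as callables, and the Dice-based scoring and the 0.8 threshold are assumptions for the sake of the example (the paper reports overlap error, and does not publish this exact scoring rule or threshold).

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    inter = np.logical_and(seg_a, seg_b).sum()
    total = seg_a.sum() + seg_b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def delineate_followup(global_cnn, train_patient_cnn, baseline_scan,
                       baseline_delineation, followup_scan, threshold=0.8):
    """Score the global CNN against the trusted baseline delineation.

    If the baseline score predicts success, segment the follow-up scan
    with the global CNN; otherwise train a patient-specific CNN from the
    baseline scan/delineation pair and use it instead.
    """
    # Run the global CNN on the baseline scan, where ground truth is known.
    baseline_pred = global_cnn(baseline_scan)
    score = dice_overlap(baseline_pred, baseline_delineation)

    if score >= threshold:  # predicted success: reuse the global CNN
        return global_cnn(followup_scan), "global"

    # Predicted failure: build a patient-specific classifier instead.
    patient_cnn = train_patient_cnn(baseline_scan, baseline_delineation)
    return patient_cnn(followup_scan), "patient-specific"
```

The key design point carried over from the paper is that the baseline delineation doubles as a validation set: the global CNN's score on a case where ground truth exists is used as a proxy for its reliability on the unlabeled follow-up scan.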

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 76 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    76       100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student           22       29%
Student > Master                   9       12%
Researcher                         7        9%
Student > Postgraduate             5        7%
Other                              4        5%
Other                              9       12%
Unknown                           20       26%
Readers by discipline                   Count    As %
Computer Science                        17       22%
Medicine and Dentistry                  14       18%
Engineering                              6        8%
Agricultural and Biological Sciences     5        7%
Physics and Astronomy                    2        3%
Other                                    7        9%
Unknown                                 25       33%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 13 November 2018.
All research outputs: #19,951,180 of 25,382,440 outputs
Outputs from Medical & Biological Engineering & Computing: #1,796 of 2,053 outputs
Outputs of similar age: #256,996 of 349,574 outputs
Outputs of similar age from Medical & Biological Engineering & Computing: #10 of 14 outputs
Altmetric has tracked 25,382,440 research outputs across all sources so far. This one is in the 18th percentile – i.e., 18% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,053 research outputs from this source. They receive a mean Attention Score of 3.8. This one is in the 11th percentile – i.e., 11% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 349,574 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 21st percentile – i.e., 21% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 14 others from the same source and published within six weeks on either side of this one. This one is in the 28th percentile – i.e., 28% of its contemporaries scored the same or lower than it.