Interactive Machine Learning-Based Multi-Label Segmentation of Solid Tumors and Organs.

Applied sciences (Basel, Switzerland) | 2021

We sought to develop and evaluate a fast, accurate, and consistent method for general-purpose segmentation based on interactive machine learning (IML). To validate our method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground truth annotations. From very brief user training annotations and the adaptive geodesic distance transform, an ensemble of SVMs is trained, providing a patient-specific model that is applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found significant (p < 0.001) overlap differences for spleen (Dice_IML/Dice_Manual = 0.91/0.87), breast tumors (Dice_IML/Dice_Manual = 0.84/0.82), and lung nodules (Dice_IML/Dice_Manual = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (Dice_IML/Dice_Manual = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (Dice_IML/Dice_Manual = 0.91/0.87), breast (Dice_IML/Dice_Manual = 0.86/0.81), lung (Dice_IML/Dice_Manual = 0.85/0.89), and the non-enhancing (Dice_IML/Dice_Manual = 0.79/0.67) and enhancing (Dice_IML/Dice_Manual = 0.79/0.84) brain tumor sub-regions, which, in aggregate, favored our method. Quantitative evaluation of speed, spatial overlap, and consistency reveals the benefits of the proposed method compared with manual annotation for several clinically relevant problems. We publicly release our implementation through CaPTk (Cancer Imaging Phenomics Toolkit) and as an MITK plugin.
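
As a concrete illustration of the workflow described in the abstract, the following is a minimal, hypothetical Python sketch, not the authors' CaPTk/MITK implementation. It simplifies in two labeled ways: a plain Euclidean distance transform (scipy.ndimage.distance_transform_edt) stands in for the paper's adaptive geodesic distance transform, and it operates on a 2-D image; the function name segment_from_scribbles is invented for illustration.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from sklearn.ensemble import BaggingClassifier
    from sklearn.svm import SVC

    def segment_from_scribbles(image, scribbles):
        # image: 2-D float array; scribbles: same-shape int array where
        # 0 = unlabeled and 1..K = brief user-drawn class labels.
        feats = [image.astype(float)]
        for k in np.unique(scribbles[scribbles > 0]):
            # Distance from every pixel to the nearest class-k scribble
            # (Euclidean here; the paper uses an adaptive geodesic variant).
            feats.append(distance_transform_edt(scribbles != k))
        X = np.stack([f.ravel() for f in feats], axis=1)
        y = scribbles.ravel()
        labeled = y > 0
        # A small bagged SVM ensemble trained only on the scribbled pixels:
        # a patient-specific model that is then applied to the whole image.
        clf = BaggingClassifier(SVC(kernel="rbf"), n_estimators=5)
        clf.fit(X[labeled], y[labeled])
        return clf.predict(X).reshape(image.shape)

    # Toy usage: a bright disc on a dark background, one scribble per class.
    img = np.zeros((64, 64))
    yy, xx = np.ogrid[:64, :64]
    img[(yy - 32) ** 2 + (xx - 32) ** 2 < 144] = 1.0
    scribbles = np.zeros((64, 64), dtype=int)
    scribbles[32, 30:35] = 1   # foreground (tumor/organ) scribble
    scribbles[5, 5:15] = 2     # background scribble
    seg = segment_from_scribbles(img, scribbles)

In this toy form the model sees each pixel as (intensity, per-class scribble distance) features, which mirrors the patient-specific, annotation-driven training described above while omitting the adaptive geodesic weighting that makes the published method robust.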

PubMed ID: 34621541

Research resources used in this publication

None found

Additional research tools detected in this publication

Antibodies used in this publication

None found

Associated grants

  • Agency: NCI NIH HHS, United States
    ID: P30 CA008748
  • Agency: NINDS NIH HHS, United States
    ID: R01 NS042645
  • Agency: NCI NIH HHS, United States
    ID: U01 CA242871
  • Agency: NCI NIH HHS, United States
    ID: U24 CA189523

Publication data is provided by the National Library of Medicine® and PubMed®. Data is retrieved from PubMed® on a weekly schedule. For terms and conditions see the National Library of Medicine Terms and Conditions.

This is a list of tools and resources that we have found mentioned in this publication.


SciPy (tool)

RRID:SCR_008058

A Python-based environment of open-source software for mathematics, science, and engineering. The core packages of SciPy include: NumPy, a base N-dimensional array package; SciPy Library, a fundamental library for scientific computing; and IPython, an enhanced interactive console.
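
As a minimal, hypothetical illustration of how these packages fit together (the array comes from NumPy, the numerical routine from the SciPy Library; values are arbitrary):

    import numpy as np
    from scipy import ndimage

    img = np.random.rand(64, 64)                # a NumPy N-dimensional array
    smoothed = ndimage.median_filter(img, 3)    # a SciPy Library routine
    print(smoothed.shape)                       # (64, 64)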


OpenCV (tool)

RRID:SCR_015526

Computer vision and machine learning software library that provides a common infrastructure for computer vision applications. The algorithms within the library can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements and moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high-resolution image of an entire scene, find similar images in an image database, follow eye movements, recognize scenery, and establish markers to overlay it with augmented reality. It has C++, C, Python, Java, and MATLAB interfaces.
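
As a minimal, hypothetical sketch of the Python (cv2) interface, using a synthetic image in place of a file loaded with cv2.imread:

    import cv2
    import numpy as np

    img = np.zeros((64, 64, 3), dtype=np.uint8)                   # synthetic BGR image
    cv2.rectangle(img, (16, 16), (48, 48), (255, 255, 255), -1)   # filled white square
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # BGR -> grayscale
    edges = cv2.Canny(gray, 100, 200)                             # Canny edge map
    print(edges.shape)                                            # (64, 64)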
