Fully automated segmentation and volumetric measurement of ocular adnexal lymphoma by deep learning-based self-configuring nnU-net on multi-sequence MRI: a multi-center study.
Neuroradiology 2024 July 17
PURPOSE: To evaluate nnU-net's performance in automatically segmenting and volumetrically measuring ocular adnexal lymphoma (OAL) on multi-sequence MRI.
METHODS: We collected T1-weighted (T1), T2-weighted, and T1-weighted contrast-enhanced images with/without fat saturation (T2_FS/T2_nFS, T1c_FS/T1c_nFS) of OAL from four institutions. Two radiologists manually annotated lesions as the ground truth using ITK-SNAP. The self-configuring deep learning framework nnU-net was trained as two models: Model 1 on T1, T2, and T1c, and Model 2 on T1 and T2 only. Five-fold cross-validation was used during training. Segmentation performance was evaluated with the Dice similarity coefficient (DSC), sensitivity, and positive predictive value (PPV). Volumetric agreement was assessed with Bland-Altman plots and Lin's concordance correlation coefficient (CCC).
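As an illustrative sketch only (not code from the study), the reported segmentation metrics can be computed from a predicted and a ground-truth binary mask; the function name, array inputs, and NumPy usage are assumptions.

import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """DSC, sensitivity, and PPV for two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true-positive voxels
    fp = np.logical_and(pred, ~truth).sum()   # false-positive voxels
    fn = np.logical_and(~pred, truth).sum()   # false-negative voxels
    dsc = 2 * tp / (2 * tp + fp + fn + eps)   # Dice similarity coefficient
    sensitivity = tp / (tp + fn + eps)        # fraction of ground-truth voxels found
    ppv = tp / (tp + fp + eps)                # fraction of predicted voxels that are correct
    return dsc, sensitivity, ppv

In practice the masks would come from the nnU-net prediction and the ITK-SNAP annotation resampled to the same voxel grid, with metrics averaged over the test patients.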
RESULTS: A total of 147 patients from one center were used as the training set, and 33 patients from the three other centers served as the test set. For both Model 1 and Model 2, nnU-net demonstrated outstanding segmentation performance on T2_FS, with DSC of 0.80-0.82, PPV of 84.5-86.1%, and sensitivity of 77.6-81.2%. Model 2 failed to detect lesions in 19 T1c cases; its DSC, PPV, and sensitivity on T1_nFS were 0.59, 91.2%, and 51.4%, respectively. Bland-Altman plots revealed minor tumor volume differences of 0.22-1.24 cm³ between the nnU-net prediction and the ground truth on T2_FS. The CCCs for T2_FS images were 0.96 for Model 1 and 0.93 for Model 2.
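For illustration only (again, not the study's code), the volumetric agreement statistics can be reproduced from paired predicted and reference tumor volumes in cm³; the variable names below are assumptions.

import numpy as np

def bland_altman(pred_vol: np.ndarray, ref_vol: np.ndarray):
    """Mean volume difference (bias) and 95% limits of agreement."""
    diff = pred_vol - ref_vol
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)      # half-width of the limits of agreement
    return bias, bias - half_width, bias + half_width

def lins_ccc(pred_vol: np.ndarray, ref_vol: np.ndarray) -> float:
    """Lin's concordance correlation coefficient for paired measurements."""
    mx, my = pred_vol.mean(), ref_vol.mean()
    vx, vy = pred_vol.var(), ref_vol.var()            # population variances
    cov = np.mean((pred_vol - mx) * (ref_vol - my))   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

A CCC near 1, as reported for T2_FS, indicates that the predicted volumes track the reference volumes closely in both correlation and absolute scale.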
CONCLUSION: nnU-net offered excellent performance in automated segmentation and volumetric assessment of OAL on MRI, particularly on T2_FS images.