Conversion of single-energy computed tomography to parametric maps of dual-energy computed tomography using convolutional neural network.
British Journal of Radiology 2024 April 11
OBJECTIVES: We propose a deep learning (DL) multi-task learning framework using a convolutional neural network (CNN) for direct conversion of single-energy CT (SECT) to three parametric maps of dual-energy CT (DECT): virtual monochromatic image (VMI), effective atomic number (EAN), and relative electron density (RED).
METHODS: We propose VMI-Net for converting SECT images to 70, 120, and 200 keV VMIs. In addition, EAN-Net and RED-Net were developed to convert SECT to EAN and RED maps. We trained and validated our model on data from 67 patients collected between 2019 and 2020. SECT images acquired at 120 kVp by a DECT scanner (IQon spectral CT, Philips) were used as input, while the VMIs, EAN, and RED maps acquired by the same device were used as targets. The performance of the DL framework was evaluated using absolute difference (AD) and relative difference (RD).
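The abstract does not give the exact formulas behind AD and RD; a minimal sketch, assuming they are computed as the per-pixel mean absolute difference and the mean percent difference relative to the ground-truth map (function names and the epsilon guard are illustrative, not from the paper):

```python
import numpy as np

def absolute_difference(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute difference between a converted map and its ground truth."""
    return float(np.mean(np.abs(pred - target)))

def relative_difference(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Mean relative difference in percent; eps avoids division by zero."""
    return float(100.0 * np.mean(np.abs(pred - target) / (np.abs(target) + eps)))

# Toy check on synthetic "maps": a uniform 100 HU target with a +1 HU offset
target = np.full((4, 4), 100.0)
pred = target + 1.0
print(absolute_difference(pred, target))   # 1.0
print(relative_difference(pred, target))   # ~1.0 (%)
```

With this convention, the reported VMI results (AD of 9.02 HU, RD of 0.41%) would simply be these two statistics averaged over the test images.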
RESULTS: VMI-Net converted 120 kVp SECT to VMIs with an AD of 9.02 Hounsfield units (HU) and an RD of 0.41% relative to the ground-truth VMIs. The ADs of the converted EAN and RED were 0.29 and 0.96, respectively, while the corresponding RDs were 1.99% and 0.50%.
CONCLUSIONS: SECT images were directly converted to the three parametric maps of DECT (i.e., VMIs, EAN, and RED). With this model, parametric information can be generated from SECT images without a DECT device, supporting retrospective investigation of parametric information from existing SECT scans.
ADVANCES IN KNOWLEDGE: A deep learning framework enables direct conversion of SECT into high-quality parametric maps of DECT.