Career caseload predicts interobserver agreement on the final classification of a mammogram.
Journal of Medical Imaging and Radiation Oncology, 2019 February 2
INTRODUCTION: Differences in radiologists' experience can potentially introduce interobserver variability in reading mammograms. This work investigated the effect of radiologists' experience on agreement on mammographic final classification.
METHODS: This was a cross-sectional study. Seventeen radiologists were asked to provide their final impression on 60 mammogram cases. Experience parameters included breast subspecialty, years reading mammograms, cases read per year and career caseload. Career caseload was calculated by multiplying years reading mammograms by the average number of cases read per year. The interobserver agreement was calculated using Cohen kappa (κ). The difference in κ between radiologists' groups was compared using the independent-sample t-test and analysis of variance.
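The agreement statistic described above, Cohen's kappa, compares observed agreement between two readers against the agreement expected by chance from their marginal rating frequencies: κ = (p_o − p_e) / (1 − p_e). A minimal sketch of that computation in Python follows; the reader labels are hypothetical illustrative data, not the study's.

```python
# Minimal sketch of Cohen's kappa for two readers' final classifications.
# Labels and data are illustrative assumptions, not drawn from the study.

from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    n = len(ratings_a)
    # Observed agreement: fraction of cases where the two readers agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: from each reader's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical final classifications (e.g. BI-RADS-style categories) for 10 cases
reader1 = [1, 2, 2, 3, 4, 4, 5, 3, 2, 1]
reader2 = [1, 2, 3, 3, 4, 5, 5, 3, 2, 2]
print(round(cohen_kappa(reader1, reader2), 2))  # → 0.62
```

With 17 readers, the study's average κ would be obtained by computing this statistic for each reader pair and averaging across pairs.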
RESULTS: The average interobserver agreement was 0.25 (fair). A small difference was found between breast radiologists and general radiologists (κ = 0.21 and 0.29, respectively; P = 0.019). Years reading mammograms and cases read per year did not significantly affect interobserver agreement (P = 0.056 and 0.273, respectively). Radiologists with a career caseload of at least 2500 cases showed significantly higher consistency than those who had read fewer. κ for radiologists with a career caseload of 2500-4000 cases and >4000 cases was 0.33 and 0.28, respectively, whereas for <2500 cases κ was 0.17 (P = 0.001).
CONCLUSION: A fair level of interobserver agreement on the final classification of a mammogram was demonstrated. Career caseload was the experience parameter most strongly associated with interobserver agreement. Training strategies aiming to increase radiologists' career caseload may be beneficial.