An evaluation of 'ChatGPT' Compared to Dermatological Surgeons' Choice of Reconstruction of Mohs Surgical Defects.
Clinical and Experimental Dermatology 2024 May 13
BACKGROUND: ChatGPT® (OpenAI; California, USA) is an open-access chatbot developed using artificial intelligence (AI) that generates human-like responses.
OBJECTIVE: To evaluate ChatGPT-4's concordance with three dermatologic surgeons on reconstructions for dermatological surgical defects.
METHODS: A total of 70 cases of non-melanoma skin cancer treated with surgery were obtained from clinical records for analysis. A list of 30 reconstruction options, which included primary closure, secondary skin closure, skin flaps and skin grafts, was designed by the main authors. Three blinded dermatologic surgeons, along with ChatGPT-4, were asked to select two reconstruction options from the list.
RESULTS: Seventy responses were analyzed using Cohen's kappa to assess concordance between each dermatologist and ChatGPT. The level of agreement among the dermatologic surgeons was higher than that between the dermatologic surgeons and ChatGPT, highlighting differences in decision-making. For the best reconstruction technique, the results indicated a fair level of agreement among the dermatologists, with κ ranging from 0.268 to 0.331. However, concordance between ChatGPT-4 and the dermatologists was slight, with κ values from 0.107 to 0.121. In the analysis of second-choice options, the dermatologists showed slight agreement; in contrast, the level of concordance between ChatGPT-4 and the dermatologists was below chance.
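(For reference, and not drawn from the paper itself: Cohen's kappa quantifies agreement between two raters beyond what chance alone would produce,

\[ \kappa = \frac{p_o - p_e}{1 - p_e}, \]

where \(p_o\) is the observed proportion of agreement and \(p_e\) is the agreement expected by chance. The reported values fall within the conventional Landis and Koch interpretation bands of "slight" (0.00 to 0.20) and "fair" (0.21 to 0.40) agreement, while negative values indicate agreement below chance.)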
CONCLUSIONS: As anticipated, this study reveals variability in medical decisions between dermatologic surgeons and ChatGPT. Although these tools offer exciting possibilities for the future, it is vital to acknowledge the risk of inadvertently relying on non-certified AI for medical advice.