Comparison of ChatGPT knowledge against 2020 consensus statement on ankyloglossia in children.
International Journal of Pediatric Otorhinolaryngology 2024 April 17
OBJECTIVE: This paper evaluates the accuracy and consistency of ChatGPT's information on ankyloglossia, a congenital oral condition, by assessing its alignment with expert consensus and exploring the implications for patients who rely on AI for medical information.
METHODS: Statements from the 2020 clinical consensus statement on ankyloglossia were presented to ChatGPT, and its responses were scored using a 9-point Likert scale. The study analyzed the mean and standard deviation of ChatGPT scores for each statement. Statistical analysis was conducted using Excel.
RESULTS: Of the 63 statements assessed, 67% of ChatGPT responses closely aligned with the expert consensus mean scores. However, for 17% (11/63) of statements, the ChatGPT mean response differed from the clinical consensus statement (CCS) mean by 2.0 or more, raising concerns about ChatGPT's potential role in disseminating uncertain or debated medical information. Variation in mean scores highlighted these discrepancies, with some statements deviating substantially from expert opinion.
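The analysis described above can be sketched in a few lines: each consensus statement is rated on a 9-point Likert scale, the mean and standard deviation of the repeated ChatGPT ratings are computed per statement, and statements whose ChatGPT mean differs from the published consensus mean by 2.0 or more are flagged. All names, ratings, and consensus values below are illustrative placeholders, not the study's actual data.

```python
# Hypothetical sketch of the per-statement scoring analysis.
# Ratings and consensus means are invented for illustration only.
from statistics import mean, stdev

# statement id -> (repeated ChatGPT Likert ratings (1-9), consensus (CCS) mean)
ratings = {
    "S1": ([8, 8, 9], 8.1),
    "S2": ([3, 4, 3], 6.5),  # mean 3.33 differs from 6.5 by >= 2.0, so flagged
}

DEVIATION_THRESHOLD = 2.0  # the study's cutoff for a concerning discrepancy

for stmt, (scores, ccs_mean) in ratings.items():
    m = mean(scores)
    sd = stdev(scores)
    flagged = abs(m - ccs_mean) >= DEVIATION_THRESHOLD
    print(f"{stmt}: mean={m:.2f} sd={sd:.2f} flagged={flagged}")
```

The same mean/standard-deviation summary could be done in Excel, as the paper states; the script simply makes the flagging rule explicit.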
CONCLUSION: While ChatGPT largely mirrored medical viewpoints on ankyloglossia, its alignment with non-consensus statements warrants caution in relying on it for medical advice. Future research should refine AI models, address inaccuracies, and explore diverse user queries to enable safe integration into medical decision-making. Despite its potential benefits, ongoing examination of ChatGPT's capabilities and limitations is crucial, given its impact on health equity and information access.