Using Large Language Models to Generate Educational Materials on Childhood Glaucoma.
American Journal of Ophthalmology 2024 April 12
PURPOSE: To evaluate the quality, readability, and accuracy of large language model (LLM) generated patient education materials (PEMs) on childhood glaucoma, and their ability to improve existing online information's readability.
DESIGN: Cross-sectional comparative study.
METHODS: We evaluated responses of ChatGPT-3.5, ChatGPT-4, and Bard to three separate prompts requesting they write PEMs on "childhood glaucoma." Prompt A required PEMs be "easily understandable by the average American." Prompt B required PEMs be written "at a 6th-grade level using the Simple Measure of Gobbledygook (SMOG) readability formula." We then compared responses' quality (DISCERN questionnaire, Patient Education Materials Assessment Tool (PEMAT)), readability (SMOG, Flesch-Kincaid Grade Level (FKGL)), and accuracy (Likert misinformation scale). To assess improvement of readability for existing online information, Prompt C requested each LLM rewrite 20 resources from a Google search of the keyword "childhood glaucoma" to the American Medical Association-recommended "6th-grade level." Rewrites were compared on key metrics such as readability, complex words (≥3 syllables), and sentence count.
RESULTS: All 3 LLMs generated PEMs of high quality, understandability, and accuracy (DISCERN ≥4, ≥70% PEMAT understandability, misinformation score = 1). Prompt B responses were more readable than Prompt A responses for all 3 LLMs (p≤0.001). ChatGPT-4 generated the most readable PEMs compared with ChatGPT-3.5 and Bard (p≤0.001). Although Prompt C responses showed consistent reductions in mean SMOG and FKGL scores, only ChatGPT-4 achieved the specified 6th-grade reading level (4.8 ± 0.8 and 3.7 ± 1.9, respectively).
CONCLUSION: LLMs can serve as strong supplementary tools both in generating high-quality, accurate, and novel PEMs and in improving the readability of existing PEMs on childhood glaucoma.
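For context, the two readability formulas used in the METHODS (SMOG and FKGL) are standard published formulas computed from sentence, word, and syllable counts. The sketch below is a minimal illustration, not the study's instrument: it uses an approximate vowel-group syllable counter (production tools rely on pronunciation dictionaries), so its scores can differ slightly from validated calculators.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: each run of vowels counts as one syllable.
    # Real readability tools use pronunciation dictionaries (e.g., CMUdict).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> dict:
    # Split sentences on terminal punctuation; extract word tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [count_syllables(w) for w in words]
    # "Complex words" in the abstract's sense: >= 3 syllables.
    polysyllables = sum(1 for s in syllables if s >= 3)
    n_sent, n_words = len(sentences), len(words)

    # SMOG grade (McLaughlin, 1969):
    #   1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
    smog = 1.0430 * math.sqrt(polysyllables * (30 / n_sent)) + 3.1291

    # Flesch-Kincaid Grade Level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (sum(syllables) / n_words) - 15.59

    return {"smog": round(smog, 1), "fkgl": round(fkgl, 1)}
```

A text composed of short, monosyllabic sentences scores at or below early grade levels on both metrics, which is the property the study's 6th-grade-level prompts target.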