JOURNAL ARTICLE
VALIDATION STUDY
Experts' responses in script concordance tests: a response process validity investigation.
Medical Education 2019 July
CONTEXT: The script concordance test (SCT), designed to measure clinical reasoning in complex cases, has recently been the subject of several critical research studies. Amongst other issues, response process validity evidence remains lacking. We explored the response processes of experts on an SCT scoring panel to better understand their seemingly divergent beliefs about how new clinical data alter the suitability of proposed actions within simulated patient cases.
METHODS: A total of 10 Argentine gastroenterologists who served as the expert panel on an existing SCT re-answered 15 cases 9 months after their original panel participation. They then answered questions probing their reasoning and reactions to other experts' perspectives.
RESULTS: The experts sometimes noted they would not ordinarily consider the actions proposed for the cases at all (30/150 instances [20%]) or would collect additional data first (54/150 instances [36%]). Even when groups of experts agreed about how new clinical data in a case affected the suitability of a proposed action, there was often disagreement (118/133 instances [89%]) about the suitability of the proposed action before the new clinical data had been introduced. Experts reported confidence in their responses, but showed limited consistency with the responses they had given 9 months earlier (linear weighted kappa = 0.33). Qualitative analyses showed nuanced and complex reasons behind experts' responses, revealing, for example, that experts often considered the unique affordances and constraints of their varying local practice environments when responding. Experts generally found other experts' alternative responses moderately compelling (mean ± standard deviation 2.93 ± 0.80 on a 5-point scale, where 3 = moderately compelling). Experts switched their own preferred responses after seeing others' reasoning in 30 of 150 (20%) instances.
CONCLUSIONS: Expert response processes were not consistent with the classical interpretation and use of SCT scores. However, several fruitful and justifiable alternatives for the use of SCT-like methods are proposed, such as to guide assessments for learning.