COMPARATIVE STUDY
JOURNAL ARTICLE

Exploring the process of final year objective structured clinical examination for improving the quality of assessment.

OBJECTIVE: To explore the process of final year Objective Structured Clinical Examination (OSCE) for improving the quality of assessment.

METHODS: This analytical cross-sectional study used purposive sampling of one year of Medicine Objective Structured Clinical Examination (OSCE) scores from the Final Year batch of 2009 at the Shifa College of Medicine, Islamabad. Scores of 77 Final Year students from December 2008 to December 2009 were analysed. The examination process and the interpretation of the scores were evaluated using the Standards for Educational and Psychological Testing as the conceptual framework for validity testing, which identifies five sources of test validity evidence. Internal consistency reliability of the examination was determined by Cronbach's alpha. Comparison and correlation between students' end-of-clerkship (EOC) and end-of-year (EOY) examination scores were analysed by the paired-sample t-test and Pearson's correlation coefficient, respectively.
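The three statistics named above can be sketched on synthetic data. This is an illustrative computation only: the station count, score distributions, and random seed are hypothetical assumptions; only the cohort size of 77 students comes from the study, and none of the study's actual scores are reproduced here.

```python
import numpy as np
from scipy import stats


def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for a (students x stations) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each station
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


rng = np.random.default_rng(0)
n_students, n_stations = 77, 10  # 77 students as in the study; station count is hypothetical

# Simulated scores: a shared "ability" component plus station-level noise.
ability = rng.normal(60, 8, size=(n_students, 1))
eoc = ability + rng.normal(0, 10, size=(n_students, n_stations))       # end-of-clerkship
eoy = ability - 3 + rng.normal(0, 11, size=(n_students, n_stations))   # end-of-year, slightly lower

alpha_eoc = cronbach_alpha(eoc)
alpha_eoy = cronbach_alpha(eoy)

# Paired-sample t-test on each student's mean score across stations (EOC vs EOY)
t, p = stats.ttest_rel(eoc.mean(axis=1), eoy.mean(axis=1))

# Pearson correlation between EOC and EOY mean scores
r, p_r = stats.pearsonr(eoc.mean(axis=1), eoy.mean(axis=1))

print(f"alpha EOC={alpha_eoc:.2f}, alpha EOY={alpha_eoy:.2f}, t={t:.2f}, r={r:.2f}")
```

Because the same students sit both examinations, the paired t-test is the appropriate comparison of mean performance, while Pearson's r asks a separate question: whether students' relative standing is preserved between the two assessments.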

RESULTS: There was no significant positive correlation between the end-of-clerkship and end-of-year Medicine OSCE scores. Overall, the students performed better in the end-of-clerkship segment. Evaluation of exam stations showed that mean scores decreased significantly in almost all end-of-year stations. Reliability fell from 0.53 in the end-of-clerkship assessment to 0.48 in the end-of-year assessment. Content validity was established by blueprinting of the examination. Response-process evidence showed that the checklists, response key and rating scale were discussed with the faculty observing the stations. However, the absence of other important sources of validity evidence, such as standard setting for pass/fail criteria, together with the poor reliability, poses a serious threat to the validity of such exam scores.

CONCLUSIONS: Multiple sources of validity evidence are needed to make defensible assumptions from performance scores. Consideration of all sources and threats to validity evidence is important to improve the quality of assessment.
