Family Medicine Clerkship Assessment Analysis: Application and Implications
Sarah E. Stumbar, MD, MPH
Herbert Wertheim College of Medicine, Florida International University, Miami, FL
Rodolfo Bonnin, PhD
Herbert Wertheim College of Medicine, Florida International University, Miami, FL
Abstract
Introduction
Clerkship assessment structures should consist of a systematic process that includes information from exam and assignment data to legitimize student grades and achievement. Analyzing student performance across assessments, rather than on a single assignment, provides a more accurate picture to identify academically at-risk students. This paper presents the development and implications of a structured approach to assessment analysis for the Family Medicine Clerkship at Florida International University Herbert Wertheim College of Medicine.
Methods
The assessment analysis included a table presenting the distribution of all assessment performance results for 166 clerkship students from April 2018 to June 2019. A correlation table showed linear relationships between performance on all graded activities. We conducted a Pearson correlation analysis (r), coefficient of determination (r²), multiple regression analysis, and reliability of performance analysis.
Results
Performance on one assessment—the core skills quiz—yielded a statistically significant correlation (r=.409, r²=.16, P<.001) with the final clerkship grade. The reliability of performance analyses showed low performers (<−1.7 SD) had both a low mean quiz score (59.6) and final grade (83). Top performers (>−1.7 SD) had both a high mean quiz score (88.5) and final grade (99.6). This was confirmed by multiple regression analysis.
Conclusion
The assessment analysis revealed a strong linear relationship between the core skills quiz and final grade; this relationship did not exist for other assignments. In response to the assessment analysis, the clerkship adapted the grading weight of its assignments to reflect their utility in differentiating academic performance and implemented faculty development regarding grading for multiple assignments.
Introduction
Identifying low-performing students early in medical school can help facilitate early implementation of additional academic support services. As such, assessments that effectively measure medical knowledge related to a learning experience are critical.1,2 Low-stakes assessments like quizzes suffer from a lack of student motivation and may not be a quality measure.3 However, as academic performance among medical students is relatively consistent and predictive,4 low-stakes assessments are tools for early identification of academically at-risk students; little scholarship exists on this.5,6
Medical schools in the United States are highly selective, thereby creating a homogeneous population of academically successful medical students7,8 who should perform in a consistent manner from assessment to assessment.9,10 The content of clerkship assessments measures knowledge related to a set of learning objectives (eg, family medicine clerkship curriculum),11 which makes quizzes and other assessments integrated as related measures where performance should be consistent.
Multiple assessments within a clerkship measure knowledge about one domain. A strong positive relationship among assessments supports convergent validity as knowledge measures,11 while statistical correlations between assessment outcomes predict learning.12 Therefore, assessment quality should be viewed as a holistic system.13 Analyzing student performance across assessments provides a more accurate picture that identifies at-risk students and predicts later performance. This paper presents the assessment analysis approach used to evaluate the relationships between Florida International University Herbert Wertheim College of Medicine (FIU HWCOM)'s Family Medicine Clerkship's graded assignments and final grade.
Methods
We conducted an assessment analysis for 166 students who completed the required 8-week, third-year family medicine clerkship between April 2018 and June 2019. The FIU HWCOM Family Medicine Clerkship includes multiple graded assignments that are due throughout the clerkship (Table 1).
Table 1

| Assignment | Percent of Final Grade | Due Date (Week of Clerkship) | Description |
|---|---|---|---|
| Core skills quiz | 4 | 1 | Thirteen multiple choice questions that cover topics such as antibiotics, asthma, urology, dermatology, and gynecology. Supplements a hands-on skills simulation activity such as use of metered dose inhaler, urine dipstick analysis, prostate and cervical exam. |
| Patient note | 3 | 2 | A full written history and physical, followed by several reflective questions regarding students' performance in the encounter. |
| Student teaching | 10 | 2–6 | Interactive 15-minute student teaching presentation on an assigned core family medicine topic, such as hypertension, GERD, constipation. |
| Narrative medicine assignment | 4 | 6 | A faculty-led small group and written narrative essay, which asks students to reflect on a meaningful patient encounter. |
| Clinical assessment of student performance (CASP) | 25 | 8 | Preceptor's clinical evaluation of student |
| End of clerkship assessment (EOCA)* | 17 | 8 | Three-station OSCE, requiring students to evaluate a standardized patient, write a patient note, and present the case to a faculty member. |
| NBME Subject Examination | 30 | 8 | National Board of Medical Examiners family medicine subject exam |
The statistical foundation of the assessment analysis model is based on linear associations between the outcomes of the assessments. Students who perform well on one assessment should perform well on others. Even with individual variability, performance should remain relatively stable up and down the performance scale. We conducted a Pearson correlation (r) and coefficient of determination (r²), resulting in a table that provides a statistical interpretation of the relationships.
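For illustration, a pairwise correlation table of this kind can be produced with a short script. The following is a minimal sketch, not the clerkship's actual code; the file name `clerkship_scores.csv` and the column names are hypothetical placeholders for the assessment data.

```python
# Minimal sketch: build a Pearson r / r-squared / P-value table across assessments.
# File and column names are hypothetical, not from the study.
import pandas as pd
from scipy import stats

scores = pd.read_csv("clerkship_scores.csv")  # one row per student
assessments = ["core_skills_quiz", "casp", "eoca", "nbme_subject", "clerkship_score"]

rows = []
for i, a in enumerate(assessments):
    for b in assessments[i + 1:]:
        paired = scores[[a, b]].dropna()          # keep students with both scores
        r, p = stats.pearsonr(paired[a], paired[b])
        rows.append({"pair": f"{a} vs {b}", "r": r, "r_squared": r ** 2, "p_value": p})

print(pd.DataFrame(rows).round(4))
```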
In addition to the correlation table, we created a line chart to confirm linear relationships. This chart serves as a visual feedback mechanism to identify characteristics of the outcomes, including spreads in scores, group outliers, and relationships with other assessments. Finally, we performed a multiple regression analysis. We obtained institutional review board exemption for this study.
Results
The analysis revealed that only the core skills quiz had a statistically significant relationship with student performance across all other assessments and with the final grade (r=.409, r²=.167, P<.0001; Table 2). The results are supported in Figure 1, which shows the performance relationships across the clerkship assessments.
Table 2

| Family Medicine Clerkship | Core Skills Quiz | Clinical Assessment of Student Performance | End-of-Clerkship Assessment* | NBME Family Medicine Subject | Clerkship Score |
|---|---|---|---|---|---|
| Core skills quiz | 1 | | | | |
| Clinical assessment of student performance | r=0.16994, r²=0.0289, P=.0286 | 1 | | | |
| End-of-clerkship assessment* | r=0.23052, r²=0.0531, P=.0026 | r=0.09625, r²=0.0093, P=.2174 | 1 | | |
| NBME family medicine subject | r=0.18003, r²=0.0324, P=.0195 | r=0.13633, r²=0.0186, P=.0799 | r=0.25809, r²=0.0666, P=.0007 | 1 | |
| Clerkship score | r=0.40902, r²=0.1673, P<.0001 | r=0.40597, r²=0.1648, P<.0001 | r=0.64863, r²=0.4207, P<.0001 | r=0.48937, r²=0.2395, P<.0001 | 1 |
To account for individual factors related to student performance variability, the stratified averages are based on the clerkship grade. Each data point represents the mean score of 16 students arranged from top to bottom; the bottom six groups each include 17 students. The performance outcomes for the core skills quiz and final grade show a consistent or stable pattern of performance with only one set of means with crossover. Additionally, there is a distinct visual separation between the bottom group of students and the next group up, allowing for easy identification of the group of students most at risk for poor performance. Furthermore, for the eight students who were the lowest performers, scoring below −1.7 SD from the cohort mean, there was an average core skills quiz score of 59.6, compared to the cohort mean of 78.5 (excluding the bottom eight scores). The mean final grade was 83, compared to the cohort mean of 92.6 (excluding the bottom eight scores). Students scoring above −1.7 SD had a mean core skills quiz score of 78.5 and mean final grade of 92.6.
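As an illustration of this stratified view, the sketch below (again with hypothetical file and column names, not the study's code) sorts students by final clerkship grade, forms roughly equal groups, reports group means, and flags students falling below the −1.7 SD cutoff.

```python
# Minimal sketch: stratified group means and a -1.7 SD at-risk cutoff.
# File and column names are hypothetical, not from the study.
import pandas as pd

scores = pd.read_csv("clerkship_scores.csv")
scores = scores.sort_values("clerkship_score", ascending=False).reset_index(drop=True)

n_groups = 10                                   # ~16-17 students per group for n=166
scores["group"] = pd.qcut(scores.index, n_groups, labels=False)

group_means = scores.groupby("group")[["core_skills_quiz", "clerkship_score"]].mean()
print(group_means)                              # one row per performance stratum

cutoff = scores["clerkship_score"].mean() - 1.7 * scores["clerkship_score"].std()
at_risk = scores[scores["clerkship_score"] < cutoff]
print(f"{len(at_risk)} students fall below the -1.7 SD cutoff ({cutoff:.1f})")
```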
We conducted a multiple regression analysis to examine the relationship between the final clerkship grade and various potential predictors such as the core skills quiz, Clinical Assessment of Student Performance (CASP), End of Clerkship Assessment (EOCA), and National Board of Medical Examiners (NBME) subject exam (Table 3).
Table 3. Descriptive Statistics

| Variable | N | Mean | SD | Minimum | Maximum |
|---|---|---|---|---|---|
| Core skills quiz | 168 | 77.56 | 14.30 | 31 | 100 |
| CASP final score | 166 | 96.91 | 3.45 | 85 | 100 |
| EOCA final score | 168 | 89.10 | 5.30 | 73 | 100 |
| NBME FM subject | 168 | 87.39 | 5.27 | 75 | 100 |
| Final clerkship grade | 168 | 92.10 | 3.27 | 75 | 99 |
The multiple regression model with all four predictors produced R²=.659, F(4, 161)=77.62, P<.0001. The scores of the core skills quiz, CASP, EOCA, and NBME subject exam had significant positive regression weights, indicating that students with higher scores on these scales are expected to have higher final grades, after controlling for other variables in the model (Table 4).
Table 4

| | Beta | P Value | CI |
|---|---|---|---|
| Intercept | 19.261 | .000 | 9.386 – 29.136 |
| Core Skills Quiz | 0.044 | <.0001 | 0.022 – 0.066 |
| CASP Final Score | 0.272 | <.0001 | 0.184 – 0.360 |
| EOCA Final Score | 0.312 | <.0001 | 0.252 – 0.371 |
| NBME FM Subject | 0.175 | <.0001 | 0.115 – 0.235 |
| Observations | 166 | | |
| R²/adjusted R² | 0.659/0.650 | | |
| F test | 77.62 | | |
Our results show that, for each additional one-point score in the core skills quiz, the average final grade is expected to increase by 0.044, assuming that NBME subject exam, CASP, and EOCA final scores remain constant. The coefficients from the output of this model can be used to create the following estimated regression equation: Final Clerkship Grade = 19.261 + 0.044*Core Skills Quiz + 0.272*CASP Final Score + 0.312*EOCA Final Score + 0.175*NBME Subject Exam, which can be used to predict the final clerkship grade for a student based on the scores of the core skills quiz, CASP, EOCA, and NBME subject exam.
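For example, the estimated equation can be applied directly to a student's scores. The following sketch simply encodes the reported coefficients; the input values are illustrative (chosen near the cohort means in Table 3) rather than data from any actual student.

```python
# Minimal sketch: apply the reported regression equation to illustrative scores.
def predict_final_grade(core_quiz, casp, eoca, nbme):
    """Predicted final clerkship grade from the fitted model's coefficients."""
    return (19.261
            + 0.044 * core_quiz
            + 0.272 * casp
            + 0.312 * eoca
            + 0.175 * nbme)

# Illustrative inputs near the cohort means from Table 3; prediction is ~92.1.
print(round(predict_final_grade(core_quiz=77.6, casp=96.9, eoca=89.1, nbme=87.4), 1))
```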
Conclusions
The assessment analysis revealed a strong linear relationship between the core skills quiz and the final clerkship grade; this did not exist for other assignments. The core skills quiz is an objective measure of medical knowledge, whereas other assignments worth less than 10% of the final grade are subjective, and all students tend to perform well on them.
The regression analysis further confirmed the predictive nature of the core skills quiz for the final grade, along with its relationship to other assessments within the clerkship.11 Even though the core skills quiz is weighted at 4% of the overall grade, our results indicate that it contributes meaningfully to the assessment of students' achievement of the clerkship learning objectives.
Through the cess assay, it became clear that, while subjective assignments are important to students' learning, they do non assist in predicting lower-performing students warranting early intervention past faculty. However, the core skills quiz can serve every bit an evidence-based predictor to afterward operation. This system could be used to identify academically at-chance students early in the clerkship, thereby allowing kinesthesia to identify poor performance, arbitrate, and monitor the student's operation. Students tin be informed of the core skills quiz's predictive nature and have fourth dimension to refocus their studying.14
This study had limitations. While the core skills quiz assesses medical knowledge, it is not a direct predictor of clinical skills, which are evaluated through the CASP and EOCA. Yet, the core skills quiz had a strong statistical association with these clinical performance assessments. Unsurprisingly, students with high levels of medical knowledge are also high performers in the clinical setting. Another limitation was that this included an examination of the relationships of assessments in one clerkship at one medical school, and these specific findings are not generalizable to clerkships at other medical schools. However, our methodology, as outlined above, could easily be implemented for other clerkships and at different institutions.
As a result of the assessment analysis, the clerkship adjusted the grading weight of its assignments to more accurately reflect their utility in differentiating academic performance. Faculty development was implemented regarding grading for the subjective assignments, with additional adjustments made to the assessment rubrics to help in stratifying performance.
The model used in this study provides a user-friendly approach to identifying and predicting student performance. Observing the statistical associations between assessments can also serve as an additional feedback mechanism to enhance assessment quality. A measurable understanding of how assessments and the final clerkship grades are statistically associated with one another can be used to further develop curricular plans.
References
one. Downing SM. The furnishings of violating standard item writing principles on tests and students: the consequences of using flawed test items on achievement examinations in medical education. Adv Health Sci Educ Theory Pract. 2005;10(2):133–143. doi: 10.1007/s10459-004-4019-5. [PubMed] [CrossRef] [Google Scholar]
2. Vanderbilt AA, Feldman M, Wood IK. Assessment in undergraduate medical didactics: a review of class exams. Med Educ Online. 2013;18(1):ane–v. doi: ten.3402/meo.v18i0.20438. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
three. Cole JS, Bergin DA, Whittaker TA. Predicting pupil achievement for depression stakes tests with effort and task value. Contemp Educ Psychol. 2008;33(4):609–624. doi: 10.1016/j.cedpsych.2007.10.002. [CrossRef] [Google Scholar]
four. Griffin B, Bayl-Smith P, Hu W. Predicting patterns of alter and stability in student performance across a medical degree. Med Educ. 2018;52(four):438–446. doi: 10.1111/medu.13508. [PubMed] [CrossRef] [Google Scholar]
5. Milton O, Pollio Hr, Eison JA. Making Sense of College Grades. San Francisco: Jossey-Bass Publishers; 1986. [Google Scholar]
6. Franke M. Final exam weighting as part of grade design. Teach Larn Inq. 2018;6(i):91–103. doi: 10.20343/teachlearninqu.half dozen.1.9. [CrossRef] [Google Scholar]
seven. Businesswoman T, Grossman RI, Abramson SB, et al. Signatures of medical student applicants and academic success. PLoS Ane. 2020;15(ane):e0227108. doi: ten.1371/periodical.pone.0227108. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
8. Scott JN, Markert RJ, Dunn MM. Critical thinking: change during medical school and relationship to performance in clinical clerkships. Med Educ. 1998;32(ane):14–18. doi: x.1046/j.1365-2923.1998.00701.x. [PubMed] [CrossRef] [Google Scholar]
9. Schuwirth LW, Van der Vleuten CP. Programmatic assessment: from cess of learning to assessment for learning. Med Teach. 2011;33(half dozen):478–485. doi: 10.3109/0142159X.2011.565828. [PubMed] [CrossRef] [Google Scholar]
10. van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205–214. doi: 10.3109/0142159X.2012.652239. [PubMed] [CrossRef] [Google Scholar]
11. Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ. 2003;37(9):830–837. doi: ten.1046/j.1365-2923.2003.01594.10. [PubMed] [CrossRef] [Google Scholar]
12. Goldsmith TE, Jognson PJ, Acton WH. Assessing structural knowledge. J Educ Psychol. 1991;83(1):88–96. doi: 10.1037/0022-0663.83.ane.88. [CrossRef] [Google Scholar]
xiii. Crowe A, Dirks C, Wenderoth MP. Biology in flower: implementing Bloom's Taxonomy to enhance educatee learning in biology. CBE Life Sci Educ. 2008;7(4):368–381. doi: 10.1187/cbe.08-05-0024. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
14. Alharbi Z, Cornford J, Dolder L, De La Iglesia B. Using data mining techniques to predict students at take a chance of poor functioning. SAI Calculating Conference (SAI) 2016:523–531. doi: ten.1109/SAI.2016.7556030. [CrossRef] [Google Scholar]