Central venous catheterization (CVC) is a commonly performed procedure, and technical competence in its insertion is critical for patient safety. How such competence should be diagnosed, however, remains unclear. Using a unified framework of validity that outlines five sources of validity evidence, results from three published studies are presented on a formative simulation-based examination assessing technical competence in video-recorded performances of CVC by medical trainees.
Validity evidence based on content: study one evaluated all available published assessment tools for CVC performance. Items on each tool were classified into competency themes.
Response process: study two compared the reliability of assessment data from 18 video-recorded performances against direct observation, with each performance rated by two independent, trained raters. The adequacy of the video recordings was reviewed qualitatively.
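A chance-corrected agreement statistic is one common way to quantify inter-rater reliability of this kind. The sketch below computes Cohen's kappa for two raters making pass/fail judgements; the ratings are invented for illustration, and the studies may have used a different reliability measure (e.g. an intraclass correlation):

```python
# Illustrative only: Cohen's kappa for agreement between two raters on
# pass/fail judgements. The ratings below are invented, not study data.
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance: product of each rater's marginal proportions.
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 2))  # prints 0.67
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why kappa is preferred over raw percent agreement when categories are unevenly distributed.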
Internal structure: study three used principal component analysis to identify the dimensions measured by an 8-item global rating scale.
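Principal component analysis apportions the total variance in the item scores across orthogonal components; the proportion explained by the leading components is what supports a claim like the two-dimensional structure reported below. A minimal sketch on invented 8-item rating data (assuming NumPy; this is not the study's dataset or loadings):

```python
# Illustrative only: proportion of variance explained by principal
# components of mock 8-item rating data (invented, not study data).
import numpy as np

rng = np.random.default_rng(0)
# 30 simulated trainees scored on 8 items (scores 1-5).
scores = rng.integers(1, 6, size=(30, 8)).astype(float)

centered = scores - scores.mean(axis=0)
cov = np.cov(centered, rowvar=False)
# Eigenvalues of the covariance matrix, largest first.
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = eigvals / eigvals.sum()
# Cumulative proportion of variance captured by the first two components.
print(round(float(explained[:2].sum()), 3))
```

With real rating data, items loading together on one component are interpreted as measuring a shared dimension, which is how the scale's two dimensions were identified.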
Relationship with other variables: scores from two checklists were correlated with global assessment, confidence, and number of needle attempts.
Consequences of testing: the ability of checklists to identify procedural competence was assessed.
Of the 25 published checklists identified, only six (24%) assessed all seven competency domains. The most frequently under-represented domains were “team working” and “communication with the patient.”
Ratings of video-recorded performances and direct observations were comparable. However, wire handling was not fully captured in 13 of the 18 videos (72%); in 5 of these (38%), this was considered to have affected the rating. Drape handling was not fully captured in 17 videos (94%) and was judged consequential to the rating in 9 (53%).
Two dimensions were identified within the global rating scale: technical ability and procedural safety, together accounting for 84.1% of the overall variance. For both checklists, scores correlated positively with weighted factor scores on technical ability but negatively with those on safety. Both checklists demonstrated lower specificity than sensitivity in the diagnosis of competence, and high checklist scores did not preclude incompetence.
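The sensitivity/specificity pattern reported above can be made concrete with a small worked example. The data below are invented to illustrate the same qualitative finding (high sensitivity, low specificity), not taken from the studies:

```python
# Illustrative only: sensitivity and specificity of a checklist cut-off
# against an expert "competent / not competent" judgement (invented data).
def sens_spec(predicted, actual):
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    sensitivity = tp / (tp + fn)  # competent trainees correctly identified
    specificity = tn / (tn + fp)  # not-yet-competent trainees correctly identified
    return sensitivity, specificity

# predicted: checklist score at or above a cut-off; actual: expert judgement.
predicted = [True, True, True, True, False, True, True, False]
actual    = [True, True, True, False, False, True, False, True]
sens, spec = sens_spec(predicted, actual)
print(round(sens, 2), round(spec, 2))  # prints 0.8 0.33
```

Low specificity means that trainees the checklist passes may still be judged not competent by experts, which is the sense in which a high checklist score "did not preclude incompetence."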
Together, these three studies present validity evidence from multiple sources supporting the use of the simulation-based examination to identify trainees who may benefit from further training.