Tracking the Learning Curve in Microsurgical Skill Acquisition
Jesse C. Selber, MD, MPH, Edward Chang, MD, Jun Liu, PhD, Hiroo Suami, MD, PhD, David Adelman, MD, PhD, Patrick Garvey, MD, Matthew Hanasono, MD, Charles Butler, MD. MD Anderson Cancer Center, Houston, TX, USA.
Purpose: Despite advances in surgical training, microsurgery is still taught through an apprenticeship model. In the context of increasing work-hour restrictions and the emphasis on outcomes in health care reform, it is necessary to standardize surgical training and supply meaningful training endpoints. To this end, we set out to evaluate skill acquisition and apply targeted feedback in our microsurgical training model. First, we validated the Structured Assessment of Microsurgery Skills (SAMS) in the laboratory. We then applied this validated instrument to our microsurgical training program. We hypothesized that subjects would demonstrate measurable overall and category-specific improvement in performance throughout the study period, and that this improvement would be consistent across evaluators.

Methods: The SAMS instrument consists of 12 items in four areas: dexterity, visuo-spatial ability, operative flow, and judgment. To validate SAMS in our environment, rodent femoral artery anastomoses were performed by a fellow cohort at the beginning and end of a year-long training period. These sessions were de-identified and evaluated by five blinded plastic surgeons using SAMS, and inter- and intra-rater reliability was calculated. In parallel, the same fellow cohort was evaluated with SAMS during 118 clinical microsurgical cases by 14 faculty evaluators at intervals at the beginning, middle, and end of the 2011 training year. Cases were distributed evenly across evaluators and anatomic regions. Primary outcomes included change in category-specific and overall scores between evaluation periods and inter-evaluator reliability.

Results: In the laboratory validation study, all skills improved significantly (p<0.05) or marginally (0.05≤p<0.10) in each fellow between the beginning and end of the training year, as measured by SAMS. The overall inter-evaluator reliability of SAMS was acceptable (α=0.72).
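As an aside on the reliability figure above: a coefficient such as Cronbach's α can be computed from a matrix of ratings in which each row is a rated case and each column is an evaluator. The sketch below is illustrative only; the score data are invented, and the abstract does not specify how α=0.72 was calculated.

```python
# Illustrative sketch (not the study's actual analysis): Cronbach's alpha
# across evaluators, computed from a cases-by-evaluators score matrix.

def cronbach_alpha(scores):
    """scores: list of rows (one per case); columns are evaluators."""
    k = len(scores[0])  # number of evaluators
    def var(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    # Variance of each evaluator's column of scores
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    # Variance of the per-case total score across evaluators
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical SAMS overall scores: 4 cases rated by 3 evaluators
scores = [[3, 4, 3], [4, 4, 5], [2, 3, 2], [5, 5, 4]]
print(round(cronbach_alpha(scores), 2))  # → 0.9
```

Values of α near 0.7 or above, like the 0.72 reported here, are conventionally treated as acceptable agreement among raters.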
In the clinical arm of the study, all category-specific skill areas and overall performance improved significantly in each fellow from the beginning to the middle of the year. Between the middle and end of the year, most skill areas improved, but only a few significantly: overall visuo-spatial ability (diff=0.28, p=0.01), knot technique (0.34, p=0.03), and suture placement (0.37, p=0.05). Scores for speed and patency, as well as overall performance and indicative skill scores, showed slight, non-significant decreases (0.07, 0.06, 0.004, and 0.02, respectively). Operative errors decreased significantly between the first and subsequent periods (81 vs. 36; p<0.05).

Conclusions: SAMS has content and construct validity for assessing microsurgical skill. Its modular structure allows trainers to provide customized, targeted feedback with acceptable inter-evaluator reliability. In the first half of the training year, the microsurgical fellows' skills increased significantly but plateaued thereafter, corresponding to a learning curve of approximately 50 microsurgical cases for trainees at the fellow level. The implications for training endpoints, credentialing, and recertification of skill are discussed. The use of SAMS is anticipated to help standardize microsurgical training.