Please use this identifier to cite or link to this item: http://hdl.handle.net/11434/1441
Title: Evaluation of a tool to assess non-technical skills in ICU.
Epworth Authors: Kelly, D.
McKenzie, Dean
Hanlon, Gabrielle
Mackley, L.
Barrett, Jonathan
Keywords: Non-Technical Skills
NTS
Patient Outcomes
Critical Incidents
NTS Assessment Rubric
Reliability
Intensive Care Registrars
Medical Students
Medical Emergencies
Critical Care Clinical Institute, Epworth HealthCare, Victoria, Australia
Issue Date: Jun-2018
Conference Name: Epworth HealthCare Research Week 2018
Conference Location: Epworth Research Institute, Victoria, Australia
Abstract:
Background: Deficiencies in doctors' non-technical skills (NTS) contribute to critical incidents and poor patient outcomes, but it is unclear how to assess these skills reliably. We developed a standardised NTS assessment rubric and evaluated its reliability.
Methods: Prospective observational study to evaluate the inter-rater reliability of an NTS assessment rubric. Intensive Care Registrars and medical students participated in high-fidelity, immersive, in-situ simulated scenarios of medical emergencies. Following a short period of calibration, two Intensive Care Consultants independently viewed the videoed scenarios and scored each scenario leader using the assessment rubric. The primary outcome was inter-rater reliability of the overall score. Secondary outcomes included inter-rater reliability of the 5 domains and the 14 individual questions.
Results: 40 scenarios were videoed, including 5 used for consultant calibration. The mean (SD) score was 12.7 (4.0) for rater A vs 13.0 (4.8) for rater B; Lin's concordance correlation coefficient was 0.74 (95% CI 0.60 to 0.89). Inter-rater agreement for the domains and individual questions was assessed using Cohen's kappa. Mean kappas for the domains ranged from 0.36 (fair) to 0.64 (substantial), and kappas for individual questions ranged from 0.15 (slight) to 0.75 (substantial).
Conclusion: The NTS assessment rubric demonstrated good agreement between raters on the overall score. However, agreement on individual questions varied from slight to substantial. Overall, the tool shows promise, but further refinement of individual questions is required.
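Note: The following is an illustrative Python sketch of how the two agreement statistics named in the abstract (Lin's concordance correlation coefficient for the overall score, Cohen's kappa for individual questions) could be computed. The rater scores shown are hypothetical examples, not the study data.

    # Illustrative sketch only: hypothetical rater scores, not the study data.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    def lins_ccc(x, y):
        """Lin's concordance correlation coefficient for two raters' scores."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()            # population (1/n) variances, per Lin (1989)
        cov = ((x - mx) * (y - my)).mean()   # population covariance
        return 2 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical overall scores given by rater A and rater B to the same scenario leaders
    rater_a = [10, 14, 12, 16, 9, 13, 15, 11]
    rater_b = [11, 15, 12, 17, 8, 14, 16, 10]
    print("Lin's CCC:", round(lins_ccc(rater_a, rater_b), 2))

    # Hypothetical ordinal ratings for a single rubric question
    q_a = [0, 1, 2, 1, 2, 0, 1, 2]
    q_b = [0, 1, 2, 2, 2, 0, 0, 2]
    print("Cohen's kappa:", round(cohen_kappa_score(q_a, q_b), 2))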
URI: http://hdl.handle.net/11434/1441
Type: Conference Poster
Type of Clinical Study or Trial: Prospective Observational Study
Appears in Collections: Critical Care
Research Week
