Please use this identifier to cite or link to this item:
http://hdl.handle.net/11434/1441
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kelly, D. | - |
dc.contributor.author | McKenzie, Dean | - |
dc.contributor.author | Hanlon, Gabrielle | - |
dc.contributor.author | Mackley, L. | - |
dc.contributor.author | Barrett, Jonathan | - |
dc.date.accessioned | 2018-07-18T03:19:57Z | - |
dc.date.available | 2018-07-18T03:19:57Z | - |
dc.date.issued | 2018-06 | - |
dc.identifier.uri | http://hdl.handle.net/11434/1441 | - |
dc.description.abstract | **Background:** Deficiencies in doctors’ non-technical skills (NTS) contribute to critical incidents and poor patient outcomes, but it is unclear how to assess them reliably. We developed a standardised NTS assessment rubric and evaluated its reliability. **Methods:** We conducted a prospective observational study to evaluate the inter-rater reliability of an NTS assessment rubric. Intensive Care Registrars and medical students participated in high-fidelity, immersive, in-situ simulated scenarios of medical emergencies. Following a short period of calibration, two Intensive Care Consultants independently viewed the videoed scenarios and scored each scenario leader using the assessment rubric. The primary outcome was the inter-rater reliability of the overall score. Secondary outcomes included the inter-rater reliability of the 5 domains and 14 individual questions. **Results:** 40 scenarios were videoed, including 5 for consultant calibration. The mean (SD) score was 12.7 (4.0) for rater A vs 13.0 (4.8) for rater B; Lin’s concordance correlation coefficient was 0.74 (95% CI 0.60 to 0.89). Inter-rater agreement for the domains and individual questions was assessed using Cohen’s kappa. Mean kappas for the domains ranged from 0.36 (fair) to 0.64 (substantial), and kappas for individual questions ranged from 0.15 (slight) to 0.75 (substantial). **Conclusion:** The NTS assessment rubric demonstrated good concordance between raters on the overall score. However, agreement for individual questions varied from slight to substantial. Overall, the tool shows promise, but further refinement of individual questions is required. | en_US |
dc.subject | Non-Technical Skills | en_US |
dc.subject | NTS | en_US |
dc.subject | Patient Outcomes | en_US |
dc.subject | Critical Incidents | en_US |
dc.subject | NTS Assessment Rubric | en_US |
dc.subject | Reliability | en_US |
dc.subject | Intensive Care Registrars | en_US |
dc.subject | Medical Students | en_US |
dc.subject | Medical Emergencies | en_US |
dc.subject | Critical Care Clinical Institute, Epworth HealthCare, Victoria, Australia | en_US |
dc.title | Evaluation of a tool to assess non-technical skills in ICU. | en_US |
dc.type | Conference Poster | en_US |
dc.type.studyortrial | Prospective Observational Study | en_US |
dc.description.conferencename | Epworth HealthCare Research Week 2018 | en_US |
dc.description.conferencelocation | Epworth Research Institute, Victoria, Australia | en_US |
dc.type.contenttype | Text | en_US |
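
The two reliability statistics reported in the abstract, Lin's concordance correlation coefficient for the overall scores and Cohen's kappa for individual items, follow directly from their standard definitions. The sketch below shows one way to compute both in Python; it is a minimal illustration only, and the `rater_a`/`rater_b` and `item_a`/`item_b` arrays are invented example data, not the study's results.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))  # population covariance
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)  # observed agreement
    # Chance agreement from each rater's marginal category frequencies.
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in np.union1d(a, b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical overall scores from two raters (illustration only).
rater_a = [12, 15, 9, 14, 11, 17, 13, 10]
rater_b = [13, 14, 10, 15, 12, 16, 12, 11]
print(f"Lin's CCC: {lins_ccc(rater_a, rater_b):.2f}")

# Hypothetical categorical ratings on a single rubric question.
item_a = [1, 2, 2, 3, 1, 2, 3, 3]
item_b = [1, 2, 3, 3, 1, 2, 2, 3]
print(f"Cohen's kappa: {cohens_kappa(item_a, item_b):.2f}")
```

The qualitative labels in the abstract (slight, fair, substantial) correspond to the commonly used Landis and Koch benchmarks for interpreting kappa values.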
Appears in Collections: Critical Care Research Week
Files in This Item:
There are no files associated with this item.
Items in Epworth are protected by copyright, with all rights reserved, unless otherwise indicated.