BACKGROUND: Timely documentation of care preferences is an endorsed quality indicator for seriously ill patients admitted to intensive care units. Clinicians document their conversations about these preferences as unstructured free text in clinical notes from electronic health records.
AIM: To apply deep learning algorithms for automated identification of serious illness conversations documented in physician notes during intensive care unit admissions.
DESIGN: Using a retrospective dataset of physician notes, clinicians annotated all text documenting patient care preferences (goals of care or code status limitations), communication with family, and full code status. The clinician-coded text was used to train algorithms to identify such documentation and to validate them. The validated algorithms were then deployed to assess the percentage of intensive care unit admissions of patients aged >=75 that had care preferences documented within the first 48 h.
SETTING/PARTICIPANTS: Patients admitted to one of five intensive care units.
RESULTS: Algorithm performance was calculated by comparing machine-identified documentation to clinician-coded documentation. For detecting care preference documentation at the note level, the algorithm achieved an F1-score of 0.92 (95% confidence interval, 0.89 to 0.95), sensitivity of 93.5% (95% confidence interval, 90.0% to 98.0%), and specificity of 91.0% (95% confidence interval, 86.4% to 95.3%). Applied to 1350 admissions of patients aged >=75, the algorithm found that 64.7% of intensive care unit admissions had care preferences documented within the first 48 h.
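The metrics reported above (F1-score, sensitivity, specificity) all derive from a confusion matrix of machine-identified versus clinician-coded labels. A minimal sketch of the computation, using hypothetical counts rather than the study's data:

```python
# Illustrative only -- not the study's code. Shows how note-level F1-score,
# sensitivity, and specificity follow from confusion-matrix counts of
# machine-identified vs. clinician-coded documentation.

def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Return (f1, sensitivity, specificity) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)      # recall / true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return f1, sensitivity, specificity

# Hypothetical counts chosen only to demonstrate the arithmetic.
f1, sens, spec = classification_metrics(tp=90, fp=8, tn=85, fn=6)
```

Confidence intervals for such proportions are typically obtained by bootstrap resampling of the test set, though the abstract does not specify the method used.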
CONCLUSION: Deep learning algorithms identified patient care preference documentation with sensitivity and specificity approaching those of clinicians, in a small fraction of the time. Future research should determine the generalizability of these methods across multiple healthcare systems.