Diagnosing our documentation: a novel electronic peer feedback program to improve the quality of hospitalists' notes at a large tertiary care medical center [abstract]
Overview
abstract
  • Background: The advent of the Electronic Health Record (EHR) has changed the face of medical documentation. Illegibility and absence of data have all but disappeared, and EHRs can foster thoughtful assessments by providing a platform to craft differential diagnoses. However, EHRs have also introduced features like “copy and paste” and “blow in” templates that can compromise documentation quality. Given the central role of EHRs in modern health care, there is a growing movement to define the characteristics of quality electronic documentation and encourage users to adopt them. Few documentation quality improvement projects have been published to date. One group validated a documentation evaluation tool, the Physician Documentation Quality Instrument 9 (PDQI-9); published studies using this tool have been limited to residents and outpatient providers.

    Purpose: The aim of our project was to improve the quality of inpatient progress notes written by hospitalists at a large tertiary care medical center through a novel structured electronic peer-evaluation system.

    Description: In phase one of the project, thirty-nine hospitalists were anonymously assigned to evaluate one note from each of three different colleagues. Participants used the PDQI-9 tool to produce a numerical score in each of the nine categories and were also encouraged to provide free-text commentary (Figure 1). In return, participants received feedback on their own progress notes. In the second phase, which is currently underway, the evaluation process is repeated. After completion of the project, participants will complete a post-project survey to assess perceptions of their own note quality and of the project as a whole. The mean overall score for participants in the first phase was 40.38 (scale 0-45), with scores ranging from 24 to 45. Evaluations from the second phase will be compared to scores from phase one. We expect that, after receiving and incorporating feedback, participants will score higher. Free-text responses from the first phase commonly addressed diagnostic reasoning, note clarity and readability, avoidance of redundancy, and recommendations for inclusion or exclusion of items related to the primary problem.

    Conclusions: Existing mechanisms for providers to give and receive feedback about their documentation are limited. Establishing programs such as our novel feedback system is critical to improving note quality, which in turn is expected to improve quality of care and diagnostic performance. We anticipate that implementation of our structured peer-evaluation program will help foster a culture of high-quality documentation among a large hospitalist group.
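The scoring arithmetic described above can be sketched briefly: each evaluation assigns a rating in nine PDQI-9 categories, those ratings sum to an overall note score, and participant scores are averaged within each phase. The sketch below is a minimal illustration of that arithmetic, not the project's actual evaluation software; the category labels follow the published PDQI-9 items, and the sample ratings are invented.

```python
# Minimal sketch of PDQI-9 score aggregation as described in the abstract.
# Category labels follow the published PDQI-9 items; sample data are invented.

PDQI9_CATEGORIES = [
    "up-to-date", "accurate", "thorough", "useful", "organized",
    "comprehensible", "succinct", "synthesized", "internally consistent",
]

def overall_score(ratings):
    """Sum the nine per-category ratings into one overall note score."""
    if len(ratings) != len(PDQI9_CATEGORIES):
        raise ValueError("expected one rating per PDQI-9 category")
    return sum(ratings)

def phase_mean(scores):
    """Mean overall score across all evaluated notes in a phase."""
    return sum(scores) / len(scores)

# Invented example: three peer evaluations from one phase.
evals = [
    [5, 5, 4, 5, 4, 5, 3, 4, 5],
    [4, 4, 4, 4, 4, 4, 4, 4, 4],
    [5, 5, 5, 5, 5, 5, 5, 5, 5],
]
scores = [overall_score(e) for e in evals]
print(scores)                        # [40, 36, 45]
print(round(phase_mean(scores), 2))  # 40.33
```

Comparing phase-two evaluations to phase one, as planned in the project, would amount to comparing `phase_mean` across the two sets of scores.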

publication date
  • 2017
published in
  • Research
keywords
  • Education, Medical
  • Hospitalization
  • Medical Records Systems, Computerized
  • Quality of Health Care
Additional Document Info
volume
  • 12
issue
  • Suppl 2