Association between evidence-based training and clinician proficiency in electronic health record use (2021)




J Am Med Inform Assoc. 2021 Apr; 28(4): 824–831.

Published online 2021 Feb 12. doi:10.1093/jamia/ocaa333

PMCID: PMC7973447

PMID: 33575787

Laura Hollister-Meadows,1 Rachel L Richesson,1 Jennie De Gagne,1 and Neil Rawlins2


Abstract

Objectives

The purpose of the study was to determine whether an association exists between evidence-based provider training and clinician proficiency in electronic health record (EHR) use and, if so, which EHR use metrics and vendor-defined indices exhibit that association.

Materials and Methods

We studied ambulatory clinicians’ EHR use data published in the Epic Systems Signal report to assess proficiency between training participants (n = 133) and nonparticipants (n = 14). Data were collected in May 2019 and November 2019 on nonsurgeon clinicians from 6 primary care, 7 urgent care, and 27 specialty care clinics. EHR use training occurred from August 5 to August 15, 2019, prior to an EHR upgrade and organizational instance alignment. Analytics performed were descriptive statistics, paired t-tests, multivariate correlations, and hierarchical multiple regression.

Results

For number of appointments per 30-day reporting period, trained clinicians sustained an average increase of 16 appointments (P < .05), whereas nontrained clinicians incurred a decrease of 8 appointments. Only the trained clinician group achieved postevent improvement in the vendor-defined Proficiency score, with an effect size characterized as moderate to large (Cohen’s d = 0.625).

Discussion

Controversy exists over the return on investment from formal EHR training for clinician users. Previously published literature has mostly focused on qualitative indicators of EHR training success. The findings of our EHR use training study identified EHR use metrics and vendor-defined indices with the capacity for translation into productivity and generated-revenue measurements.

Keywords: EHR use, user proficiency, clinician training, electronic health records, computer literacy, computer user training, attitude to computers

INTRODUCTION

Background and significance

In response to the need for improved quality, increased efficiency, and reduced healthcare costs, the United States government enacted the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act, requiring the adoption of electronic health records (EHRs) by clinicians for enhanced reimbursement from the Medicare and Medicaid Electronic Health Record Incentive Program.1 Despite hopes for improved documentation, organization, and coordination of healthcare services, many clinicians’ struggles with EHRs have left healthcare systems falling short of the promised quality and efficiency outcomes. Many speculate on the cause of these shortcomings, and a common assertion highlights the strong influence of training on clinicians’ views toward EHRs.2 According to Hamade et al (2019), EHR adoption is the introduction of the system into healthcare practice, whereas EHR use is the secondary step typically following adoption. True EHR use is defined by daily utilization and understanding of EHR features to support the performance of tasks and activities related to clinical practice.3

To progress from adoption to use, clinicians must understand how to personalize and optimize the EHR.4 Longhurst et al (2019) found that physicians who “strongly agreed” their EHR facilitated high-quality healthcare also reported better training and greater training opportunities, including training aimed at EHR personalization. Other elements cited as critical components of EHR documentation and navigation include integrating the EHR with workflow, creating personalized order and note settings, and familiarizing providers with embedded clinical decision support tools.5

Although qualitative data are critical to understanding clinicians’ satisfaction and engagement with EHR systems, efforts to relate this feedback to patient outcomes and productivity can be difficult and imprecise. To date, many studies have utilized clinician narrative of EHR proficiency and EHR experiences to measure and rate training success.6 Previous attempts to objectively measure and assess the impact of EHR training have included measures of clinician activity, such as use of order sets and best practice alert acceptance rates.5,7 However, computerized physician order entry represents only a small percentage of ambulatory clinicians’ daily EHR activity. To determine the association between training and clinicians’ work productivity or efficiency, it is imperative to first identify and measure EHR use metrics most indicative of clinicians’ EHR use competency or proficiency.

OBJECTIVE

We conducted a study to detect and characterize the association between evidence-based training and clinician EHR performance by examining EHR use metrics and noting which were most indicative of proficiency. We leveraged an organization-wide evidence-based EHR training program aimed at preparing ambulatory clinicians for an EHR upgrade and compared metrics of EHR use between training participants and nonparticipants. The training curriculum and process incorporated successful strategies previously published in the literature,4–6,8,9 including personalization of user EHR settings, optimization and personalization of efficiency tools, and familiarizing users with changes in the upgrade and their associated workflows. We selected 9 metrics and 2 Epic Systems-constructed indices of EHR use that characterize clinicians’ overall use and compared them across clinician types and exposure to training.

MATERIALS AND METHODS

Setting and sample

The setting for this study is a community-based health system located in southeastern Washington state. It is composed of a 270-bed hospital facility and 7 primary care, 6 urgent care, and 27 specialty care clinics. Ambulatory clinic payer mix data from 2017 to 2018 show 63% commercial, 34% Medicare or Medicaid, and 3% self-pay.

This not-for-profit organization recently affiliated with a larger healthcare system rendering services in Alaska, California, Montana, New Mexico, Oregon, Texas, and Washington. As such, the EHR upgrade not only included improved Epic System functionalities, it also delivered an instance alignment across organizations. Instance alignment had a compounding effect because it required merging patient medical records from 2 different Epic EHR versions. Therefore, in addition to training, digital data migration solutions were implemented to ease users’ burden during the EHR transition.

A total of 147 clinicians were included in this study, of which 133 were training participants. Studied clinicians comprised physician assistants, nurse practitioners, and physicians. Clinicians were included regardless of their full-time employment (FTE) status but must have been employed by the health system from May 2019 through November 2019. Physician surgeons were excluded from the analysis, since a majority of their workload occurs in the hospital setting and their ambulatory services and workload are not comparable to those of exclusively ambulatory-based clinicians.

Clinician training workshops were grouped according to job tasks: ambulatory vs acute and primary care vs specialty care. Each session lasted 3 hours and was delivered in a designated conference room on the hospital campus. Sessions were led by an appointed physician informaticist, assisted by 1 clinical informaticist and 1 provider “super user,” defined as an individual with an above-average Epic Proficiency score who has participated in advanced training.

The training curriculum assumed participant familiarity with Epic entry-level content and prioritized new EHR functionalities and knowledge gaps as revealed by the regional functional readiness assessment. An outline of the process is depicted in Figure 1.


Figure 1.

Components of training curriculum design.

Data

Data were collected through the Signal web portal supported and published by Epic Systems (Verona, WI). Epic Systems created the Signal tool to provide healthcare organizations with proficiency and efficiency analytics on EHR users in ambulatory healthcare services. Measurement components of efficiency include examining clinicians’ time in the EHR in the context of clinician workload. Epic Systems implemented User Action Log (UAL) Lite to determine where providers were actively spending time in the EHR. The UAL Lite approach enabled Epic to track clinicians’ keystrokes and mouse clicks as activity measurements and removes idle time as a confounding factor within efficiency analytics. EHR data were captured locally by Hyperspace and sent to Epic Connect, a centralized data repository. The data were processed and aggregated by Epic System analysts, then published via the Signal web portal.

Data were collected monthly via the Signal web portal from April 28, 2019 through November 30, 2019. The go-live date for the EHR instance alignment was August 17, 2019. Training workshops were conducted August 5 through August 15, 2019. In a region-wide email broadcast, regional leaders prohibited paid time off during August 2019; as a result, many clinicians took summer vacation during June and July 2019. For this reason, May 2019 was used for baseline measurements rather than an average of May through July. August and September 2019 data were skewed by the instance alignment and by reduced schedule loads implemented to ease EHR transition burdens. We therefore evaluated training impact by symmetrically comparing data 3 months before and 3 months after the EHR instance alignment event. The use of identifying clinician data was limited to verifying matched pre- and postevent measurements; all analytics on EHR use metrics were performed on deidentified data. For this reason, informed consent was not obtained. Furthermore, this quality improvement (QI) project earned IRB exemption, as it was formally evaluated using a QI checklist and determined not to be human subjects research.
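The symmetric pre/post design amounts to selecting the two report months and keeping only clinicians with matched measurements in both. A minimal pandas sketch of that matching step, using hypothetical column names (the real Signal extract carries many more fields):

```python
import pandas as pd

# Hypothetical monthly Signal extract with one row per clinician per month.
signal = pd.DataFrame({
    "clinician_id": [1, 1, 2, 2, 3],
    "report_month": ["2019-05", "2019-11", "2019-05", "2019-11", "2019-05"],
    "appointments": [210, 230, 180, 175, 95],
})

pre = signal[signal.report_month == "2019-05"].set_index("clinician_id")
post = signal[signal.report_month == "2019-11"].set_index("clinician_id")

# Inner join keeps only clinicians with matched pre- and post-event rows
# (clinician 3 has no November measurement and is dropped).
matched = pre.join(post, lsuffix="_pre", rsuffix="_post", how="inner")
print(len(matched))  # 2
```

Once matched, the identifying index can be discarded, consistent with the deidentified analysis described above.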

Measures

Clinician employment demographic measures were derived from the Signal report and included area of practice, ambulatory clinic FTE status, and clinician type. Training participation data were reported based upon an attendance record kept by the training operations team (Table 1).

Table 1.

Summary statistics

Clinician characteristics (n = 147): n (%)

Area of practice
  Primary care: 50 (34.0%)
  Specialty care: 97 (64.0%)
Ambulatory clinic FTE
  < 0.5 FTE: 26 (17.7%)
  ≥ 0.5 and < 0.70 FTE: 25 (17.0%)
  ≥ 0.70 FTE: 96 (65.3%)
Clinician type
  Physician: 76 (52.1%)
  Physician assistant: 16 (11.0%)
  Nurse practitioner/certified nurse midwife: 55 (36.9%)
Training participation by clinician type
  Physician: 66 (86.8%)
  Physician assistant: 13 (81.3%)
  Nurse practitioner/certified nurse midwife: 54 (98.2%)


Abbreviation: FTE, full-time employment.

We selected 9 metrics and 2 Epic Systems-constructed indices to measure clinician EHR use, proficiency, and efficiency. The metrics were chosen from those already included in Epic Signal reports. Given that successful clinical operations are most often expressed in measures of time or productivity, it was important that the EHR use metrics studied be determinants of those outcomes.10 The first, time in system per day, is calculated as the number of minutes logged into the system per 30-day reporting period divided by the number of days a user logged into the system during that period. Its numerator, total time in system per 30-day reporting period, was retained as a metric in its own right. The number of appointments per 30-day reporting period is as its name suggests, and time in system per appointment reflects the number of minutes logged into the system per appointment. These 4 metrics were selected to describe overall clinician EHR use and to frame EHR use in the context of appointment volumes. Time spent in system outside of scheduled hours depicts the number of minutes a user logged into the system beyond scheduled hours; scheduled hours include a 30-minute buffer before the start of the first appointment and after the end of the last appointment. Time spent writing notes per appointment and time spent in In-Basket per appointment represent the time clinicians spend on the most common daily activities, such as visit documentation, prescription refills, messaging care team members or colleagues about patient treatment plans, reviewing and providing direction on lab or imaging results, and responding to patient inquiries. To identify medical assistant (MA) or registered nurse (RN) contributions to pended orders, we measured the percent of orders signed by clinicians with team contributions. Percent of office visits closed within same day conveys efficiency as it relates to the clinician’s ability to deliver services, document, and bill all within 24 hours.

The 2 indices, PEP score and Proficiency score, were constructed and are maintained by Epic Systems. The PEP score delineates user efficiency by comparing a clinician’s time in the system to the expected time in the system based on workload; Epic’s workload factor is a composite of all Epic users of similar medical specialty and professional designation. The Proficiency score assesses the clinician’s use of efficiency tools built within Epic’s EHR. It is an assessment of the volume of activities completed by a clinician via speed buttons, preference lists, quick action tools, and chart search. Clinicians with higher Proficiency scores demonstrate greater familiarity and comfort with Epic-designed EHR efficiency tools.11
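The time-based metrics above are simple ratios. A minimal sketch of how they could be computed from aggregate counts (the function and variable names are hypothetical; Epic's actual Signal pipeline is proprietary):

```python
def time_in_system_per_day(total_minutes_30d, days_logged_in):
    """Minutes logged in over the 30-day period divided by days with any login."""
    return total_minutes_30d / days_logged_in

def time_in_system_per_appointment(total_minutes_30d, n_appointments):
    """Minutes logged in over the 30-day period divided by appointment count."""
    return total_minutes_30d / n_appointments

# Example: a clinician with 3,100 active minutes over 20 login days and
# 219 appointments (figures close to the trained group's baseline means).
print(round(time_in_system_per_day(3100, 20), 1))   # 155.0
print(round(time_in_system_per_appointment(3100, 219), 1))
```

The numerator of the first ratio is itself the total time in system per 30-day reporting period metric.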

Analytic approach

For this retrospective cohort study, we utilized descriptive and inferential statistical analyses in IBM SPSS Statistics for Macintosh, Version 25.0 to explore the distribution of clinicians among areas of practice, percentage of FTE status, professional designation, and training participation rates.

Paired t-tests were performed for each of the 9 metrics and 2 indices. Measurements were assessed pre- and postinstance alignment event and segregated into participant and nonparticipant cohorts. From this we were able to deduce which metrics or indices yielded significant change and determine the corresponding Cohen’s d effect size.
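The paired comparison can be reproduced with standard tools. The sketch below uses simulated pre/post data (the real measurements live in Signal); the Cohen's d values in Table 2 appear consistent with the mean change divided by the average of the pre and post SDs, though the paper does not state its exact formula, so that convention is an assumption here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated pre/post measurements for one metric across 133 trained clinicians.
pre = rng.normal(4.3, 2.0, size=133)
post = pre + rng.normal(1.2, 1.5, size=133)

t, p = stats.ttest_rel(pre, post)  # paired t-test on matched measurements

# Cohen's d as mean change over the average of the two SDs (assumed convention).
d = (post.mean() - pre.mean()) / ((pre.std(ddof=1) + post.std(ddof=1)) / 2)
print(round(p, 6), round(d, 3))
```

Sanity check against Table 2: for the Proficiency score, (5.53 − 4.29) / ((1.99 + 1.98) / 2) ≈ 0.625, matching the reported effect size.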

After determining which metrics and indices were most sensitive to the instance alignment, multivariate Pearson correlations were computed among each metric, the 2 indices, and FTE status. The data were grouped according to training participation to explore the relationships of use, proficiency, and efficiency in the context of training and FTE status.
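Each cell of the correlation matrix is a pairwise Pearson r with its significance. A sketch on synthetic data that mimics the strong positive FTE-appointments relationship reported in the Results (all numbers here are illustrative, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 133
fte = rng.uniform(0.4, 1.0, n)
# Appointment volume scales with FTE status plus noise, mimicking the
# reported strong positive correlation between the two.
appts = 250 * fte + rng.normal(0, 40, n)

r, p = stats.pearsonr(fte, appts)
print(round(r, 3), p < 0.01)
```

Repeating `pearsonr` over every pair of columns (or calling `numpy.corrcoef` on the stacked matrix) yields the full multivariate correlation table shown in Figure 2.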

Hierarchical multiple regression was conducted across 3 models: all clinicians, clinicians who did not complete training, and clinicians who completed training. These analyses aimed to determine the extent to which the variance within each use metric was directly, indirectly, or not impacted by training participation. This was achieved by assessing the impact of independent variables on EHR use metrics in a step-wise approach. In the all-clinicians model, FTE status and number of appointments were evaluated and controlled for before assessing the impact of training. In the remaining 2 models, impact on EHR use metric variance was assessed first for FTE status and second for number of appointments.

RESULTS

Sample characteristics

Summary statistics for the 147 clinicians involved in this study are published in Table 1. The majority of clinicians practiced in specialty care (64%) and worked at 0.70 FTE or greater (65.3%). Among clinicians, physicians were the largest subgroup (52.1%), but nurse practitioners had the greatest rate of participation in training (98.2%).

Comparison of mean EHR metrics between participants vs nonparticipants of training

Statistically significant changes in mean EHR metric and index measurements were mostly noted in the training participant group (Table 2). Of greatest interest are the changes in number of appointments during the 30-day reporting period. Clinicians who attended the training had an increase of 16 appointments in the average number of appointments per 30 days (P < .05). In contrast, the 14 clinicians who did not attend the training incurred a decrease of 8 appointments in the average number of appointments per 30 days. Furthermore, despite the decrease in average appointments per 30 days, nontrained clinicians spent an additional 162 minutes in the system outside of scheduled hours each month, whereas trained clinicians incurred an average increase of only 118 minutes. Another statistically significant improvement was observed in the Proficiency score index; this was noted only among training participants, and its calculated Cohen’s d effect size was characterized as moderate to large (0.625). The remaining statistically significant EHR use metrics, total time spent in the system during the 30-day reporting period and time spent in system per appointment, moved in opposing directions: total time increased while time per appointment decreased.

Table 2.

Paired t-test comparing EHR use metrics and vendor-defined indices between training and nontraining clinician groups

Values are mean (SD) pre- and postevent, paired t-test P value, and Cohen’s d effect size, by training attendance.

Time in system per day
  Trained:     pre 155.78 (66.54), post 160.56 (69.24); P = .140, d = 0.07
  Nontrained:  pre 128.62 (58.31), post 128.72 (47.08); P = .989, d = 0.002
Total time in system per 30-day period (numerator of time in system per day)
  Trained:     pre 3099.78 (1501.27), post 3602.44 (1862.35); P < .0001, d = 0.297
  Nontrained:  pre 2535.98 (964.69), post 2884.13 (1015.74); P = .127, d = 0.351
Total number of appointments (per 30 days)
  Trained:     pre 218.92 (125.44), post 234.46 (134.27); P = .016, d = 0.12
  Nontrained:  pre 259.93 (131.38), post 251.71 (122.26); P = .497, d = −0.065
Time in system per appointment
  Trained:     pre 18.52 (12.82), post 9.87 (13.31); P < .0001, d = −0.662
  Nontrained:  pre 12.96 (10.28), post 4.08 (6.30); P = .003, d = −1.042
Time spent in system outside of scheduled hours
  Trained:     pre 472.16 (541.77), post 590.24 (649.37); P < .0001, d = 0.197
  Nontrained:  pre 360.90 (354.64), post 523.77 (391.63); P = .040, d = 0.436
Time spent writing notes per appointment
  Trained:     pre 6.45 (5.86), post 7.25 (5.53); P = .001, d = 0.14
  Nontrained:  pre 3.80 (4.63), post 4.30 (3.00); P = .565, d = 0.128
Time spent in In-Basket per appointment
  Trained:     pre 1.79 (1.71), post 1.87 (1.48); P = .293, d = 0.05
  Nontrained:  pre 1.00 (1.00), post 1.07 (0.85); P = .407, d = 0.075
Percent of orders signed by clinicians with team contributions
  Trained:     pre 24.5% (25.6%), post 24.3% (25.6%); P = .808, d = −0.008
  Nontrained:  pre 41.6% (31.6%), post 41.9% (30.8%); P = .934, d = 0.01
Percent of office visits closed within same day
  Trained:     pre 78.7% (26.5%), post 76.4% (27.3%); P = .136, d = −0.085
  Nontrained:  pre 77.4% (28.0%), post 76.6% (28.3%); P = .819, d = −0.028
PEP score
  Trained:     pre 4.74 (1.81), post 4.71 (2.11); P = .800, d = −0.015
  Nontrained:  pre 5.83 (1.19), post 5.79 (1.59); P = .906, d = −0.028
Proficiency score
  Trained:     pre 4.29 (1.99), post 5.53 (1.98); P < .0001, d = 0.625
  Nontrained:  pre 3.09 (1.89), post 4.29 (2.12); P = .064, d = 0.598


Correlations of significant EHR use metrics

In our paired t-test analytics, we learned the measurements of most significant change among training participants were number of appointments and the Proficiency score. As such, we employed correlation studies (Figure 2) to better understand the dynamics of number of appointments and Proficiency score with the other EHR use metrics and FTE status. November FTE status demonstrated the largest positive correlation with number of appointments (r = 0.626, P < .01), signaling that an increase in hours worked yields a greater volume of appointments; however, its correlation with Proficiency score was small (r = 0.247, P < .01). Two other EHR use metrics, time in system per day (r = 0.388, P < .01) and total time in system during the 30-day reporting period (r = 0.430, P < .01), also exhibited positive correlations with number of appointments. Medium negative correlations with number of appointments included time in notes per appointment (r = −0.404, P < .01) and time in In-Basket per appointment (r = −0.434, P < .01). Unexpectedly, a medium positive correlation was observed between Proficiency score and time in system per day (r = 0.322, P < .01) and per 30-day reporting period (r = 0.323, P < .01). Lastly, time outside of scheduled hours had no correlation with number of appointments and a small positive correlation (r = 0.179, P < .05) with Proficiency score.


Figure 2.

Multivariate correlations of EHR use metrics, indices, and FTE status. **P < .01, *P < .05, n/s = Not significant.

Hierarchical multiple regression of FTE, number of appointments, and training participation

To further understand factors influencing increased time spent in the EHR, especially among training participants, we explored the impacts of the confounding factors FTE status and appointment volume, along with training participation, through a hierarchical multiple regression (Table 3). Data analysis indicated training participation could explain only 1.8% of the variance noted among all clinicians in total time spent in the EHR in a 30-day reporting period, at a weak β weight of 0.135 (P < .05). Moreover, training participation displayed no influence on time in system per appointment or time in system outside of scheduled hours. Since analytics revealed little to no influence by training on these EHR use metrics, we further explored the variance explained by FTE status and number of appointments within each training participant group. For total time spent in the system during a 30-day reporting period, 49.2% of the variance noted among trained clinicians was explained by FTE status (β = 0.701, P < .001), whereas for nonparticipants, variance in this metric could not be explained by either FTE or appointment volumes. For nontrained clinicians, 32.7% of the variance observed in time spent in system per appointment was explained by FTE status (β = −0.572, P < .05). For training participants, time spent in system per appointment was attributed to a negatively weighted number of appointments (β = −0.483, P < .0001). Lastly, time spent in system outside of scheduled hours could only be explained for trained clinicians, with a majority of the 25.4% variance attributed to FTE status.

Table 3.

Hierarchical multiple regression of full-time employment (FTE), number of appointments, and training participation

Total time in system per 30-day period
  Model 1: All clinicians
    Step 1 (FTE status, # of appts): R² = 0.459 (adj. 0.452); F change = 61.10, P < .0001
      β: FTE = 0.683 (P < .0001), # of appts = 0.003 (P = .968)
    Step 2 (+ training participation): R² = 0.477 (adj. 0.466), ΔR² = 0.018; F change = 4.95, P = .028
      β: training participation = 0.135 (P = .028)
  Model 2: Nontrained clinicians
    Step 1 (FTE): R² = 0.044 (adj. −0.035); F change = 0.56, P = .470; β FTE = 0.210
    Step 2 (+ # of appts): R² = 0.116 (adj. −0.045), ΔR² = 0.071; F change = 0.89, P = .366; β # of appts = 0.323
  Model 3: Trained clinicians
    Step 1 (FTE): R² = 0.492 (adj. 0.488); F change = 126.67, P < .0001; β FTE = 0.701
    Step 2 (+ # of appts): R² = 0.492 (adj. 0.484), ΔR² = 0.000; F change = 0.03, P = .855; β # of appts = 0.015

Total time in system per appointment
  Model 1: All clinicians
    Step 1 (FTE status, # of appts): R² = 0.130 (adj. 0.118); F change = 10.76, P < .0001
      β: FTE = 0.283 (P = .005), # of appts = 0.455 (P < .0001)
    Step 2 (+ training participation): R² = 0.145 (adj. 0.127), ΔR² = 0.015; F change = 2.49, P = .117
      β: training participation = 0.122 (P = .117)
  Model 2: Nontrained clinicians
    Step 1 (FTE): R² = 0.327 (adj. 0.271); F change = 5.82, P = .033; β FTE = −0.572
    Step 2 (+ # of appts): R² = 0.327 (adj. 0.204), ΔR² = 0.000; F change = 0.00, P = .971; β # of appts = 0.011
  Model 3: Trained clinicians
    Step 1 (FTE): R² = 0.000 (adj. −0.007); F change = 0.04, P = .845; β FTE = 0.017
    Step 2 (+ # of appts): R² = 0.142 (adj. 0.129), ΔR² = 0.142; F change = 21.52, P < .0001; β # of appts = −0.483

Time spent in system outside of scheduled hours
  Model 1: All clinicians
    Step 1 (FTE status, # of appts): R² = 0.254 (adj. 0.243); F change = 24.49, P < .0001
      β: FTE = 0.635 (P < .0001), # of appts = 0.315 (P = .001)
    Step 2 (+ training participation): R² = 0.255 (adj. 0.239), ΔR² = 0.001; F change = 0.22, P = .643
      β: training participation = 0.034 (P = .465)
  Model 2: Nontrained clinicians
    Step 1 (FTE): R² = 0.105 (adj. 0.031); F change = 1.42, P = .257; β FTE = 0.325
    Step 2 (+ # of appts): R² = 0.265 (adj. 0.131), ΔR² = 0.159; F change = 2.38, P = .151; β # of appts = 0.483
  Model 3: Trained clinicians
    Step 1 (FTE): R² = 0.197 (adj. 0.191); F change = 32.09, P < .0001; β FTE = 0.444 (P < .001)
    Step 2 (+ # of appts): R² = 0.254 (adj. 0.243), ΔR² = 0.057; F change = 9.98, P = .002; β # of appts = −0.307 (P = .002)


DISCUSSION

The merits of EHR training investment are a topic of regular debate among healthcare administrators, informaticists, and caregiver users. To date, most of the published literature asserting the benefits reaped from EHR training has been based upon qualitative data originating from surveys and interviews of participants. The aim of this study was to identify and demonstrate empirical measures from preexisting EHR use reports that could be meaningfully translated into productivity and its associated fiscal advantages.

In our analysis, 2 measurements, number of appointments per 30-day reporting period and Proficiency score, emerged as significant indicators of improved performance among training participants. At 90 days post-EHR upgrade, we noted an average increase of 16 appointments per 30 days in the trained clinician group vs a decline of 8 appointments per 30 days among nontrained clinicians. Patient demand profoundly exceeds supply in the market where this study was performed: it is not uncommon for a patient to wait 60–90 days to establish care with a primary care clinician or 60–180 days to establish care with a specialty care clinician. Moreover, the culture of the studied healthcare system empowers clinicians to dictate ambulatory clinic volumes, since compensation is mostly determined by productivity, and it is not uncommon for clinicians who struggle with EHR adoption or proficiency to mitigate workload by reducing appointment volumes. An extrapolation of this behavior, set in the context of our findings, suggests trained clinicians’ proficiency with the EHR upgrade outpaced that of nontrained colleagues and directly contributed to improved capacity for larger patient volumes. Furthermore, in comparing time spent in the system outside of scheduled hours, trained clinicians averaged an additional 118 minutes per month, whereas nontrained clinicians averaged an additional 162 minutes despite their decreased appointment volumes.

The Proficiency score is an index constructed by Epic Systems to relay a user’s understanding of and engagement with embedded EHR efficiency tools; therefore, it is not surprising that trained participants outperformed nonparticipants in this measure. We believe the critical discovery is that the Proficiency score is a stronger indicator of a user’s ability to adapt to an EHR upgrade than the PEP score, which portrays user efficiency. The EHR upgrade in our study also involved an instance alignment that relied heavily upon successful data migration. Due to costs, digital solutions could not be built to migrate all data; as a result, many patient medical record components required manual transcription from the previous instance, reducing efficiency. While a user’s efficiency is influenced by proficiency, it can also be influenced by work-arounds, truncated workflows, and necessary re-work.

Our correlation analysis revealed a variety of findings, some contradictory in nature. November FTE status demonstrated a strong positive correlation (0.626) with number of appointments per 30-day reporting period, which supports the intuitive assumption that a clinician with more scheduled hours each week will incur higher patient volumes. However, the correlation between FTE status and Proficiency score was small (0.247), emphasizing that EHR proficiency may be an outcome of education rather than hours of use.

As it relates to number of appointments, a medium positive correlation was noted with time in system per day (0.388) and total time in system per 30-day reporting period (0.430), as expected, since clinicians seeing more patients will spend more time in the system. A more revealing discovery was the negative correlation between number of appointments and time spent in system per appointment. While it would be convenient to assert this improved efficiency was related to training, that is not supported by the small correlation (0.200) between Proficiency score and number of appointments in training participants. Rather, we assert the decrease in time spent in notes per appointment reflects increased workload without a corresponding increase in dedicated charting time, and the decrease in In-Basket time per appointment highlights that clinicians are completing these activities during an office visit. We believe these propositions are further supported by the lack of correlation between Proficiency score and time spent in system per appointment within the trained clinician group.

The last noteworthy correlation to elucidate involves time in system outside of scheduled hours and Proficiency score. A small positive correlation (0.179) was observed, and we postulate that users with a higher Proficiency score, as influenced by training, may be investing time outside of scheduled hours to customize and organize embedded EHR efficiency tools. This supposition is further supported by the lack of correlation between number of appointments and time in system outside of scheduled hours. Trained clinicians experienced the greatest and most significant changes in number of appointments and Proficiency score, yet time in system outside of scheduled hours appeared to be correlated only with improved familiarity with available Epic efficiency tools, not increased patient volumes.

We would be remiss in not exploring the 3 metrics demonstrating the greatest statistically significant change in our initial paired t-test analysis: total time in system per 30-day reporting period, time in system per appointment, and time spent in system outside of scheduled hours. By performing hierarchical multiple regression first for the combined trained and nontrained clinician groups, we determined the percentage of variance explained by training was a mere 1.8% and existed for total time in system per 30-day reporting period only (β = 0.135, P < .05). As a result, we redirected our focus to better understand how the confounding factors, FTE status and number of appointments, influenced the differences between the 2 clinician groups.

For nontrained clinicians, FTE status and number of appointments could not explain the variance for total time in system per 30-day reporting period or for time spent in system outside of scheduled hours; only FTE status could explain 32.7% of the variance observed within time in system per appointment. This suggests differences in nontrained clinicians’ time in system per appointment were best explained by the clinician’s number of scheduled hours per week rather than by appointment volumes. Similarly, 49.2% of the variance noted for trained clinicians’ total time in system per 30-day reporting period was explained solely by FTE status. Therefore, we propose that total time in system per 30-day reporting period reflected the trained clinician’s number of hours scheduled each week and not training participation or patient volumes. Only 14.2% of variance for time in system per appointment could be explained for trained clinicians. It was exclusively attributed to the volume of patient appointments with a negative beta weight of −0.483 (P < .0001) but, given the low percentage of variance explained, we rendered this finding inconclusive.

For time spent in system outside scheduled hours, a total of 25.4% of the variance could be explained by FTE status and number of appointments: 19.7% was attributed to FTE status and the remaining 5.7% to number of appointments. Interestingly, the beta weight observed for FTE status was positive (β = 0.444, P < .01), but for number of appointments it was negative (β = −0.307, P < .01). We believe this indicates trained clinicians with a greater FTE status were able to extend a larger volume of patient appointments and thus reduce the amount of work performed in a nonreimbursable format. In conclusion, hierarchical multiple regression enabled discovery of the heavy influence of FTE status upon these EHR use metrics; as such, we believe this disqualifies them as reliable indicators of clinician EHR proficiency and efficiency.

Several limitations exist within our study. Our sample size was small and, because training participation was a little over 90%, the number of nonparticipants was only 14 clinicians. As a result, we were unable to ascertain how the lack of training impacted EHR use metrics. Moreover, though most nonparticipation was due to extenuating circumstances such as long-planned vacations and family events, training was not mandatory, so clinician participation was self-determined. This introduced participation bias; therefore, inference of causal effects among nonparticipants should be conducted cautiously. The data were limited to a single community institution, and faulty or impaired data migration may also have introduced confounding bias. Lastly, 1 of the study’s indicators of proficiency, the Proficiency score index, is a proprietary construct of Epic Systems, so the finite elements of its calculation were not disclosed. However, because Epic EHR software is used by many US healthcare systems, we believe the findings are broadly applicable.

CONCLUSION

Our study uses empirical evidence to identify the EHR use metrics and indices that describe clinician proficiency in an ambulatory setting. Extrapolation of these EHR use metrics can assist healthcare administrators in forecasting differences in relative value unit (RVU) values and, thus, in approximating reimbursable revenue generated. The analysis largely focused on rudimentary measurements common to most EHR use reports. Moreover, the data captured do not require normalization if applied to matched groups. We believe these findings will aid clinicians, informaticists, and administrators in estimating the return on investment of EHR clinician training programs.

FUNDING

None.

AUTHOR CONTRIBUTIONS

L. Hollister-Meadows and R. Richesson contributed to the design of the project. L. Hollister-Meadows performed the statistical analysis and data analytics and wrote the manuscript in consultation with R. Richesson and J. De Gagne. N. Rawlins supervised the project.

ACKNOWLEDGMENTS

The authors wish to thank Julie Thompson for her statistical review and Elena Turner for proofreading.

CONFLICT OF INTEREST STATEMENT

None declared.


Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
