Highlights
• Overall referring clinician satisfaction is most closely associated with satisfaction with radiology reports overall.
• Satisfaction with attending radiologist interactions and with the section clinicians work most closely with is also strongly associated.
• We recommend using a prioritization chart to visually identify the highest priority areas to target for improvement.
Abstract
Purpose
We sought to identify which aspects of the referring clinician experience are most strongly correlated with overall satisfaction, and hence of greatest relative importance to referring clinicians.
Methods
A survey instrument assessing referring clinician satisfaction across 11 domains of the radiology process map was distributed to 2720 clinicians. The survey contained sections assessing each process map domain, with each section including a question about overall satisfaction in that domain and multiple more granular questions. The final question on the survey asked about overall satisfaction with the department. Univariable and multivariable logistic regression were performed to assess the association between individual survey questions and overall satisfaction with the department.
Results
729 referring clinicians (27%) completed the survey. On univariable logistic regression, nearly every question was associated with overall satisfaction. Amongst the 11 domains of the radiology process map, multivariable logistic regression identified the following as most strongly associated with overall satisfaction: results/reporting overall (odds ratio 4.71; 95% confidence interval 2.15–10.23), section with which work most closely overall (3.39; 1.28–8.64), and inpatient radiology overall (2.39; 1.08–5.08). Other survey questions associated with overall satisfaction on multivariable logistic regression were attending radiologist interactions (odds ratio 3.71; 95% confidence interval 1.54–8.69), timeliness of inpatient radiology results (2.91; 1.01–8.09), technologist interactions (2.15; 0.99–4.40), appointment availability for urgent outpatient studies (2.01; 1.08–3.64), and guidance for selecting the correct imaging study (1.88; 1.04–3.34).
Conclusion
Referring clinicians most strongly value the accuracy of the radiology report and their interactions with attending radiologists, particularly within the section with which they work most closely.
1. Introduction
Referring clinician surveys are a valuable management tool to learn how a radiology practice is functioning from the perspective of a key constituency, the practitioners who refer patients for imaging and interventions. Results from these surveys are useful for identifying and prioritizing improvement efforts, as well as tracking performance and improvement over time.
The step of prioritization is complex and requires weighing possible improvement projects on multiple factors, including baseline performance of the area to be improved, project feasibility, and how the area to be improved affects clinicians' and patients' overall experience. Some of these factors are easier to assess than others. For example, project feasibility and resources needed are determined based upon how a project matches the operational capacity of an individual department. Baseline performance is measured by either operational data or numerical ratings from surveys. However, how an area impacts clinicians' and patients' overall experience is not as straightforward to determine.
Understanding how each part of the radiology process map impacts clinicians' overall satisfaction is important because referring clinicians may score a particular part of the radiology workflow as low performing even though their overall satisfaction with radiology depends more on other areas of the workflow. Directing resources to improve low performing areas that matter little to clinicians' overall satisfaction may be less constructive than addressing challenges in areas that weigh more heavily on clinician satisfaction, even if those areas are already performing relatively well.
We set out to better understand what parts of the radiology process map are most important to referring clinicians by assessing which survey questions are most closely associated with referring clinician overall satisfaction.
2. Methods
The project is exempt from HIPAA and, as part of the quality improvement operations of the Department of Radiology, is excluded from IRB review. The Department of Radiology employs more than 100 faculty members who collectively provide services to a 980-bed urban academic medical center and the associated outpatient clinical sites of the university's health system. It conducts an annual referring clinician satisfaction survey with a homegrown survey instrument (Appendix 1). The survey was developed in consultation with members of the department's leadership team and its Quality and Safety Committee, and was vetted by a focus group of referring clinicians to ensure the clarity of the questions and the completeness of the survey's scope. Surveys from other institutions' radiology departments, including New York University, Indiana University, and Stanford University, were also reviewed during survey drafting.
The survey is designed around the radiology process map and consists of questions in nine blocks: order entry, outpatient scheduling, prior authorization, inpatient imaging workflow, emergency department imaging workflow, image acquisition, radiology results and reporting, personnel interactions, and outside imaging. The survey also has question blocks pertaining to body interventional radiology and neuro-interventional radiology. In addition, respondents are asked to identify the diagnostic radiology section to which they refer most, and are then presented with a question block pertaining to that section. Each block consists of between two and seven specific questions on the subject of the block, followed by a more general prompt asking the respondent to rate their satisfaction with that area overall. The one exception is the personnel interactions block, which consists of specific questions only and lacks a final question about overall satisfaction with personnel interactions. All of these questions are rated on a five-point Likert scale: extremely dissatisfied, somewhat dissatisfied, neither satisfied nor dissatisfied, somewhat satisfied, and extremely satisfied.
At the end of the survey respondents are asked to rate their satisfaction with the department overall on a seven-point Likert scale (extremely dissatisfied, moderately dissatisfied, slightly dissatisfied, neither satisfied nor dissatisfied, and then corresponding satisfied options) and to answer a Net Promoter Score (NPS) question of “how likely are you to recommend the Department of Radiology to a friend or family member”, which is graded according to standard NPS methodology.
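Standard NPS methodology classifies the 0–10 "likelihood to recommend" responses into promoters (9–10), passives (7–8), and detractors (0–6), and reports the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the department's actual analysis used R and Excel; the function name and sample ratings below are illustrative):

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 'likelihood to recommend' ratings.

    Standard methodology: % promoters (9-10) minus % detractors (0-6);
    passives (7-8) count toward the denominator only.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 6 promoters, 2 passives, 2 detractors out of 10 -> NPS = 40.0
print(net_promoter_score([10, 9, 9, 10, 9, 9, 8, 7, 5, 3]))
```

Note that NPS can range from -100 (all detractors) to +100 (all promoters), which is why it is reported as a score rather than a percentage of satisfied respondents.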
The survey was distributed in October 2020 to 2720 referring clinicians with valid e-mail addresses, with 729 responding (27.4% response rate). All clinicians at our institution who ordered at least 10 radiology studies in the 1-year period leading up to survey distribution were included. Clinicians who lacked a valid e-mail address in our electronic medical record database (n = 185) were excluded.
Distribution was via an e-mail that contained a unique link to the survey in Qualtrics XM Platform (Provo, UT). The survey remained active for three weeks, during which time three reminder e-mails were sent to clinicians who had yet to complete the survey.
Clinician credentials and specialty, and number of studies ordered over a one-year period preceding distribution were determined from the electronic medical record (Epic Systems, Verona, WI). Respondents answered questions about whether they were a trainee and whether they care for ED/inpatients primarily, outpatients primarily, or both. Clinicians were instructed to self-designate as caring for both types of patients if they spent at least 10% of their clinical time caring for each. The time it took for each respondent to complete the survey was recorded by the survey platform.
The respondents' characteristics were summarized by frequency and percentage. To assess the associations between individual survey questions and overall satisfaction, univariable logistic regression was performed for each question. For predictor variables, responses were dichotomized into two categories: extremely satisfied and somewhat satisfied on the 5-point Likert scale, versus neither satisfied nor dissatisfied, somewhat dissatisfied, and extremely dissatisfied. The 7-point Likert scale for the overall satisfaction question was dichotomized into extremely satisfied and moderately satisfied responses versus all other responses. We then used multivariable logistic regression to adjust for other covariates, including number of studies ordered, clinician rank (attending physician, trainee physician, midlevel provider), practice setting (primarily inpatient, outpatient, or mixed), and specialty. Finally, a multivariable logistic analysis was performed including all of the "overall" questions.
Data analysis was performed in Microsoft Excel (Redmond, WA) and R (version 4.0.5). Statistical significance was declared based on p-value < 0.05. No multiple testing adjustment was performed.
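As a concrete illustration of the dichotomization and the univariable association test described above: for a single dichotomized predictor, the odds ratio estimated by univariable logistic regression equals the cross-product ratio of the 2×2 table, and a 95% Wald confidence interval follows from the standard error of the log odds ratio. A minimal sketch in Python (the actual analysis was performed in R; the function names and data below are illustrative):

```python
import math

def dichotomize(response):
    """Collapse a 5-point Likert response to 1 (satisfied) or 0 (not satisfied)."""
    return 1 if response in ("somewhat satisfied", "extremely satisfied") else 0

def univariable_or(x, y):
    """Odds ratio and 95% Wald CI for a binary predictor x and binary outcome y.

    For a single binary predictor, this cross-product ratio (ad/bc) equals the
    odds ratio from univariable logistic regression.
    """
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)  # satisfied on both
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    d = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 0)
    ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    low = math.exp(math.log(ratio) - 1.96 * se)
    high = math.exp(math.log(ratio) + 1.96 * se)
    return ratio, low, high
```

The multivariable models cannot be reduced to a closed-form table like this; they require iterative maximum-likelihood fitting, which is what R's `glm` with a binomial family provides.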
3. Results
The department received 804 responses after distributing 2930 surveys to referring clinicians (28% response rate). Of these, 729 respondents finished the survey and were included in the analysis (91% completion rate). The median time to complete the survey was 7 min 24 s.
Respondents represent a range of specialties, practice settings, and practitioner type (Table 1). The most common specialty was internal medicine, family medicine, or one of their respective subspecialties, which represented 51% of respondents, while 23% of respondents were from a surgical field, 15% from pediatrics or a pediatric subspecialty, and 5% from emergency medicine. A plurality of respondents practice primarily in the outpatient setting (47%), while 35% of respondents practice primarily in the emergency department or inpatient settings, and 17% spend at least 10% of their clinical time caring for patients in both outpatient and ED/inpatient settings. Finally, attending physicians were the majority of respondents (56%), while 20% were physician trainees (residents and fellows) and 24% were midlevel providers.
Table 1. Survey response rates overall and stratified by survey version and clinician characteristics.
On univariable analysis, nearly every survey question was associated with overall satisfaction, as reported in Table 2. The five survey questions most predictive of overall satisfaction were "direct interactions (e-mail, phone, in-person)" with attending radiologists (odds ratio 11.28; 95% confidence interval 5.59–22.93), "interventions/procedures performed" by the radiology section the respondent works most closely with (9.44; 3.90–22.92), neurointerventional radiology overall (8.71; 2.37–32.05), radiology results/reporting overall (7.63; 4.23–13.64), and section with which work most closely overall (6.96; 3.20–14.83).
Table 2. Univariable and multivariable adjusted odds ratios for overall satisfaction score. Bold indicates p < 0.05.
| Question category | Question stem | Univariable odds ratio (95% CI, p-value) | Multivariable odds ratio (95% CI, p-value) |
|---|---|---|---|
| Order entry | Guidance for selecting correct imaging study | **3.26 (1.97–5.37, p < 0.001)** | **1.88 (1.04–3.34, p = 0.033)** |
| | Ease of finding the correct order in EMR | **2.07 (1.27–3.38, p = 0.003)** | 1.13 (0.64–1.99, p = 0.667) |
| | Ease of completing order in EMR | **2.83 (1.72–4.64, p < 0.001)** | 1.69 (0.94–2.97, p = 0.074) |
| | Order entry overall | **2.54 (1.55–4.15, p < 0.001)** | 1.19 (0.61–2.25, p = 0.592) |
| Outpatient scheduling | Ease of scheduling routine (non-urgent) studies | **2.58 (1.49–4.36, p < 0.001)** | 1.76 (0.95–3.16, p = 0.065) |
| | Appointment availability for routine (non-urgent) studies | **2.89 (1.68–4.87, p < 0.001)** | 1.70 (0.90–3.08, p = 0.090) |
| | Appointment availability for urgent studies | **3.11 (1.79–5.30, p < 0.001)** | **2.01 (1.08–3.64, p = 0.025)** |
| | Convenience of imaging facility locations for your patients | 1.77 (0.93–3.21, p = 0.068) | 0.99 (0.47–1.95, p = 0.982) |
| | Outpatient scheduling overall | **2.62 (1.50–4.46, p < 0.001)** | 1.57 (0.75–3.22, p = 0.224) |
| Prior authorizations | Timeliness of prior authorizations | **1.83 (1.09–3.02, p = 0.020)** | 1.16 (0.64–2.04, p = 0.617) |
| | Explanations received for prior authorization denials | **1.99 (1.18–3.29, p = 0.008)** | 1.36 (0.76–2.40, p = 0.291) |
| | Prior authorizations overall | **1.86 (1.11–3.07, p = 0.017)** | 1.04 (0.52–2.03, p = 0.913) |
| Inpatient radiology | Speed to obtain emergent examinations | **2.54 (1.19–5.05, p = 0.011)** | 0.74 (0.23–2.18, p = 0.608) |
| | Speed to obtain non-emergent examinations | **2.29 (1.17–4.26, p = 0.012)** | 1.25 (0.50–2.86, p = 0.614) |
| | Communication and coordination between the radiology staff and inpatient care team | **3.65 (1.96–6.61, p < 0.001)** | 1.80 (0.66–4.61, p = 0.232) |
| | Timeliness of results | **5.69 (2.86–11.04, p < 0.001)** | **2.91 (1.01–8.09, p = 0.043)** |
| | Inpatient diagnostic radiology overall | **4.32 (2.29–7.91, p < 0.001)** | **2.39 (1.08–5.08, p = 0.027)** |
| ED radiology | Speed to obtain emergent examinations | 2.67 (0.85–7.06, p = 0.063) | 1.41 (0.37–4.56, p = 0.585) |
| | Speed to obtain non-emergent examinations | **2.73 (0.97–6.70, p = 0.039)** | 0.78 (0.21–2.42, p = 0.679) |
| | Communication and coordination between the radiology staff and ED care team | **4.77 (1.74–12.06, p = 0.001)** | 1.97 (0.57–6.43, p = 0.268) |
| | Timeliness of results | **2.77 (1.06–6.44, p = 0.024)** | 1.19 (0.38–3.31, p = 0.754) |
| | ED diagnostic radiology overall | **2.73 (0.97–6.70, p = 0.039)** | 1.06 (0.30–3.24, p = 0.925) |
| Image acquisition | Image quality | **4.33 (2.13–8.45, p < 0.001)** | 1.14 (0.45–2.66, p = 0.779) |
| | Study performed as requested | **4.69 (2.17–9.69, p < 0.001)** | 1.42 (0.52–3.55, p = 0.473) |
| | Image acquisition overall | **4.50 (2.09–9.23, p < 0.001)** | 0.92 (0.32–2.41, p = 0.872) |
| Radiologist interpretation and reporting | Outpatient results available in a timely manner | **5.42 (1.60–16.70, p = 0.004)** | 1.31 (0.31–4.99, p = 0.704) |
| | Clarity of radiology reports | **5.77 (3.00–10.84, p < 0.001)** | 1.43 (0.56–3.49, p = 0.448) |
| | Accuracy of radiology reports | **3.06 (1.53–5.79, p = 0.001)** | 0.37 (0.13–1.00, p = 0.060) |
| | Receiving results via phone | **3.16 (1.82–5.38, p < 0.001)** | 1.26 (0.63–2.42, p = 0.505) |
| | Results communicated by Relay Center via email/fax | **2.10 (1.19–3.61, p = 0.008)** | 0.89 (0.44–1.72, p = 0.741) |
| | Radiology results/reports overall | **7.63 (4.23–13.64, p < 0.001)** | **4.71 (2.15–10.23, p < 0.001)** |
| Direct interactions with personnel (e-mail, phone, in person) | Attending radiologists | **11.28 (5.59–22.93, p < 0.001)** | **3.71 (1.54–8.69, p = 0.003)** |
| | Radiology residents/fellows | **4.28 (2.15–8.19, p < 0.001)** | 1.26 (0.51–2.90, p = 0.601) |
| | Schedulers | **2.24 (1.25–3.87, p = 0.005)** | 0.94 (0.45–1.86, p = 0.867) |
| | Prior authorization staff | 1.76 (0.95–3.10, p = 0.060) | 0.77 (0.37–1.51, p = 0.464) |
| | Front desk staff | **2.29 (1.17–4.26, p = 0.012)** | 0.81 (0.33–1.79, p = 0.620) |
| | Technologists | **4.11 (2.15–7.60, p < 0.001)** | **2.15 (0.99–4.40, p = 0.043)** |
| | Image library staff | 1.88 (0.96–3.46, p = 0.052) | 0.74 (0.32–1.56, p = 0.451) |
| Outside imaging studies | Ease of submitting outside studies for loading into PACS | **1.79 (1.07–2.96, p = 0.024)** | 1.09 (0.59–1.92, p = 0.783) |
| | Speed of loading outside studies into PACS | 1.54 (0.92–2.53, p = 0.097) | 0.96 (0.53–1.67, p = 0.878) |
| | Helpfulness of image library staff | **2.15 (1.19–3.74, p = 0.009)** | 1.14 (0.57–2.17, p = 0.698) |
| | Process for requesting secondary interpretation of outside studies | 1.47 (0.87–2.44, p = 0.141) | 0.73 (0.38–1.33, p = 0.318) |
| | Outside imaging processes overall | 1.64 (0.98–2.70, p = 0.054) | 0.73 (0.38–1.37, p = 0.344) |
| Radiology section with which work most closely | Ease of reaching a radiologist to discuss patient care | **4.57 (2.18–9.17, p < 0.001)** | 1.45 (0.53–3.66, p = 0.450) |
| | Quality of consultations via phone/zoom/e-mail/in person | **7.14 (3.17–15.72, p < 0.001)** | 2.28 (0.75–6.57, p = 0.134) |
| | Quality of interpretations | **6.71 (3.00–14.60, p < 0.001)** | 1.02 (0.30–3.24, p = 0.974) |
| | Interventions/procedures performed | **9.44 (3.90–22.92, p < 0.001)** | 2.54 (0.77–7.90, p = 0.113) |
| | Quality of conference support | **5.14 (2.36–10.73, p < 0.001)** | 2.31 (0.78–6.13, p = 0.109) |
| | Satisfaction with section overall | **6.96 (3.20–14.83, p < 0.001)** | **3.39 (1.28–8.64, p = 0.012)** |
| Body interventional radiology | Ease of requesting interventions | **2.21 (1.00–4.48, p = 0.035)** | 1.10 (0.44–2.49, p = 0.828) |
| | Ease of scheduling outpatient interventions | 1.37 (0.54–2.98, p = 0.464) | 1.05 (0.37–2.53, p = 0.923) |
| | Speed to accommodate inpatients for emergent interventions | **2.41 (1.13–4.77, p = 0.016)** | 1.58 (0.64–3.52, p = 0.290) |
| | Speed to accommodate inpatients for non-emergent interventions | 1.76 (0.86–3.35, p = 0.099) | 0.97 (0.41–2.07, p = 0.936) |
| | Satisfaction with patient outcomes | **5.11 (2.09–11.81, p < 0.001)** | 2.98 (0.93–8.73, p = 0.054) |
| | Communication with radiologist performing the intervention | **2.15 (1.05–4.14, p = 0.028)** | 1.26 (0.54–2.73, p = 0.574) |
| | Communication and coordination between the radiology staff and care team | **2.06 (1.00–3.96, p = 0.037)** | 1.25 (0.54–2.70, p = 0.581) |
| | Body interventional radiology overall | **2.47 (1.12–5.05, p = 0.017)** | 1.11 (0.40–2.74, p = 0.830) |
| Neuro interventional radiology | Ease of requesting interventions | **4.42 (1.63–11.02, p = 0.002)** | 1.77 (0.54–5.21, p = 0.319) |
| | Ease of scheduling outpatient interventions | 1.38 (0.32–4.21, p = 0.612) | 0.88 (0.17–3.09, p = 0.858) |
| | Speed to accommodate inpatients for emergent interventions | **4.27 (1.12–13.93, p = 0.020)** | 1.97 (0.37–8.28, p = 0.386) |
| | Speed to accommodate inpatients for non-emergent interventions | 2.51 (0.81–6.58, p = 0.079) | 1.13 (0.29–3.59, p = 0.846) |
| | Satisfaction with patient outcomes | **5.71 (1.43–20.49, p = 0.008)** | 2.31 (0.40–11.36, p = 0.318) |
| | Communication with radiologist performing the intervention | 1.87 (0.53–5.17, p = 0.271) | 0.68 (0.15–2.40, p = 0.574) |
| | Communication and coordination between the radiology staff and care team | | |
| | Neuro interventional radiology overall | **8.71 (2.37–32.05)** | |
The prompts next most closely associated with overall satisfaction were quality of interpretations within the section with which the respondent works most closely (6.71; 3.00–14.60), clarity of radiology reports (5.77; 3.00–10.84), satisfaction with patient outcomes for neurointerventional radiology (5.71; 1.43–20.49), timeliness of inpatient results (5.69; 2.86–11.04), timeliness of outpatient results (5.42; 1.60–16.70), and satisfaction with patient outcomes for body interventional radiology (5.11; 2.09–11.81) (Table 2).
To account for possible confounders, we first assessed whether the number of studies ordered, clinician rank (attending physician, trainee physician, midlevel provider), practice setting (primarily inpatient, outpatient, or mixed), and specialty (as reported in Table 1) were associated with overall satisfaction by univariable logistic regression. Only clinician rank was significantly associated with overall satisfaction, and it was therefore included in the multivariable logistic regression: relative to attending physicians, physician trainees had an odds ratio of 3.40 (95% confidence interval 1.45–9.97, p = 0.011) and midlevel providers an odds ratio of 0.80 (0.47–1.40, p = 0.426).
Due to the large number of questions associated with overall satisfaction, we performed a multivariable regression analysis to determine which of the "satisfaction overall" prompts from each survey section (i.e., the final question in each question block) are independently associated with overall satisfaction with the radiology department as a whole (Table 3). This identified three of the final prompts as independently associated with overall satisfaction: results/reporting overall (odds ratio 4.71; 95% confidence interval 2.15–10.23), section with which work most closely overall (3.39; 1.28–8.64), and inpatient radiology overall (2.39; 1.08–5.08). Adjusting for the responses to these three "overall" questions, the following five questions persisted in their association with overall satisfaction: attending radiologist interactions (odds ratio 3.71; 95% confidence interval 1.54–8.69), timeliness of inpatient radiology results (2.91; 1.01–8.09), technologist interactions (2.15; 0.99–4.40), appointment availability for urgent outpatient studies (2.01; 1.08–3.64), and guidance for selecting the correct imaging study (1.88; 1.04–3.34).
Table 3. Univariable and multivariable adjusted odds ratios for overall satisfaction score. Bold indicates p < 0.05.
| Overall question stem | Univariable odds ratio (95% CI, p-value) | Multivariable odds ratio (95% CI, p-value) |
|---|---|---|
| Order entry overall | **2.54 (1.55–4.15, p < 0.001)** | 1.19 (0.61–2.25, p = 0.592) |
| Outpatient scheduling overall | **2.62 (1.50–4.46, p < 0.001)** | 1.57 (0.75–3.22, p = 0.224) |
| Prior authorizations overall | **1.86 (1.11–3.07, p = 0.017)** | 1.04 (0.52–2.03, p = 0.913) |
| Inpatient diagnostic radiology overall | **4.32 (2.29–7.91, p < 0.001)** | **2.39 (1.08–5.08, p = 0.027)** |
| ED diagnostic radiology overall | **2.73 (0.97–6.70, p = 0.039)** | 1.06 (0.30–3.24, p = 0.925) |
| Image acquisition overall | **4.50 (2.09–9.23, p < 0.001)** | 0.92 (0.32–2.41, p = 0.872) |
| Radiology results/reports overall | **7.63 (4.23–13.64, p < 0.001)** | **4.71 (2.15–10.23, p < 0.001)** |
| Outside imaging processes overall | 1.64 (0.98–2.70, p = 0.054) | 0.73 (0.38–1.37, p = 0.344) |
| Satisfaction with section work most closely with overall | **6.96 (3.20–14.83, p < 0.001)** | **3.39 (1.28–8.64, p = 0.012)** |
4. Discussion
The survey questions most strongly associated with referring clinician overall satisfaction include prompts concerning reporting (results/reports overall, timeliness of inpatient radiology results), service (attending radiologist interactions, technologist interactions, and section with which work most closely overall), and making interfacing with radiology as frictionless as possible (inpatient radiology overall, appointment availability for urgent outpatient studies, and guidance for selecting the correct imaging study).
While these were the questions associated with overall satisfaction on multivariable analysis, nearly all survey questions were associated with overall satisfaction on univariable analysis. This indicates that many survey questions are correlated. One explanation is that people who are generally more satisfied with radiology services are predisposed to answer any question on the survey more favorably, or to select the same answer repeatedly in order to complete the survey quickly. Another explanation is that individual survey questions often assess related aspects of the imaging process map, and the factors that make clinicians more or less satisfied with one item also relate to their satisfaction with other items. These hypotheses could be tested by evaluating the correlation between responses to sets of survey questions.
The main use of the Radiology Referring Clinician Survey is to help prioritize improvement projects. Understanding the association between individual survey questions and clinician overall satisfaction forms one dimension upon which our department prioritizes improvement projects. In addition to considering which questions are most strongly associated with overall satisfaction, we also weigh the average response score for each question and the feasibility of any proposed interventions. If a question has a high correlation with overall satisfaction but is already high scoring, there is little room to further improve performance. Similarly, a low scoring question that has little correlation with overall satisfaction indicates that improvement efforts in this domain may have limited impact on the overall referring clinician experience.
To visually identify the highest priority areas for improvement, we display the survey questions on a scatterplot that shows each question's average score on the y-axis and its odds ratio for association with overall satisfaction on the x-axis (Fig. 1). Data points that are further to the right (higher odds ratio) and lower down (lower average score) are areas in which improvement efforts have greater potential to impact clinicians' overall satisfaction. The feasibility of any intervention, including considerations of cost, human resources required, and political viability, also needs to be taken into account. This feasibility assessment is performed separately after the department has identified high priority areas for improvement projects. Projects that have come out of this analysis include operationalizing patient self-scheduling (scheduling & access), improving inpatient MRI access (inpatient radiology), developing ordering guides (order entry), and revamping prior authorization software (prior authorization).
Fig. 1. Scatter plot of survey questions: each question's average score plotted against its odds ratio for association with the overall satisfaction question.
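The prioritization logic behind the scatterplot can be sketched programmatically: rank questions so that a higher odds ratio and a lower average score both raise priority. The data below are hypothetical and only illustrate the ranking; in practice each question's actual survey average and regression odds ratio would be used:

```python
# Hypothetical (question, average_score, odds_ratio) tuples; the scores are
# invented for illustration and are not the study's data.
questions = [
    ("Guidance for selecting correct imaging study", 3.9, 1.88),
    ("Appointment availability for urgent studies", 3.7, 2.01),
    ("Radiology results/reports overall", 4.4, 4.71),
    ("Prior authorizations overall", 3.5, 1.04),
]

# Points "to the right and lower down" on the scatterplot come first:
# sort by descending odds ratio, breaking ties by ascending average score.
ranked = sorted(questions, key=lambda q: (-q[2], q[1]))
for name, score, ratio in ranked:
    print(f"{name}: avg score {score}, OR {ratio}")
```

A lexicographic sort like this lets the odds ratio dominate; the scatterplot itself is more flexible, since readers weigh the two axes visually and can flag a very low-scoring question even when its odds ratio is moderate.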
One limitation of our study is that we ran a single analysis including all clinicians and did not examine whether certain groups of clinicians differ in which survey questions are most closely associated with overall satisfaction scores. A study by Larson and Hwang evaluated how primary care, specialty care, and urgent/emergency care clinicians differ in how they perceive value in radiology services.
Their work argues that referring clinicians “are not monolithic”; rather, different groups of clinicians have different needs and value components of radiology services differently. We believe Larson and Hwang's framework of referring clinician heterogeneity is apt and relevant. If we were to perform our regression analyses on subgroups of referring clinicians, we would likely find that certain questions are more strongly associated with overall satisfaction for primary care clinicians and others for hospitalists, surgeons, internal medicine subspecialists, or ED clinicians. For instance, prompts about timeliness of outpatient results would not be relevant for hospitalists, and conversely, timeliness of inpatient results would not be relevant for primary care clinicians. While such subgroup results could be interesting, they are likely to be less relevant for our purpose of identifying and prioritizing areas for improvement.
Another limitation of our study is that our response rate was 27%, which raises concerns about nonresponse bias. A higher response rate would allow for more confidence that our sample is representative of the entire population of our institution's referring clinicians. However, our response rate (27%) and number of responses (729) both compare favorably to those of previously published radiology department referring clinician surveys. For example, among recent such surveys at US academic medical centers, Larson and Hwang received 514 responses for a 15% response rate, Johnson et al. received 168 responses for a 23% response rate, and Mehan Jr. et al. received 249 responses for a 5% response rate.
Additionally, a body of scholarly work argues that representativeness is more important than response rate in ensuring the validity of results.
As demonstrated by the range of specialties, practice settings, and medical credentials of the survey respondents (Table 1), our sample contains a diversity of clinician characteristics.
An additional limitation is that we used a novel survey instrument. It is possible that our questions left unassessed gaps in the radiology process map, and we may not have surveyed about an important aspect of department operations.
Finally, there are limitations to the generalizability of our results. The survey was performed at a single academic medical center that is served by a highly specialized radiology department. Our results are likely skewed by the relatively higher complexity of patients and imaging at our institution compared to centers that care for less complex patient populations. However, even if our results may have limited generalizability to other centers, we believe that the analysis we have performed can and should serve as a model for other radiology practices to adopt.
For future steps, we would like to see our survey employed in different healthcare settings to assess how responses vary amongst academic centers, between academic and community practices, and across geographic regions. To what extent are clinician values the same or different across radiology practices? How do different practices' prioritization plots compare to one another? Ultimately, understanding how referring clinicians value radiology services will allow all practices to deliver higher quality service and continue to lead in the era of value-based healthcare.
5. Conclusion
The results of this study reveal the domains most closely associated with clinician overall satisfaction, spanning reporting, service, and frictionless interfacing with the radiology department. The three most strongly associated question prompts are results/reporting overall, attending radiologist interactions, and section with which work most closely overall. These results indicate that clinicians most strongly value the final radiology work product, the report, and their interactions with attending radiologists, particularly within the section with which they work most closely. Additionally, we propose a model for prioritizing improvement opportunities within a radiology department and encourage other departments to engage in a similar exercise. Absent their own data, practices wishing to improve the experience of their referring clinicians would be prudent to consider how they can improve the quality of the final radiology report and how their radiologists can provide better personalized service to referring clinicians.