Browsing by Author "Ivers, Noah M"
Item Open Access
Beyond quality improvement: exploring why primary care teams engage in a voluntary audit and feedback program (2017-12-02)
Wagner, Daniel J; Durbin, Janet; Barnsley, Jan; Ivers, Noah M

Abstract
Background: Despite its popularity, the effectiveness of audit and feedback (A&F) in supporting quality improvement efforts is mixed. While audit and feedback-related research has investigated issues of feedback design and delivery, little attention has been directed toward the factors that motivate interest and engagement with feedback interventions. This study explored the motivating factors that drove primary care teams to participate in a voluntary audit and feedback initiative.
Methods: Interviews were conducted with leaders of primary care teams who had participated in at least one iteration of the audit and feedback program. The intervention was developed by an organization that advocates for high-quality, team-based primary care in Ontario, Canada. Interview transcripts were coded using the Consolidated Framework for Implementation Research, and the resulting framework was analyzed inductively to generate key themes.
Results: Interviews were completed with 25 individuals from 18 primary care teams across Ontario. The majority were Executive Directors (14), Physician leaders (3), and support staff for Quality Improvement (4). A range of motivations for participating in the audit and feedback program beyond quality improvement were emphasized. Primarily, informants believed that the program would eventually become a best-in-class audit and feedback initiative. This reflected concerns regarding existing initiatives, in terms of both their intervention components and their intentions, as well as the perception that an initiative by primary care, for primary care, would better reflect their own goals and better support desired patient outcomes. Key enablers included perceived obligations to engage and the provision of support for the work involved. No teams cited an evidence base for A&F as a motivating factor for participation.
Conclusions: A range of motivating factors, beyond quality improvement, contributed to participation in the audit and feedback program. Findings from this study highlight that efforts to understand how and when the intervention works best cannot be limited to factors within developers' control. Clinical teams may more readily engage with initiatives that have the potential to address their own long-term system goals. Aligning motivations for participation with the goals of the audit and feedback initiative may facilitate both engagement and impact.

Item Open Access
Mapping variation in intervention design: a systematic review to develop a program theory for patient navigator programs (2019-01-08)
Desveaux, Laura; McBrien, Kerry; Barnieh, Lianne; Ivers, Noah M

Abstract
Background: There is a great deal of variation in the design and delivery of patient navigator (PN) programs, making it difficult to design or adopt these interventions in new contexts. We (1) systematically reviewed the literature to generate a preliminary program theory describing how patient navigator interventions are designed and delivered, and (2) describe how the resulting program theory was applied in context to inform a prototype for a patient navigator program.
Methods: The current study is a secondary review of a larger systematic review. We reviewed studies included in the primary review to identify those that designed and evaluated programs to assist patients in accessing and/or adhering to care.
We conducted a content analysis of the included publications to describe the barriers targeted by PN interventions and the navigator activities addressing those barriers. A program theory was constructed by mapping patient navigator activities to corresponding constructs within the capability-opportunity-motivation (COM-B) model of behavior change. The program theory was then presented to individuals with chronic disease, healthcare providers, and system stakeholders, and was refined iteratively based on their feedback.
Results: Twenty-one publications describing 19 patient navigator interventions were included. A total of 17 unique patient navigator activities were reported. The most common included providing education, facilitating referrals, providing social and emotional support, and supporting self-management. The majority of navigator activities targeted barriers to physical opportunity, including facilitating insurance claims, assisting with scheduling, and providing transportation. Across all interventions, navigator activities were designed to target a total of 20 patient barriers. Among interventions reporting positive effects, over two thirds targeted knowledge barriers, problems with scheduling, proactive re-scheduling following a missed appointment, and insurance. The final program design included a total of 13 navigator activities: 10 informed by the original program theory and 3 unique activities informed by stakeholders.
Conclusions: There is considerable heterogeneity in intervention content across patient navigator interventions. Our results provide a schema from which to develop PN interventions and illustrate how an evidence-based model was used to develop a real-world PN intervention. Our findings also highlight a critical need to improve the reporting of intervention components to facilitate translation.
Systematic review registration: PROSPERO CRD42013005857

Item Open Access
Measurement without management: qualitative evaluation of a voluntary audit & feedback intervention for primary care teams (2019-06-24)
Wagner, Daniel J; Durbin, Janet; Barnsley, Jan; Ivers, Noah M

Abstract
Background: The use of clinical performance feedback to support quality improvement (QI) activities is based on the sound rationale that measurement is necessary to improve quality of care. However, concerns persist about the reliability of this strategy, known as Audit and Feedback (A&F), to support QI. If successfully implemented, A&F should reflect an iterative, self-regulating QI process. Whether and how real-world A&F initiatives result in this type of feedback loop is scarcely reported. This study aimed to identify barriers and facilitators to implementation in a team-based primary care context.
Methods: Semi-structured interviews were conducted with key informants from team-based primary care practices in Ontario, Canada. At the time of data collection, practices could have received up to three iterations of the voluntary A&F initiative. Interviews explored whether, how, and why practices used the feedback to guide their QI activities. The Consolidated Framework for Implementation Research was used to code transcripts, and the resulting frameworks were analyzed inductively to generate key themes.
Results: Twenty-five individuals representing 18 primary care teams participated in the study. Analysis of how the A&F intervention was used revealed that implementation reflected an incomplete feedback loop.
Participation was facilitated by reliance on an external resource to carry out the practice audit. The frequency of feedback, concerns about data validity, the design of the feedback report, the resource requirements to participate, and team relationships were all identified as barriers to implementation of A&F.
Conclusions: The implementation of a real-world, voluntary A&F initiative did not lead to the desired QI activities, despite substantial investments in performance measurement. In small primary care teams, it may take long periods of time to develop capacity for QI, and future evaluations may reveal shifts in the implementation state of the initiative. Findings from the present study demonstrate that the potential mechanism of action of A&F may be deceptively clear; in practice, moving from measurement to action can be complex.