Browsing by Author "Blaine, Caroline"
Item (Open Access)

Enhancing the uptake of systematic reviews of effects: what is the best format for health care managers and policy-makers? A mixed-methods study (2018-06-22)

Marquez, Christine; Johnson, Alekhya M; Jassemi, Sabrina; Park, Jamie; Moore, Julia E; Blaine, Caroline; Bourdon, Gertrude; Chignell, Mark; Ellen, Moriah E; Fortin, Jacques; Graham, Ian D; Hayes, Anne; Hamid, Jemila; Hemmelgarn, Brenda; Hillmer, Michael; Holmes, Bev; Holroyd-Leduc, Jayna; Hubert, Linda; Hutton, Brian; Kastner, Monika; Lavis, John N; Michell, Karen; Moher, David; Ouimet, Mathieu; Perrier, Laure; Proctor, Andrea; Noseworthy, Thomas; Schuckel, Victoria; Stayberg, Sharlene; Tonelli, Marcello; Tricco, Andrea C; Straus, Sharon E

Abstract

Background: Systematic reviews are infrequently used by health care managers (HCMs) and policy-makers (PMs) in decision-making. HCMs and PMs co-developed and tested novel formats for systematic reviews of effects to increase their use.

Methods: A three-phased approach was used to evaluate the determinants of uptake of systematic reviews of effects and the usability of an innovative format and a traditional format. In phase 1, a survey and interviews were conducted with HCMs and PMs in four Canadian provinces to determine perceptions of a traditional systematic review format. In phase 2, systematic review format prototypes were created by HCMs and PMs via Conceptboard©. In phase 3, the prototypes underwent usability testing by HCMs and PMs.

Results: Two hundred and two participants (80 HCMs, 122 PMs) completed the phase 1 survey. Respondents reported that inadequate format (Mdn = 4; IQR = 4; range = 1–7) and content (Mdn = 4; IQR = 3; range = 1–7) influenced their use of systematic reviews. Most respondents (76%; n = 136/180) reported they would be more likely to use systematic reviews if the format were modified. Findings from 11 interviews (5 HCMs, 6 PMs) revealed that participants preferred systematic reviews of effects that were easy to access and read, and that provided more information on intervention effectiveness and less on review methodology. The mean System Usability Scale (SUS) score for the traditional format was 55.7 (standard deviation [SD] 17.2); a SUS score below 68 indicates below-average usability. In phase 2, 14 HCMs and 20 PMs co-created two prototypes, one for HCMs and one for PMs. HCMs preferred a traditional information order (i.e., methods, study flow diagram, forest plots), whereas PMs preferred an alternative order (i.e., background and key messages on one page; methods and limitations on another). In phase 3, the prototypes underwent usability testing with 5 HCMs and 7 PMs; 11 of the 12 participants had also co-created the prototypes (mean SUS score 86 [SD 9.3]).

Conclusions: HCMs and PMs co-created prototypes for systematic review of effects formats based on their needs. The prototypes will be compared to a traditional format in a randomized trial.
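The SUS figures quoted in the abstract (55.7 for the traditional format, 86 for the prototypes, against the ~68 average benchmark) follow the standard 0–100 SUS scoring rule. As a minimal illustrative sketch only (not the study's analysis code; the function name sus_score is hypothetical), the standard conversion of ten 1–5 item responses looks like this:

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten 1-5 item responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The sum is multiplied by 2.5, giving a score on a 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive response pattern scores above the ~68 benchmark.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```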