Problems with Ferguson et al., owing to scaling and granularity.
NAM Perspectives, 2015
Background: Despite greater spending on health care and biomedical research, the United States has poorer health outcomes than comparable nations. Information is needed on the potential impact of interventions to better guide resource allocation. Objective: To assess whether research on interventions is concentrated in areas with the greatest potential population health benefit. Design: Secondary data analysis to perform a best-case study of the potential population impact of published intervention studies. Study selection: A random sample of 20 intervention studies published in the New England Journal of Medicine in 2011. Data extraction: One reviewer extracted data using a standardized form, and another reviewer verified the data. Measurements: The incremental gain of applying the intervention versus the control, estimated in quality-adjusted life years (QALYs) at the population level. Results: Of the 20 studies, 13 had a statistically significant effect size, and 3 studies accounted for 80 percent of the total population health impact. Studies of less common conditions had a smaller population health impact, though a greater individual-level impact. Studies generally did not report the information required to estimate the anticipated population health impact. Limitations: The heterogeneity of outcome measures and the use of multiple data sources result in a large degree of uncertainty in the estimates. The use of an intervention effect measured in a study setting is likely to overestimate its real-world impact. Although random, the sample of studies selected here may not be representative of intervention studies in general. Conclusions: Research priorities should be heavily informed by the potential population health impact. Researchers, proposal reviewers, and funders should understand those impacts before intervention studies are initiated. We recommend that this information be uniformly included in research proposals and reports.
* Indicates articles describing a secondary analysis of a study; n/a, not applicable based on the criterion that there was no significant gain in effect between intervention and control procedure; HR, hazard ratio. Only the first author's name of each study is cited here; see references for full citations.
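The best-case estimate described in the abstract above amounts to scaling the incremental per-person QALY gain up to the population that could receive the intervention. As a rough sketch of that arithmetic only (the function name and the uptake adjustment are illustrative assumptions, not the authors' actual model):

```python
def population_qaly_impact(incremental_qaly_per_person: float,
                           eligible_population: int,
                           uptake: float = 1.0) -> float:
    """Best-case population health impact: the per-person incremental
    QALY gain (intervention vs. control) scaled to the number of people
    who could receive the intervention. `uptake` (0..1) optionally
    discounts for incomplete real-world adoption."""
    return incremental_qaly_per_person * eligible_population * uptake

# e.g. a 0.1 QALY per-person gain for a condition affecting 100,000
# people, with 50% uptake:
impact = population_qaly_impact(0.1, 100_000, 0.5)  # 5000.0 QALYs
```

This also illustrates the abstract's point about rare conditions: a large per-person gain multiplied by a small eligible population can still yield a small total.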
South African Crime Quarterly, 2015
BMJ (Clinical research ed.), 2013
Objectives To evaluate the completeness of descriptions of non-pharmacological interventions in randomised trials, identify which elements are most frequently missing, and assess whether authors can provide missing details.
Journal of Clinical Epidemiology, 2013
Objectives: The goal of this systematic review was to evaluate whether the influence of methodological features on treatment effect differs between types of intervention.
Journal of Clinical Epidemiology, 2020
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Distributed under a Creative Commons Attribution-NonCommercial 4.0 International License.
World Development, 2020
Informed decision-making is increasingly promoted in global healthcare policy to improve evidence-based treatment decisions, yet its uptake into clinical practice has been slow. While current clinical practice guidelines and synthesised products (e.g. systematic reviews, evidence summaries) increase access to evidence of treatment effects, they often fail to provide the additional information needed to make a fully informed choice. Moreover, a lack of brevity and the use of research jargon are barriers to their implementation in practice. As well as information about the benefits and harms of healthcare interventions, consumers need additional information, such as treatment costs, dosage, and the quality of evidence underpinning treatment effects, in order to make informed choices. Currently, few clinical practice guidelines or synthesised products present this additional information in one place and in a concise manner. Here we present such a method and describe how we developed the Evidence of Effects Page: a one-page summary that presents information not only on treatment effects and harms but also on the precision of estimates, the quality of evidence, and treatment costs. Our methodology can be applied to create further Evidence of Effects Pages for treatments and interventions for other medical conditions as required. Compared with current products, the Evidence of Effects Page provides additional information for making decisions about healthcare treatments and has the potential to facilitate greater adoption of informed decision-making and patient-centred care in clinical practice.
Health Services Research, 2010
Objective. To determine whether investigations of heterogeneity of treatment effects (HTE) in randomized controlled trials (RCTs) are prespecified and whether authors' interpretations of their analyses are consistent with the objective evidence. Data Sources/Study Setting. We reviewed 87 RCTs that reported formal tests for statistical interaction or heterogeneity (HTE analyses), derived from a probability sample of 541 articles. Data Collection/Extraction. We recorded reasons for performing HTE analysis; an objective classification of evidence for HTE (termed "clinicostatistical divergence" [CSD]); and authors' interpretations of findings. Authors' interpretations, compared with CSD, were coded as understated, overstated, or adequately stated. Principal Findings. Fifty-three RCTs (61 percent) claimed prespecified covariates for HTE analyses. Trials showed strong (6), moderate (11), weak (25), or negligible (16) evidence for CSD (29 could not be classified due to inadequate information). Authors stated that evidence for HTE was sufficient to support differential treatment in subgroups (10); warranted more research (31); or was absent (21); or they provided no interpretation (25). HTE was overstated in 22 trials, adequately stated in 57 trials, and understated in 8 trials. Conclusions. Inconsistencies in performance and reporting may limit the potential of HTE analysis as a tool for identifying HTE and individualizing care in diverse populations. Recommendations for future studies on the reporting and interpretation of HTE analyses are provided.
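The formal tests for statistical interaction counted in the abstract above typically compare a treatment effect across subgroups. A minimal sketch of one common two-subgroup version (a z-test of interaction on independent subgroup estimates; the function and the example inputs are illustrative, not taken from the reviewed trials):

```python
import math

def interaction_test(effect1: float, se1: float,
                     effect2: float, se2: float) -> tuple[float, float]:
    """Z-test for heterogeneity of a treatment effect between two
    independent subgroups. Effects must be on a common additive scale
    (e.g. log hazard ratios); returns (z, two-sided p-value)."""
    z = (effect1 - effect2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # two-sided p from the standard normal CDF, via math.erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# e.g. log-HR 0.50 (SE 0.20) in one subgroup vs 0.10 (SE 0.20) in another:
z, p = interaction_test(0.50, 0.20, 0.10, 0.20)
```

A small p suggests the subgroup effects differ by more than chance; the abstract's finding is that authors' claims were often stronger than such objective evidence warranted.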