Comparative Effectiveness Research
The Institute of Medicine defines comparative effectiveness research as the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition or to improve the delivery of care. The purpose of comparative effectiveness research is to help consumers, clinicians, purchasers and policy makers make informed decisions that will improve health care at both the individual and population levels. To this end, comparative effectiveness research focuses on real-world settings rather than laboratory settings.
Different types of research studies can constitute comparative effectiveness research, including randomised clinical trials, observational studies, systematic reviews and meta-analyses. Each type of study design, with its particular strengths and limitations, has an important and complementary role in comparative effectiveness research.
Challenges in Methodology
Manufacturers of new drugs need to demonstrate that their products are efficacious and safe for a defined group of patients in order to obtain market approval. However, regulators require these outcomes to be demonstrated relative to existing therapies only when use of a placebo is deemed unethical. Comparative assessment is therefore often conducted, or its results made available, only after the therapy is already on the market. This lack of early comparative efficacy evidence can result in the widespread use of drugs that are less efficacious or less safe than existing alternatives.
Several methodological challenges in generating comparative efficacy/effectiveness evidence may contribute to this situation.
- Randomised controlled trials: considered the gold standard for assessing comparative efficacy. However, they require samples that are large enough, and follow-up that is long enough, to detect meaningful differences in outcomes, and their strict protocols can limit the external validity of the results. Sub-categories of randomised controlled trials, such as adaptive and pragmatic clinical trials, aim to overcome these difficulties but face challenges of their own.
- Observational studies: the most common form of comparative effectiveness study. However, observational studies face well-known difficulties such as confounding, limited precision and other disease-specific or data-related issues.
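The sample-size difficulty noted for randomised trials can be made concrete with the standard normal-approximation formula for a two-arm comparison of means. This is a generic textbook sketch, not a method from the text; the function name `n_per_arm` and the default significance/power values are illustrative choices.

```python
import math
from statistics import NormalDist


def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a standardized
    difference `effect_size` (delta / sigma) with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)


# A moderate effect needs 63 patients per arm; a small head-to-head
# difference of 0.2 standard deviations needs 393 per arm.
print(n_per_arm(0.5), n_per_arm(0.2))
```

Because head-to-head differences between two active therapies are usually much smaller than drug-versus-placebo differences, required sample sizes grow quadratically as the expected effect shrinks, which is one reason comparative trials are costly and slow.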
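The confounding problem noted for observational studies can likewise be sketched with a small simulation of confounding by indication, where sicker patients are more likely to receive the new treatment. All numbers here (effect sizes, treatment probabilities) are hypothetical, chosen only to show how a naive comparison is biased while a stratified comparison recovers the true effect.

```python
import random
import statistics


def simulate(n=20000, true_effect=1.0, severity_effect=-2.0, seed=42):
    """Simulate observational data where a severity confounder drives
    both treatment assignment and the outcome."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        severe = rng.random() < 0.5              # binary confounder
        p_treat = 0.7 if severe else 0.3         # sicker -> more often treated
        treated = rng.random() < p_treat
        outcome = (true_effect * treated
                   + severity_effect * severe
                   + rng.gauss(0, 1))            # random noise
        rows.append((treated, severe, outcome))
    return rows


def mean_outcome(rows, treated, severe=None):
    return statistics.fmean(
        y for t, s, y in rows
        if t == treated and (severe is None or s == severe))


rows = simulate()

# Naive comparison ignores severity and is badly biased (about 0.2
# here, versus a true effect of 1.0).
naive = mean_outcome(rows, True) - mean_outcome(rows, False)

# Stratifying on the confounder and averaging the within-stratum
# differences recovers an estimate close to the true effect.
adjusted = statistics.fmean(
    mean_outcome(rows, True, s) - mean_outcome(rows, False, s)
    for s in (False, True))
```

Real analyses face the harder problem that important confounders may be unmeasured, in which case no amount of stratification or regression adjustment can remove the bias.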
Challenges in Application
Despite widespread enthusiasm about the potential impact of new investments in comparative effectiveness research, scientific evidence has been shown to be slow to change clinical practice. Five of the main reasons for this difficulty are:
1. Misalignment of financial incentives
Economic incentives, including the pervasiveness of both fee-for-service reimbursement and generous insurance coverage, are among the most commonly cited factors that lead providers and patients to select treatments inconsistent with evidence from comparative effectiveness research. Perverse financial incentives push both patients and providers to disregard the evidence and pursue aggressive treatments even when these are no more effective than more conservative approaches. Economic incentives also influence other steps of the translation pathway, including the subsequent interpretation, formalisation and dissemination of the evidence. For example, pharmaceutical and device manufacturers have commonly used a variety of approaches, such as paying key opinion leaders to disseminate favourable messages to their peers or through the mass media, running detailing programs to educate clinicians about medical technologies, and direct-to-consumer advertising, to influence interpretation of the evidence and its adoption.
2. Ambiguity of results
The number of outcomes assessed in a typical study, and varying opinions about what constitutes a clinically meaningful effect size, pose major barriers to reaching consensus on study results. Without consensus on evidentiary standards prior to the release of comparative effectiveness results, ambiguous results become fuel for competing interpretations, making it difficult for providers, insurers and policy makers to act on the evidence. Methodological critiques of comparative effectiveness studies, whether or not they are justified, may further complicate interpretation of the results: in attempting to replicate real-world practice settings, these studies typically sacrifice some internal validity.
3. Cognitive biases in interpreting new information
Three common cognitive biases that affect the processing of new information, and therefore affect the application of evidence based on comparative effectiveness research are:
- Confirmation bias: a tendency to embrace evidence that confirms pre-conceived ideas and to reject contrary evidence.
- Pro-intervention bias: a tendency to choose action over inaction even if the marginal benefit of action is very small.
- Pro-technology bias: an uncritical tendency to believe that newer forms of technology are superior.
4. Failure to address the needs of end users
Clinicians, patients, and policy makers may have different expectations about the goals and uses of comparative effectiveness research. Although a single comparative effectiveness study cannot realistically meet the information needs of all stakeholders, explicit consensus on the goals of each study could minimise variability in the interpretation of results. Comparative effectiveness research tends to answer questions related to relatively downstream decision points in the referral process. At these later stages, financial incentives may exert stronger effects on decision making than at earlier stages, when primary care providers and patients may be less inclined to choose an intervention. Thus, results that inform upstream decisions may have greater leverage in changing clinical practice.
5. Limited use of decision support
Clinical decision support tools and aids for shared decision making may promote treatment approaches that are better aligned with evidence from comparative effectiveness research, but these tools and aids are not widely used.