M. Managing Individual Bias and Conflict of Interest

HTA should seek to ensure that the credibility of its reports is not compromised by any significant biases or conflicts of interest. Bias and conflict of interest are distinct yet related concepts.

As described in chapter III, bias generally refers to any systematic deviation in an observation from the true nature of an event (e.g., a treatment effect in a clinical trial). Further, individual bias can refer to factors that might affect one’s interpretation of evidence or formulation of findings and recommendations. This form of bias has been defined as “views stated or positions taken that are largely intellectually motivated or arise from close identification or association of an individual with a particular point of view or the positions or perspectives of a particular group” (National Academies 2003). This may include positions taken in public statements, publications, or other media; institutional or professional affiliations; recognition for personal achievement; intellectual passion; political or ideological beliefs; or personal relationships (Knickrehm 2009). As long as such positions have some recognized scientific or policy-related merit, they need not disqualify a person from participating in research or in HTA. Indeed, to provide for a competent expert review panel or set of advisors for an assessment, it may be useful to represent a balance of potentially biased perspectives.

Although such stated views or positions are a potential source of bias, they do not necessarily pose a conflict of interest. However, when an individual also has a significant, directly related interest or role, such as leading a professional society, industry association, or advocacy organization that has taken the same fixed position, this may pose a conflict of interest (National Academies 2003). Conflict of interest guidelines often address matters of individual bias as well.

Conflict of interest (or competing interest) refers to “any financial or other interest which conflicts with the service of the individual [person] because it (1) could significantly impair the individual's objectivity or (2) could create an unfair competitive advantage for any person or organization” (National Academies 2003). Conflict of interest policies typically apply to current interests, rather than past or expired interests or possible interests that may arise in the future. In HTA, a conflict of interest could cause an individual to be biased in interpreting evidence or formulating findings and recommendations. In most instances, the existence of a significant conflict of interest pertaining to an HTA topic should disqualify an individual from participating in that HTA as a staff person, expert panel member, or consultant. However, persons with conflicts of interest may still provide information to an HTA process, including relevant evidence, background information, other perspectives, or comments on draft reports.

Biases and conflicts of interest are conditions, not behaviors (Smith 2006; Thompson 1993). That is, an individual does not have to act on a bias or conflict of interest for it to exist. The existence of a bias or conflict of interest gives an HTA process reason to address it, e.g., by seeking a balance of reasonable biases on an expert panel or by disqualifying individuals with significant conflicts of interest from participating in an HTA, as appropriate.

HTA should consider the potential for conflict of interest on at least three levels:

  • Sponsors of clinical trials and other studies that are part of the body of evidence under review
  • Investigators who conducted and reported on the clinical trials and other studies that are part of the body of evidence under review
  • Health technology assessors, including staff members, expert panel members, and other experts involved in reviewing the evidence and making findings and recommendations

1. Sponsors

Health technology companies and other sponsors of primary research typically determine or influence what research is conducted as well as such aspects as designation of the intervention and control treatments, endpoints, and follow-up periods, and whether research results are submitted for publication.

Financial conflicts of interest are common in clinical trials and other biomedical research. Industry sponsorship of research has been found to be associated with restrictions on publication and data sharing (Bekelman 2003). Clinical trials and cost-effectiveness analyses that are sponsored by industry yield positive (favorable) results more often than studies that are funded or conducted by others (Barbieri 2001; Chopra 2003; Friedberg 1999; Jang 2010). One potential reason for this discrepancy is that industry is more likely to withhold reports of studies with negative results (e.g., those that do not demonstrate a treatment effect). Another is that industry is more likely to sponsor studies (including RCTs) designed to increase the likelihood of positive results, i.e., where there is an expectation that one intervention (e.g., a new drug or diagnostic test) is superior to the alternative intervention (Polyzos 2011). In the case of RCTs, this latter tendency could undermine the principle of equipoise for enrolling patients, although some contend that this principle can be counterproductive to progress in clinical research (Djulbegovic 2009; Fries 2004; Veatch 2007).

An analysis of clinical trials listed in the ClinicalTrials.gov database found that health technology companies sponsor trials largely focused on their own products, while head-to-head comparisons with active interventions from other companies are rare. This diminishes the evidence base for assessing the relative benefits and harms of technologies for the same diseases (Lathyris 2010) and is one of the main reasons for the increased interest in comparative effectiveness research.

ClinicalTrials.gov helps protect against publication bias. Effective July 2005, the International Committee of Medical Journal Editors (ICMJE) established a requirement that, as a condition of consideration for publication, all clinical trials be entered in a public registry (not necessarily ClinicalTrials.gov) that meets specified criteria before the onset of patient enrollment. As such, a sponsor cannot wait to see the final results of a trial before deciding to submit a manuscript about it to participating journals (International Committee of Medical Journal Editors 2013).

2. Investigators

For study investigators, conflicts of interest may arise from having a financial interest (e.g., through salary support, ongoing consultancy, owning stock, owning a related patent) in a health care company (or one of its competitors) that may be affected by the results of a study or being an innovator of a technology under study. Investigator conflict of interest is reported to be prevalent among clinical trials in various fields of pharmaceutical therapy and to be associated with a greater likelihood of reporting a drug to be superior to placebo (Perlis 2005). A systematic review of research on financial conflicts of interest among biomedical researchers found that approximately one-fourth of investigators had industry affiliations, and two-thirds of academic institutions held equity in start-up companies that sponsored research performed at the same institutions (Bekelman 2003).

Investigators with conflicts are more likely to report positive findings. This may arise from such factors as preferential funding of research that is likely to report positive findings, biased study designs, investigators’ biased interpretation of results, or suppression of negative results (Okike 2008). As this research often appears in influential, “high-impact” journals, editors have adopted more systematic requirements for disclosure by investigators of their financial interests and of the funding sources of studies, and have applied greater scrutiny when potential conflicts arise (International Committee of Medical Journal Editors 1993; Kassirer 1993; Lo 2000; Jagsi 2009). Such requirements also have been applied to economic analyses (Kassirer 1994), although accompanied by controversy regarding whether certain sponsors (e.g., for-profit vs. not-for-profit) or methods (e.g., pharmacoeconomic modeling) are more acceptable than others (Schulman 1995; Steinberg 1995).

3. Health Technology Assessors

When interpreting the available evidence, health technology assessors should consider the existence of potential conflicts of interest that may have affected the conduct of a study or presentation of results. In addition, those participating in HTA should be subject to provisions that protect against their own potential conflicts of interest.

When interpreting evidence, HTA programs should consider information about a study’s sponsorship, investigators, or other factors that suggest the potential for conflict of interest. Studies subject to potential conflicts of interest may need to be given less weight or excluded from the body of evidence under consideration.

For purposes of those conducting or otherwise involved in HTA, INAHTA defines conflict of interest as:

A situation in which the private interests of someone involved in the assessment or evaluation process (e.g. interviewer, rater, scorer, evaluator) have an impact (either positive or negative) on the quality of the evaluation activities, the accuracy of the data, or the results of the evaluation (INAHTA 2006).

Financial conflicts may include holding stock in, serving as a consultant to, or receiving honoraria from health technology companies or other organizations (e.g., medical professional groups) with financial interests in particular medical procedures or other technologies. Conflicts may be personal, i.e., apply to individuals associated with the HTA program and their immediate family members. Conflicts may also be non-personal, e.g., financial benefits to one’s organization (e.g., university) or an industry-endowed fellowship held by an individual. Conflicts may be specific to the technology being assessed or non-specific, e.g., involving a different technology made by the same company that makes the one being assessed.

HTA programs should take active measures, including adoption and implementation of formal guidelines or requirements, to protect against potential conflicts of interest among their managers, analysts, and expert panel members (Fye 2003; Phillips 1994). Similar measures should apply, as appropriate, to HTA program consultants, contractors, and outside reviewers of draft HTA reports. For example, as part of its extensive conflict of interest policy, ECRI Institute, a US-based independent nonprofit organization that conducts HTA, examines each employee’s federal income tax return forms after they are filed to ensure that its employees do not own stock shares in medical device or pharmaceutical firms (ECRI Institute 2014). In addition to minimizing potential conflicts of interest, HTA programs should take active measures to minimize or balance bias among assessment teams and panel members.

HTA programs may have guidelines regarding when certain types of conflict affecting an individual require withdrawal (recusal) from the assessment process and when disclosure of the conflict is sufficient and participation is still permitted. This can involve various aspects or stages of HTA, including priority setting of HTA topics, selecting literature and data sources (including confidential versus open-access data) for assessment, and preparing the assessment report. The INAHTA Checklist for HTA Reports includes a question regarding whether an HTA report provides a statement on conflicts of interest among those who prepared the report and whether funding for the HTA was provided by sources other than the HTA agency’s usual budget (INAHTA 2007).

References for Chapter X

Anderson JE, Jorenby DE, Scott WJ, Fiore MC. Treating tobacco use and dependence: an evidence-based clinical practice guideline for tobacco cessation. Chest. 2002;121(3):932-41. PubMed

Asch SM, Sloss EM, Hogan C, Brook RH, Kravitz RL. Measuring underuse and necessary care among elderly Medicare beneficiaries using inpatient and outpatient claims. JAMA. 2000;284(18):2325-33. PubMed | PMC free article

Balint M, et al. Treatment or Diagnosis: A Study of Repeat Prescriptions in General Practice. Philadelphia, PA: JB Lippincott; 1970.

Barbieri M, Drummond MF. Conflict of interest in industry-sponsored economic evaluations: real or imagined? Curr Oncol Rep. 2001;3(5):410-3. PubMed

Basch E, Prestrud AA, Hesketh PJ, Kris MG, et al.; American Society of Clinical Oncology. Antiemetics: American Society of Clinical Oncology clinical practice guideline update. J Clin Oncol. 2011;29(31):4189-98. PubMed

Bastian H, Scheibler F, Knelangen M, Zschorlich B, et al. Choosing health technology assessment and systematic review topics: the development of priority-setting criteria for patients' and consumers' interests. Int J Technol Assess Health Care. 2011;27(4):348-56. PubMed

Beebe DB, Rosenfeld AB, Collins N. An approach to decisions about coverage of investigational treatments. HMO Practice. 1997;11(2):65-7. PubMed

Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003;289(4):454-65. PubMed

Berger A. High dose chemotherapy offers little benefit in breast cancer. BMJ. 1999 May 29;318(7196):1440. PubMed

Berger RL, Celli BR, Meneghetti AL, Bagley PH, et al. Limitations of randomized clinical trials for evaluating emerging operations: the case of lung volume reduction surgery. Ann Thorac Surg. 2001;72(2):649-57. PubMed

Berwick DM. What 'patient-centered' should mean: confessions of an extremist. Health Aff (Millwood). 2009;28(4):w555-65. PubMed

Bower JL, Christensen CM. Disruptive technologies: catching the wave. Harv Bus Rev. 1995;73:43-53.

Brenner M, Jones B, Daneschvar HL, Triff S. New National Emphysema Treatment Trial paradigm of health care financing administration-sponsored clinical research trials: advances and dilemmas. J Investig Med. 2002;50(2):95-100. PubMed

Brouwers MC, Kho ME, Browman GP, Burgers JS, et al.; AGREE Next Steps Consortium. AGREE II: advancing guideline development, reporting and evaluation in health care. J Clin Epidemiol. 2010;63(12):1308-11. PubMed

Carlson JJ, Sullivan SD, Garrison LP, Neumann PJ, Veenstra DL. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers. Health Policy. 2010;96(3):179-90. PubMed

Centers for Disease Control and Prevention (CDC). National and state vaccination coverage among adolescents aged 13 through 17 years — United States, 2012. MMWR Morb Mortal Wkly Rep. 2013;62(34):685-93. PubMed | Publisher online copy

Centers for Disease Control and Prevention (CDC). National, state, and local area vaccination coverage among children aged 19-35 months — United States, 2012. MMWR Morb Mortal Wkly Rep. 2013 Sep 13;62(36):733-40. PubMed | Publisher online copy

Centers for Disease Control and Prevention (CDC). Vital signs: awareness and treatment of uncontrolled hypertension among adults – United States, 2003-2010. MMWR Morb Mortal Wkly Rep. 2012;61(35):703-9. Based on data from the US National Health and Nutrition Examination Survey (NHANES) 2003-2010. PubMed | Publisher online copy

Chopra SS. Industry funding of clinical trials: Benefit or bias? JAMA. 2003;290(1):113. PubMed

Cochrane Consumer Network. About the Cochrane Consumer Network (CCNet). Accessed Jan. 2, 2014 at: http://consumers.cochrane.org/healthcare-users-cochrane

Cooper JD. Paying the piper: the NETT strikes a sour note. National Emphysema Treatment Trial. Ann Thorac Surg. 2001;72(2):330-3. PubMed

Danner M, Hummel JM, Volz F, van Manen JG, et al. Integrating patients' views into health technology assessment: Analytic hierarchy process (AHP) as a method to elicit patient preferences. Int J Technol Assess Health Care. 2011;27(4):369-75. PubMed

Deyo RA. Cascade effects of medical technology. Annu Rev Public Health. 2002;23:23-44. PubMed

Deyo RA, Psaty BM, Simon G, Wagner EH, Omenn GS. The messenger under attack - intimidation of researchers by special-interest groups. N Engl J Med. 1997;336(16):1176-80. PubMed

Domecq JP, Prutsky G, Elraiyah T, Wang Z, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. 2014;14:89. PubMed | PMC free article

Djulbegovic B. The paradox of equipoise: the principle that drives and limits therapeutic discoveries in clinical research. Cancer Control. 2009;16(4):342-7. PubMed | PMC free article

Donabedian A. Quality assessment and assurance: unity of purpose, diversity of means. Inquiry. 1988;25(1):173-92. PubMed

Ebell MH, Siwek J, Weiss BD, Woolf SH, et al. Strength of recommendation taxonomy (SORT): a patient-centered approach to grading evidence in the medical literature. J Am Board Fam Pract. 2004;17(1):59-67. PubMed | Publisher free article

ECRI Health Technology Assessment Information Service. High-dose chemotherapy with autologous bone marrow transplantation and/or blood cell transplantation for the treatment of metastatic breast cancer. Plymouth Meeting, PA: ECRI; 1995.

ECRI Institute. Policies and Mission Statement. Accessed February 1, 2014 at: https://www.ecri.org/About/Pages/institutepolicies.aspx.

Eichler HG, Bloechl-Daum B, Abadie E, Barnett D, et al. Relative efficacy of drugs: an emerging issue between regulatory agencies and third-party payers. Nat Rev Drug Discov. 2010;9(4):277-91. PubMed

Ell K, Katon W, Xie B, Lee PJ, et al. One-year postcollaborative depression care trial outcomes among predominantly Hispanic diabetes safety net patients. Gen Hosp Psychiatry. 2011;33(5):436-42. PubMed | PMC free article

Epstein RM, Street RL Jr. The values and value of patient-centered care. Ann Fam Med 2011;9(2):100-3. PubMed | PMC free article

EUnetHTA Joint Action WP5 – Relative Effectiveness Assessment (REA) of Pharmaceuticals – Model for Rapid Relative Effectiveness Assessment of Pharmaceuticals, 1 March 2013 – V3.0. Accessed December 11, 2013 at: http://www.eunethta.eu/sites/5026.fedimbo.belgium.be/files/Model%20for%20Rapid%20REA%20of%20pharmaceuticals_final_20130311_reduced.pdf.

European Patients’ Forum. Patient Involvement in Health Technology Assessment. (Undated) Accessed Dec. 1, 2013 at: http://www.eu-patient.eu/Documents/Projects/HTA/EPF-report_HTA-survey_HTA-agencies.pdf.

Facey K, Boivin A, Gracia J, Hansen HP, et al. Patients' perspectives in health technology assessment: a route to robust evidence and fair deliberation. Int J Technol Assess Health Care. 2010;26(3):334-40. PubMed

Federal Coordinating Council for Comparative Effectiveness Research. Report to the President and the Congress. Washington, DC: US Department of Health and Human Services, June 2009. Accessed December 5, 2016 at: http://osp.od.nih.gov/sites/default/files/resources/FCCCER%20Report%20to%20the%20President%20and%20Congress%202009.pdf.

Ferguson TB Jr, Peterson ED, Coombs LP, Eiken MC, et al. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: A randomized controlled trial. JAMA. 2003;290(1):49-56. PubMed

Fineberg HV. Keynote Address. Health Technology Assessment International 2009 Annual Meeting, Singapore; June 26, 2009.

Fletcher SW. Whither scientific deliberation in health policy recommendations? N Engl J Med. 1997;336(16):1180-3. PubMed

Foulds J. Effectiveness of smoking cessation initiatives. Smoking cessation services show good return on investment. BMJ. 2002 Mar 9;324(7337):608-9. PubMed

Friedberg M, Saffran B, Stinson TJ, et al. Evaluation of conflict of interest in economic analyses of new drugs used in oncology. JAMA. 1999;282(15):1453-7. PubMed

Fries JF, Krishnan E. Equipoise, design bias, and randomized controlled trials: the elusive ethics of new drug development. Arthritis Res Ther. 2004;6(3):R250-5. PubMed | PMC free article

Fuchs VR. The doctor's dilemma - what is "appropriate" care? N Engl J Med. 2011;365(7):585-7. PubMed

Fye WB. The power of clinical trials and guidelines, and the challenge of conflicts of interest. J Am Coll Cardiol. 2003;41(8):1237-42. PubMed

Gann MJ, Restuccia JD. Total quality management in health care: a view of current and potential research. Med Care Rev. 1994;51(4):467-500. PubMed

GAO (General Accounting Office). Medicare: Beneficiary use of clinical preventive services. Report to the Chairman, Subcommittee on Oversight and Investigations, Committee on Energy and Commerce, House of Representatives. GAO-02-422. Washington, DC; 2002.

Garrison LP Jr, Bresnahan BW, Higashi MK, et al. Innovation in diagnostic imaging services: assessing the potential for value-based reimbursement. Acad Radiol. 2011;18(9):1109-14. PubMed

Glaeske G. The dilemma between efficacy as defined by regulatory bodies and effectiveness in clinical practice. Dtsch Arztebl Int. 2012;109(7):115-6. PubMed | PMC free article

Goodman CS. Healthcare technology assessment: methods, framework, and role in policy making. Am J Manag Care. 1998;4:SP200-14. PubMed | Publisher free article

Goodman C, Snider G, Flynn K. Health Care Technology Assessment in VA. Boston, Mass: Management Decision and Research Center. Washington, DC: Health Services Research and Development Service; 1996.

Green C. Considering the value associated with innovation in health technology appraisal decisions (deliberations): a NICE thing to do? Appl Health Econ Health Policy. 2010;8(1):1-5. PubMed

Hailey D. A preliminary survey on the influence of rapid health technology assessments. Int J Technol Assess Health Care. 2009;25(3):415-8. PubMed

Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10(4):397-410. PubMed

Health Equality Europe. Understanding Health Technology Assessment (HTA). July 2008. Accessed Jan. 2, 2014 at: http://www.htai.org/fileadmin/HTAi_Files/ISG/PatientInvolvement/EffectiveInvolvement/HEEGuideToHTAforPatientsEnglish.pdf.

Heidenreich PA, Trogdon JG, Khavjou OA, et al. Forecasting the future of cardiovascular disease in the United States: a policy statement from the American Heart Association. Circulation. 2011;123(8):933–44. PubMed | Publisher free article

Henshall C, Koch P, von Below GC, Boer A, et al. Health technology assessment in policy and practice. Int J Technol Assess Health Care. 2002;18(2):447-55. PubMed

Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Accessed Sept. 1, 2013 at: http://handbook.cochrane.org.

Hinman AR, Orenstein WA, Santoli JM, Rodewald LE, Cochi SL. Vaccine shortages: history, impact, and prospects for the future. Annu Rev Public Health. 2006;27:235-59. PubMed

Hoffman B. Is there a technological imperative in health care? Int J Technol Assess Health Care. 2002;18(3):675-89.

HTAi Patient and Citizen Involvement Interest Sub-Group. Good Practice Examples of PPI. 2012. Accessed Jan. 2, 2014 at: http://www.htai.org/fileadmin/HTAi_Files/ISG/PatientInvolvement/Materials/Good_Practice_Examples.doc.

HTAi Patient and Citizen Involvement Interest Sub-Group. PIE Good Practice Principles for Patient Involvement in Health Technology Assessment—Draft. August 2012. Accessed Jan. 2, 2014 at: http://www.htai.org/fileadmin/HTAi_Files/ISG/PatientInvolvement/Materials/PIE_principles_2012_august.pdf.

HTAi Patient and Citizen Involvement Interest Sub-Group. Good Practice Examples of Patient and Public Involvement in Health Technology Assessment. Sept. 2013. Accessed Jan. 2, 2014 at: http://www.htai.org/fileadmin/HTAi_Files/ISG/PatientInvolvement/GeneralSIGdocuments/Good_Practice_Examples_September_2013.pdf.

Hu Q, Schwarz LB, Uhan NA. The impact of group purchasing organizations on healthcare-product supply chains. MSOM. 2012;14(1):7-23. PubMed

Hudon C, Fortin M, Haggerty JL, et al. Measuring patients' perceptions of patient-centered care: a systematic review of tools for family medicine. Ann Fam Med. 2011;9(2):155-64. PubMed | PMC free article

Hutton J, Trueman P, Henshall C. Coverage with evidence development: an examination of conceptual and policy issues. Int J Technol Assess Health Care. 2007;23(4):425-32. PubMed

International Committee of Medical Journal Editors. Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Publishing and Editorial Issues Related to Publication in Biomedical Journals: Obligation to Register Clinical Trials. 2013. Accessed June 20, 2014 at: http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html.

INAHTA. International Network of Agencies in Health Technology Assessment Secretariat. A Checklist for Health Technology Assessment Reports. Version 3.2. August 2007. Accessed February 29, 2016 at: http://www.inahta.org/wp-content/uploads/2014/04/INAHTA_HTA_Checklist_English.pdf.

INAHTA. International Network of Agencies for Health Technology Assessment. Health Technology Assessment (HTA) Glossary. First Edition. INAHTA Secretariat, c/o SBU, Stockholm, July 5, 2006. Accessed June 20, 2013 at: http://medweb4.bham.ac.uk/websites/wmhtac/handbook/sops/pdfs/INAHTA_glossary2006.pdf.

Institute of Medicine. Committee on Comparative Effectiveness Prioritization. Initial National Priorities for Comparative Effectiveness Research. Washington, DC: National Academies Press; 2009. http://books.nap.edu/openbook.php?record_id=12648.

Institute of Medicine, Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001. http://books.nap.edu/openbook.php?record_id=10027.

International Committee of Medical Journal Editors. Conflict of interest. Lancet. 1993;341(8847):742-3. PubMed

Jacobs BL, Zhang Y, Schroeck FR, et al. Use of advanced treatment technologies among men at low risk of dying from prostate cancer. JAMA. 2013;309(24):2587-95. PubMed | PMC free article

Jagsi R, Sheets N, Jankovic A, Motomura AR, Amarnath S, Ubel PA. Frequency, nature, effects, and correlates of conflicts of interest in published clinical cancer research. Cancer. 2009;115(12):2783-91. PubMed | Publisher free article

Jang S, Chae YK, Haddad T, Majhail NS. Conflict of interest in economic analyses of aromatase inhibitors in breast cancer: a systematic review. Breast Cancer Res Treat. 2010;121(2):273-9. PubMed

Kaden RJ, Vaul JH, Palazola PA. Negotiating payment for new technology purchases. Healthc Financ Manage. 2002;56(12):44-8. PubMed

Kassirer JP, Angell M. Financial conflicts of interest in biomedical research. N Engl J Med. 1993;329(8):570-1. PubMed | Publisher free article

Kassirer JP, Angell M. The journal's policy on cost-effectiveness analyses. N Engl J Med. 1994;331(10):669-70. PubMed | Publisher free article

Kennedy I. Appraising the Value of Innovation and Other Benefits. A Short Study for NICE. July 2009. Accessed February 29, 2016 at: https://www.nice.org.uk/Media/Default/About/what-we-do/Research-and-development/Kennedy-study-final-report.pdf.

Khangura S, Konnyu K, Cushman R, et al. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10. PubMed | PMC free article

Kim S, Losina E, Solomon DH, Wright J, Katz JN. Effectiveness of clinical pathways for total knee and total hip arthroplasty: literature review. J Arthroplasty. 2003;18(1):69-74. PubMed | PMC free article

Klemp M, Frønsdal KB, Facey K; HTAi Policy Forum. What principles should govern the use of managed entry agreements? Int J Technol Assess Health Care 2011;27(1):77-83. PubMed

Knickrehm S. Non-Financial Conflicts of Interest. Slide Presentation from the AHRQ 2009 Annual Conference. December 2009. Agency for Healthcare Research and Quality, Rockville, MD. Accessed February 29, 2016 at: http://archive.ahrq.gov/news/events/conference/2009/knickrehm/index.html.

Kravitz RL, Duan N, Braslow J. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. Milbank Q 2004;82(4):661-87. PubMed | PMC free article

Kreis J, Schmidt H. Public engagement in health technology assessment and coverage decisions: a study of experiences in France, Germany, and the United Kingdom. J Health Polit Policy Law. 2013;38(1):89-122. PubMed

Kwan J, Sandercock P. In-hospital care pathways for stroke: a Cochrane systematic review. Stroke. 2003;34(2):587-8. PubMed | Publisher free article

Lathyris DN, Patsopoulos NA, Salanti G, Ioannidis JP. Industry sponsorship and selection of comparators in randomized clinical trials. Eur J Clin Invest. 2010;40(2):172-82. PubMed

Lee A, Skött LS, Hansen HP. Organizational and patient-related assessments in HTAs: state of the art. Int J Technol Assess Health Care. 2009;25(4):530-6. PubMed

The Lewin Group. The Clinical Review Process Conducted by Group Purchasing Organizations and Health Systems. Prepared for the Health Industry Group Purchasing Association, April 2002.

The Lewin Group. Outlook for Medical Technology Innovation. Report 2: The Medicare Payment Process and Patient Access to Technology. Washington, DC: AdvaMed; 2000.

Lo B, Wolf LE, Berkeley A. Conflict-of-interest policies for investigators in clinical trials. N Engl J Med. 2000;343(22):1616-20. PubMed

Lohr KN, ed. Institute of Medicine. Medicare: a Strategy for Quality Assurance. Volume I. Washington, DC. National Academy Press; 1990. http://www.nap.edu/openbook.php?record_id=1547&page=1.

Lohr KN, Rettig RA, eds. Quality of Care and Technology Assessment. Report of a Forum of the Council on Health Care Technology. Washington, DC: National Academy Press; 1988.

Mangione-Smith R, DeCristofaro AH, Setodji CM, Keesey J, et al. The quality of ambulatory care delivered to children in the United States. N Engl J Med. 2007;357(15):1515-23. PubMed | PMC free article

McDonald IG. Quality assurance and technology assessment: pieces of a larger puzzle. J Qual Clin Pract. 2000;20(2-3):87-94. PubMed

McGivney WT. Proposal for assuring technology competency and leadership in medicine. J Natl Cancer Inst. 1992;84(10):742-5. PubMed

McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-45. PubMed

McNeil BJ. Shattuck Lecture - Hidden barriers to improvement in the quality of care. N Engl J Med. 2001;345(22):1612-20. PubMed

Mead N, Bower P. Patient-centredness: a conceptual framework and review of the empirical literature. Soc Sci Med 2000;51(7):1087-110. PubMed

Medical Technology Leadership Forum. MTLF Summit: Conditional Coverage of Investigational Technologies. Prepared by The Lewin Group. Washington, DC; October 1999.

Mello MM, Brennan TA. The controversy over high-dose chemotherapy with autologous bone marrow transplant for breast cancer. Health Aff (Millwood). 2001;20(5):101-17. PubMed | Publisher free article

Methodology Committee of the Patient-Centered Outcomes Research Institute. Methodological standards and patient-centeredness in comparative effectiveness research. The PCORI perspective. JAMA. 2012;307(15):1636-40. PubMed

Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(14)-EHC063-EF. Rockville, MD: Agency for Healthcare Research and Quality. January 2014. Accessed Feb. 1, 2014 at: http://effectivehealthcare.ahrq.gov/ehc/products/60/318/CER-Methods-Guide-140109.pdf.

Miller D, Rudick RA, Hutchinson M. Patient-centered outcomes: translating clinical efficacy into benefits on health-related quality of life. Neurology. 2010 Apr 27;74 Suppl 3:S24-35. PubMed

Miller FG, Pearson SD. Coverage with evidence development: ethical issues and policy implications. Med Care 2008;46(7):746-51. PubMed

Nakamura C, Bromberg M, Bhargava S, Wicks P, Zeng-Treitler Q. Mining online social network data for biomedical research: a comparison of clinicians' and patients' perceptions about amyotrophic lateral sclerosis treatments. J Med Internet Res. 2012;14(3):e90. PubMed | PMC free article

National Academies. Policy on Committee Composition and Balance and Conflicts of Interest for Committees Used in the Development of Reports. May 12, 2003. Accessed December 13, 2013 at: http://www.nationalacademies.org/coi/bi-coi_form-0.pdf.

National Institute for Health and Care Excellence (NICE). Citizens Council. Accessed Jan. 2, 2014 at: http://www.nice.org.uk/aboutnice/howwework/citizenscouncil/citizens_council.jsp.

National Institute for Health and Care Excellence (NICE). Guide to the Methods of Technology Appraisal. Accessed Jul. 12, 2016 at: https://www.nice.org.uk/process/pmg9/chapter/6-involvement-and-participation#patient-and-carer-groups.

Neumann PJ. What we talk about when we talk about health care costs. N Engl J Med. 2012;366(7):585-6. PubMed

Okike K, Kocher MS, Mehlman CT, Bhandari M. Industry-sponsored research. Injury. 2008;39(6):666-80. PubMed

Oxman AD, Guyatt G. A consumer’s guide to subgroup analyses. Ann Intern Med 1992;116(1):76-84. PubMed

Patient-Centered Outcomes Research Institute. Patient-centered outcomes research. 2013. Accessed December 13, 2013 at: http://pcori.org/research-we-support/pcor.

Pearson SD, Bach PB. How Medicare could use comparative effectiveness research in deciding on new coverage and reimbursement. Health Aff (Millwood). 2010;29(10):1796-804. PubMed | Publisher free article

Perlis RH, Perlis CS, Wu Y, et al. Industry sponsorship and financial conflict of interest in the reporting of clinical trials in psychiatry. Am J Psychiatry. 2005;162(10):1957-60. PubMed

Phillips WR. Clinical policies: making conflicts of interest explicit. Task force on clinical policies for patient care. American Academy of Family Physicians. JAMA. 1994;272(19):1479. PubMed

Pilnick A, Dingwall R, Starkey K. Disease management: definitions, difficulties and future directions. Bull World Health Organ. 2001;79(8):755-63. PubMed | PMC free article

Polyzos NP, Valachis A, Mauri D, Ioannidis JP. Industry involvement and baseline assumptions of cost-effectiveness analyses: diagnostic accuracy of the Papanicolaou test. CMAJ. 2011;183(6):E337-43. PubMed | PMC free article

Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477-81. PubMed

Reda AA, Kotz D, Evers SM, van Schayck CP. Healthcare financing systems for increasing the use of tobacco dependence treatment. Cochrane Database Syst Rev. 2012 Jun 13;(6):CD004305. PubMed

Rettig RA. Health Care in Transition: Technology Assessment in the Private Sector. Santa Monica, CA: RAND; 1997. Publisher free publication

Rothrock NE, Hays RD, Spritzer K, Yount SE, et al. Relative to the general US population, chronic diseases are associated with poorer health-related quality of life as measured by the Patient-Reported Outcomes Measurement Information System (PROMIS). J Clin Epidemiol. 2010;63(11):1195-204. PubMed | PMC free article

Schauffler HH, Barker DC, Orleans CT. Medicaid coverage for tobacco-dependence treatments. Health Aff (Millwood). 2001;20(1):298-303. PubMed | Publisher free article

Schulman K. Cost-effectiveness analyses. N Engl J Med. 1995;332(2):124. PubMed

Sharf BF. Out of the closet and into the legislature: breast cancer stories. Health Aff (Millwood). 2001;20(1):213-8. PubMed | Publisher free article

Sheingold SH. Technology assessment, coverage decisions, and conflict: the role of guidelines. Am J Manag Care. 1998;4:SP117-25. PubMed | Publisher free article

Smith R. Conflicts of interest: how money clouds objectivity. J R Soc Med. 2006;99(6):292-7. PubMed | PMC free article

Stead LF, Perera R, Bullen C, Mant D, et al. Nicotine replacement therapy for smoking cessation. Cochrane Database Syst Rev. 2012 Nov 14;11:CD000146. PubMed

Steinberg EP. Cost-effectiveness analyses. N Engl J Med. 1995;332(2):123. PubMed | Publisher free article

Steinberg EP, Tunis S, Shapiro D. Insurance coverage for experimental technologies. Health Aff (Millwood). 1995;14(4):143-58. PubMed | Publisher free article

Stewart M, et al. Patient-Centered Medicine: Transforming the Clinical Method. 3rd ed. United Kingdom: Radcliffe Health; 2013.

Street RL Jr, Elwyn G, Epstein RM. Patient preferences and healthcare outcomes: an ecological perspective. Expert Rev Pharmacoecon Outcomes Res. 2012;12(2):167-80. PubMed

Thompson DF. Understanding financial conflicts of interest. N Engl J Med. 1993;329(8):573-6. PubMed

Trueman P, Grainger DL, Downs KE. Coverage with evidence development: applications and issues. Int J Technol Assess Health Care 2010;26(1):79-85. PubMed

US Food and Drug Administration. Guidance for Industry. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. December 2009. Accessed January 29, 2015 at: http://www.fda.gov/downloads/Drugs/Guidances/UCM193282.pdf

Veatch RM. The irrelevance of equipoise. J Med Philos. 2007;32(2):167-83. PubMed

von Below GC, Boer A, Conde-Olasagasti JL, Dillon A, et al. Health technology assessment in policy and practice. Working Group 6 Report. Int J Technol Assess Health Care. 2002;18(2):447-55. PubMed

Wakefield DS, Wakefield BJ. Overcoming barriers to implementation of TQM/CQI in hospitals: myths and realities. QRB. Quality Review Bulletin. 1993;19(3):838. PubMed

Wang R, Lagakos SW, Ware JH, Hunter DJ, Drazen JM. Statistics in medicine--reporting of subgroup analyses in clinical trials. N Engl J Med. 2007;357(21):2189-94. PubMed | Publisher free article

Watt A, Cameron A, Sturm L, Lathlean T, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133-9. PubMed

Whitty JA. An international survey of the public engagement practices of health technology assessment organizations. Value Health. 2013;16(1):155-63. PubMed

Wilson JM. It's time for gene therapy to get disruptive! Hum Gene Ther. 2012;23(1):1-3. PubMed

Wood DE, DeCamp MM. The National Emphysema Treatment Trial: a paradigm for future surgical trials. Ann Thorac Surg. 2001;72(2):327-9. PubMed

Woolacott NF, Jones L, Forbes CA, et al. The clinical effectiveness and cost-effectiveness of bupropion and nicotine replacement therapy for smoking cessation: a systematic review and economic evaluation. Health Technol Assess. 2002;6(16):1-245. PubMed | Publisher free article

Wong MK, Mohamed AF, Hauber AB, Yang JC, et al. Patients rank toxicity against progression free survival in second-line treatment of advanced renal cell carcinoma. J Med Econ. 2012;15(6):1139-48. PubMed

Zandbelt LC, Smets EM, Oort FJ, et al. Medical specialists' patient-centered communication and patient-reported outcomes. Med Care. 2007;45(4):330-9. PubMed

Glossary

Absolute risk reduction: a measure of treatment effect that compares the probability (or mean) of a type of outcome in the control group with that of a treatment group (i.e., Pc - Pt, or µc - µt). For instance, if the results of a trial were that the probability of death in a control group was 25% and the probability of death in a treatment group was 10%, the absolute risk reduction would be (0.25 - 0.10) = 0.15. (See also number needed to treat, odds ratio, and relative risk reduction.)
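
The arithmetic in this entry can be sketched in Python; the related number needed to treat (NNT = 1/ARR) is included for illustration (variable names are illustrative):

```python
import math

# Worked example from this entry: absolute risk reduction (ARR) and the
# related number needed to treat (NNT = 1 / ARR).
p_control = 0.25   # probability of death in the control group
p_treated = 0.10   # probability of death in the treatment group

arr = p_control - p_treated       # 0.25 - 0.10 = 0.15
nnt = 1 / arr                     # patients treated to prevent one death

print(f"ARR = {arr:.2f}, NNT = {nnt:.1f} (round up to {math.ceil(nnt)})")
# -> ARR = 0.15, NNT = 6.7 (round up to 7)
```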

Accuracy: the degree to which a measurement (e.g., the mean estimate of a treatment effect) is true or correct. An estimate can be accurate, yet not be precise, if it is based on an unbiased method that provides observations having great variation or random error (i.e., not close in magnitude to each other). (Contrast with precision.)

Adaptive licensing: (or progressive licensing) refers to proposals for prospectively planned, stepwise, and adaptive approaches to market approval for drugs, biologics, and other regulated technologies. Recognizing that the evidence available at the time of conventional market approval for many technologies is often insufficient for well-informed clinical decisions, adaptive licensing is intended to improve tradeoffs of timely patient access with accumulating evidence on safety and effectiveness. These approaches involve iterative steps of data collection, regulatory review, and modification of licensing (or labeling). For example, this would enable earlier approval (at phase II or even phase I) of a drug for narrowly defined indications while RCTs or other trials continue to generate confirmatory data, data for broader indications, or data in real-world settings that could be used to modify licensing.

Adaptive clinical trial: a form of trial that uses accumulating data to determine how to modify the design of ongoing trials according to a pre-specified plan. Intended to increase the quality, speed, and efficiency of trials, adaptive trials typically involve interim analyses, changes to sample size, changes in randomization to treatment arms and control groups, and changes in dosage or regimen of a drug or other technology.

Adherence: (or compliance or concordance) a measure of the extent to which patients undergo, continue to follow, or persist with a treatment or regimen as prescribed, e.g., taking drugs, undergoing a medical or surgical procedure, doing an exercise regimen, or abstaining from smoking.

Allocation concealment: refers to the process of ensuring that the persons assessing patients for potential entry into a trial, as well as the patients themselves, do not know whether any particular patient will be allocated to an intervention group or control group. This diminishes selection bias by preventing the persons who are managing patient allocation, or the patients, from influencing (intentionally or not) patient assignment to one group or another. Centralized randomization (i.e., managed at one site rather than at each enrollment site) using certain techniques is a preferred method of ensuring allocation concealment. (This is distinct from blinding of patients, providers, and others.)

Alpha (α): the probability of a Type I (false-positive) error. In hypothesis testing, the α-level is the threshold for defining statistical significance. For instance, setting α at a level of 0.05 implies that investigators accept that there is a 5% chance of concluding incorrectly that an intervention is effective when it has no true effect. The α-level is commonly set at 0.01, 0.05, or 0.10.
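
A quick simulation (illustrative, Python standard library only; not from the source) shows what α = 0.05 means in practice: when the null hypothesis is true, roughly 5% of trials still cross the significance threshold, and those are the Type I errors:

```python
import random
import statistics

# Simulation sketch: simulate many "trials" in which treatment and control
# arms are drawn from the SAME distribution (no true effect), then count how
# often a two-sided test at alpha = 0.05 rejects the null anyway.
random.seed(0)

def z_for_mean_difference(a, b):
    # Large-sample two-sample z statistic for a difference in means.
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

n_trials, n_per_arm, rejections = 2000, 50, 0
for _ in range(n_trials):
    control = [random.gauss(0, 1) for _ in range(n_per_arm)]
    treated = [random.gauss(0, 1) for _ in range(n_per_arm)]  # same true mean
    if abs(z_for_mean_difference(treated, control)) > 1.96:   # alpha = 0.05
        rejections += 1

print(f"Observed false-positive rate: {rejections / n_trials:.3f}")  # near 0.05
```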

Attrition bias: refers to systematic differences between comparison groups in withdrawals (drop-outs) from a study, loss to follow-up, or other exclusions of patients and how these losses are analyzed. Ignoring these losses or accounting for them differently between groups can skew study findings, as patients who withdraw or are lost to follow-up may differ systematically from those patients who remain for the duration of the study. Patients’ awareness of whether they have been assigned to a particular treatment or control group may differentially affect their likelihood of dropping out of a trial. Techniques for diminishing attrition bias include blinding of patients as to treatment assignment, completeness of follow-up data for all patients, and intention-to-treat analysis (with imputations for missing data as appropriate).

Bayesian clinical trial: a type of adaptive clinical trial that allows for assessment of results during the course of the trial and modifying the trial design to arrive at results more efficiently. Such modifications during trials may include, e.g., changing the ratio of randomization to treatment arms to favor what appear to be more effective therapies, adding or eliminating treatment arms, changing enrollee characteristics to focus on patient subgroups that appear to be better responders, and slowing or stopping enrollment as certainty increases about treatment effects. Bayesian clinical trials are based on Bayesian statistics.

Bayesian statistics: a branch of statistics that involves learning from evidence as it accumulates. It is based on Bayes’ Rule (or Bayes’ Theorem), a mathematical equation that expresses how the prior (initial) probability of an event (or the probability that a hypothesis is true or the distribution for an unknown quantity) changes to a posterior (updated) probability given relevant new evidence. For example, in the diagnosis of a condition in an individual patient, a prior probability of a diagnosis may be based on the known prevalence of that condition in a relevant population. This can be updated to a posterior probability based on whether the result of a diagnostic test for that condition in that patient is positive or negative.
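
The diagnostic example in this entry can be sketched numerically; the prevalence, sensitivity, and specificity values below are illustrative assumptions, not from the source:

```python
# Bayes' Rule sketch: updating a prior probability of disease (prevalence)
# to a posterior probability after observing a positive test result.
prevalence = 0.01     # prior: P(condition present)
sensitivity = 0.90    # P(test positive | condition present)
specificity = 0.95    # P(test negative | condition absent)

# Total probability of a positive test (true positives + false positives)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior: P(condition present | positive test)
posterior = sensitivity * prevalence / p_positive

print(f"Posterior probability given a positive test: {posterior:.3f}")
```

Even with a fairly accurate test, a low prior (1% prevalence) yields a posterior of only about 15%, which is why the prior matters.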

Benchmarking: a quality assurance process in which an organization sets goals and measures its performance in comparison to those of the products, services, and practices of other organizations that are recognized as leaders.

Best evidence: refers, in general, to evidence that best attains certain quality criteria for internal and external validity. This also refers to a principle that a desire to base health care decisions and policies only on evidence generated from the best study designs for establishing internal and external validity should not preclude using the best evidence that is available from other study designs. That is, the “best evidence” may be the best available evidence that is relevant for the evidence questions of interest. This does not necessarily mean that the best available evidence should be designated as being of high quality.

Beta (β): the probability of a Type II (false-negative) error. In hypothesis testing, β is the probability of concluding incorrectly that an intervention is not effective when it has true effect. (1-β) is the power to detect an effect of an intervention if one truly exists.

Bias: in general, a systematic (i.e., not due to random error) deviation in an observation from the true nature of an event. In clinical trials, bias may arise from any factor other than the intervention of interest that systematically distorts the magnitude of an observed treatment effect from the true effect. Bias diminishes the accuracy (though not necessarily the precision) of an observation. Biases may arise from inadequacies in the design, conduct, analysis, or reporting of a study. Among the main forms of bias are selection bias, performance bias, detection bias, attrition bias, reporting bias, and publication bias. Confounding of treatment effects can arise from various sources of bias.

Bias also refers to factors that may affect an individual’s interpretation of evidence or formulation of findings and recommendations, i.e., “views stated or positions taken that are largely intellectually motivated or arise from close identification or association of an individual with a particular point of view or the positions or perspectives of a particular group” (National Academies 2003). This may include positions taken in public statements, publications, or other media; institutional or professional affiliations; recognition for personal achievement; intellectual passion; political or ideological beliefs; or personal relationships.

Bibliographic database: an indexed computer or printed source of citations of journal articles and other reports in the literature. Bibliographic citations typically include author, title, source, abstract, and/or related information (including full text in some cases). Examples are MEDLINE and EMBASE.

Biomarker: (or biological marker) an objectively measured variable or trait that is used as an indicator of a normal biological process, a disease state, or effect of a treatment. It may be a physiological measurement (height, weight, blood pressure, etc.), blood component or other biochemical assay (red blood cell count, viral load, HbA1c level, etc.), genetic data (presence of a specific genetic mutation), or measurement from an image (coronary artery stenosis, cancer metastases, etc.). (See also intermediate endpoint and surrogate endpoint.)

Blinding: the process of preventing one or more of patients, clinicians, investigators, and data analysts from knowing whether individual patients are receiving the investigational intervention(s) or the control (or standard) intervention(s) in a clinical trial. (Also known as masking.) Blinding is intended to eliminate the possibility that knowledge of which intervention is being received will affect patient outcomes, investigator behaviors that may affect outcomes, or assessment of outcomes. Blinding is not always practical (e.g. when comparing surgery to drug treatment), but it should be used whenever it is possible and compatible with optimal patient care. The terms “single-blinded,” “double-blinded,” and “triple-blinded” refer to which parties are blinded, e.g., one or more of patients, investigators, and data analysts; however, these terms are used inconsistently and the specific parties who are blinded in a trial should be identified.

Budget impact analysis (BIA): determines how implementing or adopting a particular technology or technology-related policy will affect a designated budget, e.g., of a drug formulary or health plan. A BIA typically does not account for the broad economic impact (e.g., societal impact) of implementing or adopting the technology. A BIA can be conducted simply to determine the impact of alternative technologies or programs on a budget, or it could be conducted to determine whether, or how much of, a technology or program (or combination of these) could be implemented subject to resource constraints, such as a fixed (or “capped”) budget.

Case-control study: a retrospective observational study designed to determine the relationship between a particular outcome of interest (e.g., disease or condition) and a potential cause (e.g., an intervention, risk factor, or exposure). Investigators identify a group of patients with a specified outcome (cases) and a group of patients without the specified outcome (controls). Investigators then compare the histories of the cases and the controls to determine the rate or level at which each group experienced a potential cause. As such, this study design leads from outcome (disease or condition) to cause (intervention, risk factor, or exposure).

Case series: see series.

Case study: an uncontrolled (prospective or retrospective) observational study involving an intervention and outcome in a single patient. (Also known as a single case report or anecdote.)

Causal pathway: also known as an analytical framework, a depiction (e.g., in a schematic) of direct and indirect linkages between interventions and outcomes. For a clinical problem, a causal pathway typically includes a patient population, one or more alternative interventions (e.g., screening, diagnosis, and/or treatment), intermediate outcomes (e.g., biological markers), and health outcomes. Causal pathways are intended to provide clarity and explicitness in defining the questions to be addressed in an assessment; they are useful in identifying pivotal linkages for which evidence may be lacking.

Citation: the record of an article, book, or other report in a bibliographic database that includes summary descriptive information, e.g., authors, title, abstract, source, and indexing terms.

Clinical endpoint: an event or other outcome that can be measured objectively to determine whether an intervention achieved its desired impact on patients. Usual clinical endpoints are mortality (death), morbidity (disease progression), symptom relief, quality of life, and adverse events. These are often categorized as primary (of most importance) endpoints and secondary (additional though not of greatest interest) endpoints.

Clinical pathway: a multidisciplinary set of daily prescriptions and outcome targets for managing the overall care of a specific type of patient, e.g., from pre-admission to post-discharge for patients receiving inpatient care. Clinical pathways often are intended to maintain or improve quality of care and decrease costs of patient care in particular diagnosis-related groups.

Clinical practice guidelines: a systematically developed statement to assist practitioner and patient decisions about appropriate health care for one or more specific clinical circumstances. The development of clinical practice guidelines can be considered to be a particular type of HTA; or, it can be considered to be one of the types of policymaking that is informed or supported by HTA.

Clinical registry trial: a type of multicenter trial design using existing online registries as an efficient platform to conduct patient assignment to treatment and control groups, maintain case records, and conduct follow-up. Such trials that randomize patient assignment to treatment and control groups are randomized clinical registry trials (see Fröbert 2010).

Clinical significance: a conclusion that an intervention has an effect that is of practical meaning to patients and health care providers. Even though an intervention is found to have a statistically significant effect, this effect might not be clinically significant. In a trial with a large number of patients, a small difference between treatment and control groups may be statistically significant but clinically unimportant. In a trial with few patients, an important clinical difference may be observed that does not achieve statistical significance. (A larger trial may be needed to confirm that this is a statistically significant difference.)

Cluster randomized trials: trials that randomize assignment of interventions at the level of natural groups or organizations rather than at the level of patients or other individuals. The clusters may be a set of clinics, hospitals, nursing homes, schools, communities, or geographic regions that are randomized to receive one or more interventions and comparators. Such designs are used when it is not feasible to randomize individuals or when an intervention is designed to be delivered at a group or social level, such as a workplace-based smoking cessation campaign. These are also known as group, place, or community randomized trials.

Cohort study: an observational study in which outcomes in a group of patients that received an intervention are compared with outcomes in a similar group i.e., the cohort, either contemporary or historical, of patients that did not receive the intervention. In an adjusted- (or matched-) cohort study, investigators identify (or make statistical adjustments to provide) a cohort group that has characteristics (e.g., age, gender, disease severity) that are as similar as possible to the group that experienced the intervention.

Comparative effectiveness research (CER): generation and synthesis of evidence comparing the benefits and harms of technologies, with the attributes of: direct (“head-to-head”) comparisons, effectiveness in real-world health care settings, health care outcomes (as opposed to surrogate or other intermediate endpoints), and ability to identify different treatment effects in patient subgroups. CER can draw on a variety of complementary study designs and analytical methods.

Concealment of allocation: the process used to assign patients to alternative groups in an RCT in a manner that prevents foreknowledge (by the person managing the allocation as well as the patients) of this assignment. Medical record numbers, personal identification numbers, or birthdays are not adequate for concealment of allocation. Certain centralized randomization schemes and sequentially numbered sealed, opaque envelopes are among adequate methods of allocation concealment.

Concurrent nonrandomized control: a control group that is observed by investigators at the same time as the treatment group, but that was not established using random assignment of patients to control and treatment groups. Differences in the composition of the treatment and control groups may result.

Concurrent validity: refers to how well a measure correlates with a previously validated one, and the ability of a measure to accurately differentiate between different groups at the time the measure is applied.

Confidence interval: depicts the range of uncertainty about an estimate of a treatment effect. It is calculated from the observed differences in outcomes of the treatment and control groups and the sample size of a study. The confidence interval (CI) is the range of values above and below the point estimate that is likely to include the true value of the treatment effect. The use of CIs assumes that a study provides one sample of observations out of many possible samples that would be derived if the study were repeated many times. Investigators typically use CIs of 90%, 95%, or 99%. For instance, a 95% CI indicates that there is a 95% probability that the CI calculated from a particular study includes the true value of a treatment effect. If the interval includes a null treatment effect (usually 0.0, but 1.0 if the treatment effect is calculated as an odds ratio or relative risk), the null hypothesis of no true treatment effect cannot be rejected.
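
A minimal sketch of this calculation for a difference in proportions, using the large-sample normal approximation and z = 1.96 for a 95% CI (the event probabilities and sample sizes are illustrative assumptions):

```python
import math

# 95% confidence interval for a difference in proportions between a
# treatment group and a control group (large-sample normal approximation).
p_t, n_t = 0.10, 200   # treatment group: event probability, sample size
p_c, n_c = 0.25, 200   # control group: event probability, sample size

diff = p_t - p_c
se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"95% CI for the difference: ({lo:.3f}, {hi:.3f})")
# The interval excludes 0.0 (the null treatment effect), so the null
# hypothesis of no true treatment effect can be rejected at the 5% level.
```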

Conflict of interest: (or competing interest) refers to “any financial or other interest which conflicts with the service of the individual because it (1) could significantly impair the individual's objectivity or (2) could create an unfair competitive advantage for any person or organization” (National Academies 2003). Conflict of interest means something more than individual bias; it involves an interest, usually financial, that could directly affect the conduct of HTA.

Confounding: occurs when any factor that is associated with an intervention has an impact on an outcome that is independent of the impact of the intervention. In controlled clinical trials, confounding refers to circumstances in which the observed treatment effect of an intervention is biased due to a difference between the intervention and control groups, such as a difference in baseline risk factors at the start of a trial or different exposures during the trial that could affect outcomes.

Consensus development: various forms of group judgment in which a group (or panel) of experts interacts in assessing an intervention and formulating findings by vote or other process of reaching general agreement. These processes may be informal or formal, involving such techniques as the nominal group and Delphi techniques.

Construct validity: refers to how well a measure is correlated with other accepted measures of the construct (i.e., the concept or trait) of interest (e.g., pain, anxiety, mobility, quality of life), and discriminates between groups known to differ according to the variable.

Content validity: refers to the degree to which the set of items in a data collection instrument is known to represent the range or universe of meanings or dimensions of a construct of interest, e.g., how well the domains of a health-related quality of life index for arthritis represent the aspects of quality of life or daily functioning that are important to patients with arthritis.

Contraindication: a clinical symptom or circumstance indicating that the use of an otherwise advisable intervention would be inappropriate.

Control group: a group of patients that serves as the basis of comparison when assessing the effects of the intervention of interest that is given to the patients in the treatment group. Depending upon the circumstances of the trial, a control group may receive no treatment, a "usual" or "standard" treatment, or a placebo. To make the comparison valid, the composition of the control group should resemble that of the treatment group as closely as possible. (See also historical control and concurrent nonrandomized control.)

Controlled clinical trial: a prospective experiment in which investigators compare outcomes of a group of patients receiving an intervention to a group of similar patients not receiving the intervention. Not all clinical trials are RCTs, though all RCTs are clinical trials.

Controlled vocabulary: a system of terms, involving, e.g., definitions, hierarchical structure, and cross-references, that is used to index and retrieve a body of literature in a bibliographic, factual, or other database. An example is the MeSH controlled vocabulary used in the MEDLINE /PubMED database and other bibliographic databases of the US National Library of Medicine.

Convergent validity: (contrast with discriminant validity) refers to the extent to which two different measures that are intended to measure the same construct do indeed yield similar results. Convergent validity contributes to, or can be considered a subtype of, construct validity.

Cost-benefit analysis: a comparison of alternative interventions in which costs and outcomes are quantified in common monetary units.

Cost-consequence analysis: a form of cost-effectiveness analysis in which the components of incremental costs (of therapies, hospitalization, etc.) and consequences (health outcomes, adverse effects, etc.) of alternative interventions or programs are computed and displayed, without aggregating these results (e.g., into a cost-effectiveness ratio).

Cost-effectiveness analysis: a comparison of alternative interventions in which costs are measured in monetary units and outcomes are measured in non-monetary units, e.g., reduced mortality or morbidity.

Cost-minimization analysis: a determination of the least costly among alternative interventions that are assumed to produce equivalent outcomes.

Cost-utility analysis: a form of cost-effectiveness analysis of alternative interventions in which costs are measured in monetary units and outcomes are measured in terms of their utility, usually to the patient, e.g., using QALYs.
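
A minimal sketch of the standard incremental cost-utility calculation (incremental cost divided by incremental QALYs); the cost and QALY figures are illustrative assumptions, not from the source:

```python
# Incremental cost-utility ratio: cost per QALY gained when comparing a
# new intervention to standard care (illustrative numbers).
cost_new, qalys_new = 52_000.0, 6.5   # new intervention
cost_std, qalys_std = 40_000.0, 6.0   # standard care

icer = (cost_new - cost_std) / (qalys_new - qalys_std)
print(f"Incremental cost per QALY gained: {icer:,.0f}")  # -> 24,000
```

Decision makers typically compare such a ratio against a willingness-to-pay threshold per QALY.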

Cost-of-illness analysis: a determination of the economic impact of a disease or health condition, including treatment costs; this form of study does not address benefits/outcomes.

Coverage with evidence development (CED): refers to limited coverage (e.g., for particular clinical indications or practice settings) for a new technology in parallel with specified data collection to provide stronger evidence about the effectiveness, safety, or other impacts of the technology, or additional evidence pertaining to broader uses of the technology, of interest to payers. This enables some patient access to the technology while reducing uncertainty through new evidence generation and real-world experience to inform revised coverage policies as well as clinical practice decisions. CED includes coverage “only in research” (i.e., coverage for a technology only when used in members of the payer’s patient population who are participating in clinical trials of the technology) and coverage “only with research” (i.e., coverage for a technology only when also being used contemporaneously in a sample of the payer’s patient population participating in clinical trials of the technology). CED is a form of managed entry.

Criterion validity: how well a measure, including its various domains or dimensions, is correlated with a known gold standard or definitive measurement, if one exists.

Crossover bias: occurs when some patients who are assigned to the treatment group in a clinical study do not receive the intervention or receive another intervention, or when some patients in the control group receive the intervention (e.g., outside the trial). If these crossover patients are analyzed with their original groups, this type of bias can "dilute" (diminish) the observed treatment effect.

Crossover design: a clinical trial design in which patients receive, in sequence, the treatment (or the control), and then, after a specified time, switch to the control (or treatment). In this design, patients serve as their own controls, and randomization may be used to determine the order in which a patient receives the treatment and control.

Cross-sectional study: a (prospective or retrospective) observational study in which a group is chosen (sometimes as a random sample) from a certain larger population, and the exposures of people in the group to an intervention and outcomes of interest are determined.

Decision analysis: an approach to decision making under conditions of uncertainty that involves modeling of the sequences or pathways of multiple possible strategies (e.g., of diagnosis and treatment for a particular clinical problem) to determine which is optimal. It is based upon available estimates (drawn from the literature or from experts) of the probabilities that certain events and outcomes will occur and the values of the outcomes that would result from each strategy. A decision tree is a graphical representation of the alternate pathways.
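A minimal sketch of the expected-value calculation at the core of decision analysis. All probabilities and outcome utilities below are hypothetical, chosen only to illustrate how two strategies are compared.

```python
# Decision-analysis sketch: compare two strategies by expected value.
# All probabilities and utilities are hypothetical, for illustration only.

def expected_value(branches):
    """branches: list of (probability, outcome_value) pairs for one strategy."""
    return sum(p * v for p, v in branches)

# Strategy A: treat immediately; Strategy B: watchful waiting (invented numbers)
treat = [(0.9, 0.95), (0.1, 0.40)]   # (P(success), utility), (P(failure), utility)
wait  = [(0.7, 1.00), (0.3, 0.30)]

ev_treat = expected_value(treat)     # 0.9*0.95 + 0.1*0.40 = 0.895
ev_wait  = expected_value(wait)      # 0.7*1.00 + 0.3*0.30 = 0.79
best = "treat" if ev_treat > ev_wait else "wait"
```

In a full decision tree, each branch would itself expand into further chance and decision nodes; the principle of folding back expected values is the same.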

Delphi technique: an iterative group judgment technique in which a central source forwards surveys or questionnaires to isolated, anonymous (to each other) participants whose responses are collated/summarized and recirculated to the participants in multiple rounds for further modification/critique, producing a final group response (sometimes statistical).

Detection bias: (or ascertainment or observer bias) refers to bias arising from differential assessment of outcomes, whether by patients or investigators, influenced by their knowledge of the assignment of patients to intervention or control groups. Blinding of patients and investigators to treatment assignment is a technique used to manage detection bias. Prospective studies help to manage certain forms of detection bias that arise in retrospective studies.

Direct costs: the fixed and variable costs of all resources (goods, services, etc.) consumed in the provision of an intervention as well as any consequences of the intervention such as adverse effects or goods or services induced by the intervention. Includes direct medical costs and direct nonmedical costs such as transportation and child care.

Disability-adjusted life years (DALYs): a unit of health care status that adjusts age-specific life expectancy by the loss of health and years of life due to disability from disease or injury. DALYs are often used to measure the global burden of disease.

Discounting: the process used in cost analyses to mathematically reduce future costs and/or benefits/outcomes to their present value. These adjustments reflect the fact that given levels of costs and benefits occurring in the future usually have less value in the present than the same levels of costs and benefits realized in the present.

Discount rate: the interest rate used to discount or calculate future costs and benefits so as to arrive at their present values, e.g., 3% or 5%. This is also known as the opportunity cost of capital investment. Discount rates are usually based on government bonds or market interest rates for the cost of capital whose maturity is about the same as the time period over which the intervention or program is being evaluated. For example, the discount rate used by the US federal government is based on the Treasury Department's cost of borrowing funds and will vary, depending on the period of analysis.
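The discounting calculation can be sketched directly. The cost stream and the 3% rate below are illustrative, not drawn from the text; the point is that a constant nominal cost stream is worth less than its undiscounted sum.

```python
# Present value of a future cost stream under a constant discount rate.
# The cost stream and the 3% rate are hypothetical, for illustration.

def present_value(amounts, rate):
    """Discount amounts[t] (incurred at end of year t+1) to present value."""
    return sum(a / (1 + rate) ** (t + 1) for t, a in enumerate(amounts))

costs = [1000.0, 1000.0, 1000.0]   # same nominal cost in each of 3 years
pv = present_value(costs, 0.03)    # ~2828.6, less than the undiscounted 3000
```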

Discriminant validity: the opposite of convergent validity; concerns whether different measures that are intended to measure different constructs do indeed fail to be positively associated with each other. Discriminant validity contributes to, or can be considered a subtype of, construct validity.

Disease management: a systematic process of managing care of patients with specific diseases or conditions (particularly chronic conditions) across the spectrum of outpatient, inpatient, and ancillary services. The purposes of disease management may include reducing acute episodes, hospitalizations, and variations in care; improving health outcomes; and reducing costs. Disease management may involve continuous quality improvement or other management paradigms. It may involve a cyclical process of following practice protocols, measuring the resulting outcomes, feeding those results back to clinicians, and revising protocols as appropriate.

Disinvestment: refers to completely or partially withdrawing resources from currently used health technologies that are potentially harmful, ineffective, or cost-ineffective. It is a means of optimizing the use of health care resources. Disinvestment does not imply replacement by alternatives, though it may be accompanied by, or provide “head-room” or a niche for, a new technology or other replacement. Active disinvestment refers to purposely withdrawing resources from or otherwise discontinuing use of a technology. Implicit disinvestment refers to instances in which a technology falls from use or is superseded by another in the absence of an explicit decision to discontinue its use.

Disruptive innovation: an innovation that alters and may even displace existing systems, networks, or markets, and that may create new business models and lead to emergence of new markets. In health care, disruptive innovations challenge and may alter existing systems of regulation, payment, health care delivery, or professional training.

Dissemination: any process by which information is transmitted (made available or accessible) to intended audiences or target groups.

Drug compendium: a comprehensive listing or index of summary information about drugs and biologicals (or a subset of these, e.g., anticancer treatments), including their dosing, adverse effects, interactions, contraindications, and recommended indications, including those that are approved by regulatory agencies (“on-label”) and those that are beyond regulatory agency approval yet may be “medically accepted” (“off-label”) and other pharmacologic and pharmacokinetic information.

Effect size: same as treatment effect. Also, a dimensionless measure of treatment effect that is typically used for continuous variables and is usually defined as the difference in mean outcomes of the treatment and control group divided by the standard deviation of the outcomes of the control group. One type of meta-analysis involves averaging the effect sizes from multiple studies.
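The effect-size formula given above, difference in group means divided by the standard deviation of the control group, can be computed directly. The data values below are invented for illustration.

```python
# Effect size as defined above: (mean_treatment - mean_control) / sd_control,
# using the control group's (sample) standard deviation. Data are hypothetical.
import statistics

treatment = [12.0, 14.0, 11.0, 15.0, 13.0]
control   = [10.0, 9.0, 11.0, 10.0, 10.0]

effect_size = ((statistics.mean(treatment) - statistics.mean(control))
               / statistics.stdev(control))
```

Some conventions (e.g., Cohen's d) instead divide by a pooled standard deviation of both groups; the definition above uses the control group alone.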

Effectiveness: the benefit (e.g., to health outcomes) of using a technology for a particular problem under general or routine conditions, for example, by a physician in a community hospital or by a patient at home.

Effectiveness research: see outcomes research.

Efficacy: the benefit of using a technology for a particular problem under ideal conditions, for example, in a laboratory setting, within the protocol of a carefully managed randomized controlled trial, or at a “center of excellence.”

Endpoint: a measure or indicator chosen for determining an effect of an intervention.

Enrichment of trials: techniques of identifying patients for enrollment in clinical trials based on prospective use of patient attributes that are intended to increase the likelihood of detecting a treatment effect (if one truly exists) compared to an unselected population. Such techniques may be designed, e.g., to decrease patient heterogeneity of response, select for patients more likely to experience a disease-related trial endpoint, or select for patients (based on a known predictive biomarker) more likely to respond to a treatment (intended to result in a larger effect size). In adaptive enrichment of clinical trials, investigators seek to discern predictive markers/attributes during the course of a trial and apply these to enrich subsequent patient enrollment in the trial.

Equipoise: a state of uncertainty regarding whether alternative health care interventions will confer more favorable outcomes, including balance of benefits and harms. Under the principle of equipoise, a patient should be enrolled in an RCT only if there is genuine uncertainty (an expectation for equal likelihood) about which intervention will benefit and which will harm the patient most; and, across a large number of RCTs, the number of RCTs that reject and that fail to reject the null hypothesis will be approximately equal. The assumption of equipoise is the basis for testing the null hypothesis in RCTs.

Evidence-based medicine: the use of current best evidence from scientific and medical research to make decisions about the care of individual patients. It involves formulating questions relevant to the care of particular patients, searching the scientific and medical literature, identifying and evaluating relevant research results, and applying the findings to patients.

Evidence table: a summary display of selected characteristics (e.g., of methodological design, patients, outcomes) of studies of a particular intervention or health problem.

Exclusions after randomization bias: refers to bias arising from inappropriate accounting for patient dropouts, withdrawals, and deviations from trial protocols. Patients who leave a trial or whose data are not otherwise adequately collected as per the trial protocol may differ systematically from the remaining patients, introducing potential biases in observed treatment effects. Intention-to-treat analysis and worst-case scenario analysis are two techniques for managing bias due to exclusions after randomization.

External validity: the extent to which the results of a study conducted under particular circumstances can be generalized to other patients, populations, or other circumstances. To the extent that the circumstances of a particular study (e.g., patient characteristics or the manner of delivering a treatment) differ from the circumstances of interest, the external validity of the results of that study may be questioned. Also known as applicability.

Face validity: the ability of a measure to represent reasonably (that is, to be acceptable “on its face”) a construct (i.e., a concept, trait, or domain of interest) as judged by someone with expertise in the construct.

Factual database: an indexed computer or printed source that provides reference or authoritative information, e.g., in the form of guidelines for diagnosis and treatment, patient indications, or adverse effects.

False negative error: occurs when the statistical analysis of a trial detects no difference in outcomes between a treatment group and a control group when in fact a true difference exists. This is also known as a Type II error. The probability of making a Type II error is known as β (beta).

False positive error: occurs when the statistical analysis of a trial detects a difference in outcomes between a treatment group and a control group when in fact there is no difference. This is also known as a Type I error. The probability of a Type I error is known as α (alpha).

Follow-up: the ability of investigators to observe and collect data on all patients who were enrolled in a trial for its full duration. To the extent that data on patient events relevant to the trial are lost, e.g., among patients who move away or otherwise withdraw from the trial, the results may be affected, especially if there are systematic reasons why certain types of patients withdraw. Investigators should report on the number and type of patients who could not be evaluated, so that the possibility of bias may be considered.

Funnel plot: in systematic reviews and meta-analyses, a graph (scatter plot) of the distribution of reported treatment effects of individual studies (along the horizontal axis) against the sample sizes of the studies (along the vertical axis). Because studies with larger sample sizes should generate more precise estimates of treatment effect, they are likely to be grouped more narrowly around an average along the horizontal axis; while the studies with smaller sample sizes are likely to be scattered more widely on both sides of the average along the horizontal axis. As such, in the absence of bias (e.g., publication bias), the scatter plot will be narrower at the top (large sample sizes, small variation) and wider at the bottom (small sample sizes, large variation), resembling an inverted funnel.

Genomics: the branch of molecular genetics that studies the genome, i.e., the complete set of DNA in the chromosomes of an organism. This may involve application of DNA sequencing, recombinant DNA, and related bioinformatics to sequence, assemble, and analyze the structure, function, and evolution of genomes. Whereas genetics is the study of the function and composition of individual genes, genomics addresses all genes and their interrelationships in order to understand their combined influence on the organism. (See also pharmacogenetics and pharmacogenomics.)

Gray literature: research reports that are not found in traditional peer-reviewed publications, for example: government agency monographs, symposium proceedings, and unpublished company reports.

Health-related quality of life (HRQL) measures: patient outcome measures that extend beyond traditional measures of mortality and morbidity, to include such dimensions as physiology, function, social activity, cognition, emotion, sleep and rest, energy and vitality, health perception, and general life satisfaction. (Some of these are also known as health status, functional status, or quality of life measures.)

Health technology assessment (HTA): the systematic evaluation of properties, effects, and/or impacts of health care technology. It may address the direct, intended consequences of technologies as well as their indirect, unintended consequences. Its main purpose is to inform technology-related policymaking in health care. HTA is conducted by interdisciplinary groups using explicit analytical frameworks drawing from a variety of methods.

Health services research: a field of inquiry that examines the impact of the organization, financing and management of health care services on the delivery, quality, cost, access to and outcomes of such services.

Healthy-years equivalents (HYEs): the number of years of perfect health that are considered equivalent to (i.e., have the same utility as) the remaining years of life in their respective health states.

Heterogeneity of treatment effects (HTEs): refers to variation in effectiveness, safety (adverse events), or other patient responses observed across a patient population with a particular health problem or condition. This variation may be associated with such patient characteristics as genetic, sociodemographic, clinical, behavioral, environmental, and other personal traits, or personal preferences.

Historical control: a control group that is chosen from a group of patients who were observed at some previous time. The use of historical controls raises concerns about valid comparisons because they are likely to differ from the current treatment group in their composition, diagnosis, disease severity, determination of outcomes, and/or other important ways that would confound the treatment effect. It may be feasible to use historical controls in special instances where the outcomes of a standard treatment (or no treatment) are well known and vary little for a given patient population.

Horizon scanning: refers to the ongoing tracking of multiple, diverse information sources (bibliographic databases, clinical trial registries, regulatory approvals, market research reports, etc.) to identify potential topics for HTA and provide input for setting priorities. While horizon scanning is most often used to identify new technologies that eventually may merit assessment, it can also involve identifying technologies that may be outmoded or superseded by newer ones. It can also be used to, e.g., identify areas of technological change; anticipate new indications of technologies; identify variations in, and potential inappropriate use of, technologies; and plan data collection to monitor adoption, diffusion, use, and impacts of technologies.

Hypothesis testing: a means of interpreting the results of a clinical trial that involves determining the probability that an observed treatment effect could have occurred due to chance alone if a specified hypothesis were true. The specified hypothesis is normally a null hypothesis, made prior to the trial, that the intervention of interest has no true effect. Hypothesis testing is used to determine if the null hypothesis can or cannot be rejected.

Incidence: the rate of occurrence of new cases of a disease or condition in a population at risk during a given period of time, usually one year. (Contrast with prevalence.)

Indication: a clinical symptom or circumstance indicating that the use of a particular intervention would be appropriate.

Indirect costs: the cost of time lost from work and decreased productivity due to disease, disability, or death. (In cost accounting, it refers to the overhead or fixed costs of producing goods or services.)

Intangible costs: the cost of pain and suffering resulting from a disease, condition, or intervention.

Integrative methods: (or secondary or synthesis methods) involve combining data or information from multiple existing primary studies such as clinical trials. These include a range of more or less systematic quantitative and qualitative methods, including systematic literature reviews, meta-analysis, decision analysis, consensus development, and unstructured literature reviews. (Contrast with primary data methods.)

Intention-to-treat analysis: a type of analysis of clinical trial data in which all patients are included in the analysis based on their original assignment to intervention or control groups, regardless of whether patients failed to fully participate in the trial for any reason, including whether they actually received their allocated treatment, dropped out of the trial, or crossed over to another group.

Intermediate endpoint: a non-ultimate endpoint (e.g., not mortality or morbidity) that may be associated with disease status or progression toward an ultimate endpoint such as mortality or morbidity. They may be certain biomarkers (e.g., HbA1c in prediabetes or diabetes, bone density in osteoporosis, tumor progression in cancer) or disease symptoms (e.g., angina frequency in heart disease, measures of lung function in chronic obstructive pulmonary disease). (See also biomarker and surrogate endpoint.)

Internal validity: the extent to which the results of a study accurately represent the causal relationship between an intervention and an outcome in the particular circumstances of that study. This includes the extent to which the design and conduct of a study minimize the risk of any systematic (non-random) error (i.e., bias) in the study results. True experiments such as RCTs generally have high internal validity.

Interventional study: a prospective study in which investigators assign or manage an intervention or other exposure of interest to patients (including RCTs, other experiments, and certain other study designs) and interpret the outcomes. In an interventional study, investigators manage assignment of patients to interventions (e.g., treatment and control groups), timing of interventions, selection of outcomes, and timing of data collection. (Contrast with observational study.)

Investigational Device Exemption (IDE): a regulatory category and process in which the US Food and Drug Administration (FDA) allows specified use of an unapproved health device in controlled settings for purposes of collecting data on safety and efficacy/effectiveness; this information may be used subsequently in a premarketing approval application.

Investigational New Drug Application (IND): an application submitted by a sponsor to the US FDA prior to human testing of an unapproved drug or of a previously approved drug for an unapproved use.

Language bias: a form of bias that may affect the findings of a systematic review or other literature synthesis that arises when research reports are not identified or are excluded based on the language in which they are published.

Large simple trials: prospective, randomized controlled trials that use large numbers of patients, broad patient inclusion criteria, multiple study sites, minimal data requirements, and electronic registries. Their purposes include detecting small and moderate treatment effects, gaining effectiveness data, and improving external validity.

Literature review: a summary and interpretation of research findings reported in the literature. May include unstructured qualitative reviews by single authors as well as various systematic and quantitative procedures such as meta-analysis. (Also known as overview.)

Managed entry: refers to a range of innovative payment approaches that provide patient access under certain conditions. Three main purposes are to manage: uncertainty about safety, effectiveness, or cost effectiveness; budget impact; or technology use for optimizing performance. Two main types of managed entry are conditional coverage (including coverage with evidence development) and performance-linked reimbursement.

Marginal benefit: the additional benefit (e.g., in units of health outcome) produced by an additional resource use (e.g., another health care intervention).

Marginal cost: the additional cost required to produce an additional unit of benefit (e.g., unit of health outcome).

Markov model: a type of quantitative modeling that involves a specified set of mutually exclusive and exhaustive states (e.g., of a given health status), and for which there are transition probabilities of moving from one state to another (including of remaining in the same state). Typically, states have a uniform time period, and transition probabilities remain constant over time.
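A two-state sketch of the mechanics described above: mutually exclusive, exhaustive states and constant transition probabilities applied each cycle. The state names and probabilities are hypothetical.

```python
# Two-state Markov model sketch (states: "well", "sick") with constant
# per-cycle transition probabilities. All numbers are hypothetical.

P = {  # transition probabilities from each state (rows sum to 1.0)
    "well": {"well": 0.9, "sick": 0.1},
    "sick": {"well": 0.2, "sick": 0.8},
}

def step(dist):
    """Advance the cohort distribution by one cycle."""
    return {s: sum(dist[r] * P[r][s] for r in dist) for s in P}

dist = {"well": 1.0, "sick": 0.0}   # whole cohort starts in "well"
for _ in range(3):                  # run three uniform cycles
    dist = step(dist)
```

Because the states are exhaustive and each row of transition probabilities sums to one, the cohort proportions always sum to 1.0 after every cycle.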

Meta-analysis: systematic methods that use statistical techniques for combining results from different studies to obtain a quantitative estimate of the overall effect of a particular intervention or variable on a defined outcome. This combination may produce a stronger conclusion than can be provided by any individual study. (Also known as data synthesis or quantitative overview.)
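One common statistical technique for the combination described above is fixed-effect inverse-variance pooling: each study's estimate is weighted by the inverse of its variance, so more precise studies contribute more. The study estimates and standard errors below are invented for illustration.

```python
# Fixed-effect inverse-variance pooling, a common meta-analysis technique.
# The study estimates and standard errors below are hypothetical.

studies = [  # (treatment-effect estimate, standard error)
    (0.40, 0.20),
    (0.25, 0.10),
    (0.35, 0.15),
]

weights = [1.0 / se**2 for _, se in studies]                  # inverse variances
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5   # smaller than any single study's SE
```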

Meta-regression: in meta-analysis, techniques for relating the magnitude of an effect (e.g., change in a health outcome) to one or more characteristics of the primary studies used, such as patient characteristics, drug dose, duration of study, and year of publication.

Monte Carlo simulation: a technique used in computer simulations that uses sampling from a random number sequence to simulate characteristics or events or outcomes with multiple possible values. For example, this can be used to represent or model many individual patients in a population with ranges of values for certain health characteristics or outcomes. In some cases, the random components are added to the values of a known input variable for the purpose of determining the effects of fluctuations of this variable on the values of the output variable.
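A minimal sketch of the patient-level simulation described above: each simulated patient's outcome is drawn from a probability distribution, and the population result is summarized over many draws. The response probability is hypothetical.

```python
# Monte Carlo sketch: simulate individual patients whose response is drawn
# from a probability distribution, then summarize. Parameters are hypothetical.
import random

random.seed(42)  # fixed seed for a reproducible run

def simulate_patient(p_response=0.6):
    """One simulated patient: responds to treatment with probability p_response."""
    return random.random() < p_response

n = 10_000
responders = sum(simulate_patient() for _ in range(n))
observed_rate = responders / n   # converges toward the true 0.6 as n grows
```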

Moving target problem: changes in health care that can render the findings of HTAs out of date, sometimes before their results can be implemented. These include changes in the focal technology; changes in the alternative or complementary technologies (i.e., those used for managing a given health problem); emergence of new competing technologies; and changes in the application of the technology (e.g., to different patient populations or to different health problems).

Multi-criteria decision analysis (MCDA): a transparent and objective method for decomposing a decision problem into a set of attributes or other criteria, including those that may be conflicting. It identifies and compares the attributes of alternatives (e.g., therapeutic options) from the perspectives of multiple stakeholders, and evaluates these alternatives by ranking, rating, or pairwise comparisons, using such stakeholder elicitation techniques as conjoint analysis and analytic hierarchy process.

Multiplicity: (or multiple comparisons) refers to errors in data interpretation that may arise from conducting multiple statistical analyses of the same data set. Such iterative analyses increase the probability of false positive (Type I) error, i.e., concluding incorrectly that an intervention is effective when the finding of a statistically significant treatment effect is due to random error. Types of multiplicity include analyses of numerous endpoints, stopping rules for RCTs that involve “multiple looks” at the data emerging from the same trial, and analyses of numerous subgroups.

N-of-1 trial: a clinical trial in which a single patient is the total population for the trial and in which a sequence of investigational and control interventions are allocated to the patient (i.e., a multiple crossover study conducted in a single patient). A trial in which random allocation is used to determine the sequence of interventions given to a patient is an N-of-1 RCT. N-of-1 trials are used to determine treatment effects in individuals, and sets of these trials can be used to estimate heterogeneity of treatment effects across a population.

Negative predictive value: an operating characteristic of a diagnostic test; negative predictive value is the proportion of persons with a negative test who truly do not have the disease, determined as: [true negatives ÷ (true negatives + false negatives)]. It varies with the prevalence of the disease in the population of interest. (Contrast with positive predictive value.)
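The predictive-value calculations can be sketched from a 2x2 table of test results against true disease status. The counts below are hypothetical, chosen to show how NPV depends on prevalence.

```python
# Predictive values from a 2x2 test-result table; the counts are hypothetical.
# NPV = TN / (TN + FN); PPV = TP / (TP + FP).

tp, fp, fn, tn = 90, 50, 10, 850   # true/false positives and negatives

npv = tn / (tn + fn)               # 850 / 860
ppv = tp / (tp + fp)               # 90 / 140
prevalence = (tp + fn) / (tp + fp + fn + tn)   # 100 / 1000
```

Rerunning the same calculation with the diseased counts scaled up (higher prevalence) lowers the NPV and raises the PPV, even though the test's sensitivity and specificity are unchanged.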

New Drug Application (NDA): an application submitted by a sponsor to the FDA for approval to market a new drug (a new, nonbiological molecular entity) for human use in US interstate commerce.

Nonrandomized controlled trial: a controlled clinical trial that assigns patients to intervention and control groups using a method that does not involve randomization, e.g., at the convenience of the investigators or some other technique such as alternate assignment.

Nominal group technique: a face-to-face group judgment technique in which participants generate silently, in writing, responses to a given question/problem; responses are collected and posted, but not identified by author, for all to see; responses are openly clarified, often in a round-robin format; further iterations may follow; and a final set of responses is established by voting/ranking.

Null hypothesis: in hypothesis testing, the hypothesis that an intervention has no effect, i.e., that there is no true difference in outcomes between a treatment group and a control group. Typically, if statistical tests indicate that the P value is at or above the specified α-level (e.g., 0.01 or 0.05), then any observed treatment effect is considered to be not statistically significant, and the null hypothesis cannot be rejected. If the P value is less than the specified α-level, then the treatment effect is considered to be statistically significant, and the null hypothesis is rejected. If a confidence interval (e.g., of 95% or 99%) includes a net zero treatment effect (or a risk ratio of 1.0), then the null hypothesis cannot be rejected. The assumption of equipoise is the basis for testing the null hypothesis in RCTs.

Number needed to treat: a measure of treatment effect that provides the number of patients who need to be treated to prevent one outcome event. It is the inverse of absolute risk reduction (1 ÷ absolute risk reduction), i.e., 1.0 ÷ (Pc - Pt). For instance, if the results of a trial were that the probability of death in a control group was 25% and the probability of death in a treatment group was 10%, the number needed to treat would be 1.0 ÷ (0.25 - 0.10) = 6.7 patients. (See also absolute risk reduction, relative risk reduction, and odds ratio.)
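The worked example in the definition above, computed directly (control event probability 0.25, treatment event probability 0.10):

```python
# Number needed to treat, using the probabilities from the definition above.

p_control, p_treatment = 0.25, 0.10
arr = p_control - p_treatment   # absolute risk reduction = 0.15
nnt = 1.0 / arr                 # about 6.7 patients
```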

Observational study: a study in which the investigators do not intervene, but simply observe the course of events over time. That is, investigators do not manipulate the use of, or deliver, an intervention or exposure (e.g., do not assign patients to treatment and control groups), but only observe patients who do (and sometimes patients who do not, as a basis of comparison) receive the intervention or exposure, and interpret the outcomes. These studies are more subject to selection bias than experimental studies such as randomized controlled trials. (Contrast with interventional study.)

Odds ratio: a measure of treatment effect that compares the odds of a type of outcome in the treatment group with the odds of that outcome in a control group, i.e., [Pt ÷ (1 - Pt)] ÷ [Pc ÷ (1 - Pc)]. For instance, if the results of a trial were that the probability of death in a control group was 25% and the probability of death in a treatment group was 10%, the odds ratio of death would be [0.10 ÷ (1.0 - 0.10)] ÷ [0.25 ÷ (1.0 - 0.25)] = 0.33. (See also absolute risk reduction, number needed to treat, and relative risk reduction.)
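The same worked example, computed directly from the probabilities given in the definition above (death probability 0.10 in the treatment group, 0.25 in the control group):

```python
# Odds ratio computed from the event probabilities in the definition above.

p_t, p_c = 0.10, 0.25
odds_t = p_t / (1 - p_t)     # odds of the event in the treatment group
odds_c = p_c / (1 - p_c)     # odds of the event in the control group
odds_ratio = odds_t / odds_c # about 0.33
```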

Outcomes research: evaluates the impact of health care on the health outcomes of patients and populations. It may also include evaluation of economic impacts linked to health outcomes, such as cost effectiveness and cost utility. Outcomes research emphasizes health problem- (or disease-) oriented evaluations of care delivered in general, real-world settings; multidisciplinary teams; and a wide range of outcomes, including mortality, morbidity, functional status, mental well-being, and other aspects of health-related quality of life. It may entail any in a range of primary data collection methods and synthesis methods that combine data from primary studies.

P value: in hypothesis testing, the probability that an observed difference between the intervention and control groups is due to chance alone if the null hypothesis is true. If P is less than the α-level (typically 0.01 or 0.05) chosen prior to the study, then the null hypothesis is rejected.

Parallel group (or independent group) trial: a trial that compares two contemporaneous groups of patients, one of which receives the treatment of interest and one of which is a control group (e.g., a randomized controlled trial). (Some parallel trials have more than one treatment group; others compare two treatment groups, each of which acts as a control for the other.)

Patient-centered outcomes (or patient-oriented outcomes): refers to health outcomes that patients experience across the variety of real-world settings, including: survival, functional status, quality of life, quality of death, symptoms, pain, nausea, psychosocial well-being, health utility (patient-perceived value of particular states of health), and patient satisfaction. (Excluded are outcomes that patients do not directly experience, such as blood pressure, lipid levels, bone density, viral load, or cardiac output.) Patient-centered outcomes can be assessed at a generic level or a disease/condition-specific level.

Patient-centered outcomes research (PCOR): generates evidence comparing the impact of health care on patient-centered outcomes. PCOR can draw on a wide variety of primary and secondary methods, including, e.g., practical or pragmatic RCTs, cluster randomized trials, and other trial designs, registries, insurance claims data, systematic reviews, and others.

Patient preference trials: trials designed to account for patient preferences, including evaluating the impact of preference on health outcomes. These trials have various designs. In some, the patients with a strong preference, e.g., for a new treatment or usual care, are assigned to a parallel group receiving their preferred intervention. The patients who are indifferent to receiving the new treatment or usual care are randomized into one group or another. Outcomes for the parallel, non-randomized groups are analyzed apart from the outcomes for the randomized groups. In other designs, patient preferences are recorded prior to the RCT, but all patients are randomized regardless of their stated preference, and subgroup analyses are conducted to determine the impact of preferences on outcomes.

Patient-reported outcomes: are those patient-centered outcomes that are self-reported by patients or obtained from patients (or reported on their behalf by their caregivers or surrogates) by an interviewer without interpretation or modification of the patient’s response by other people, including clinicians.

Patient selection bias: a bias that occurs when patients assigned to the treatment group differ from patients assigned to the control group in ways that can affect outcomes, e.g., age or disease severity. If the two groups are constituted differently, it is difficult to attribute observed differences in their outcomes to the intervention alone. Random assignment of patients to the treatment and control groups minimizes opportunities for this bias.

Peer review: the process by which manuscripts submitted to health, biomedical, and other scientifically oriented journals and other publications are evaluated by experts in appropriate fields (usually anonymous to the authors) to determine if the manuscripts are of adequate quality for publication.

Performance bias refers to systematic differences between comparison groups in the care that is provided, or in exposure to factors other than the interventions of interest. This includes, e.g., deviating from the study protocol or assigned treatment regimens so that patients in control groups receive the intervention of interest, providing additional or co-interventions unevenly to the intervention and control groups, and inadequately blinding providers and patients to assignment to intervention and control groups, thereby potentially affecting whether or how assigned interventions or exposures are delivered. Techniques for diminishing performance bias include blinding of patients and providers (in RCTs and other controlled trials in particular), adhering to the study protocol, and sustaining patients’ group assignments.

Personalized medicine: the tailoring of health care (including prevention, diagnosis, therapy) to the particular traits (or circumstances or other characteristics) of a patient that influence response to a health care intervention. These may include genomic, epigenomic, microbiomic, sociodemographic, clinical, behavioral, environmental, and other personal traits, as well as personal preferences. Personalized medicine generally does not refer to the creation of interventions that are unique to a patient, but to the ability to classify patients into subpopulations that differ in their responses to particular interventions. (Also known as personalized health care.) The closely related term, precision medicine, is used synonymously, though it tends to emphasize the use of patient molecular traits to tailor therapy.

Pharmacogenetics: the study of single gene interactions with drugs, including metabolic variations that influence efficacy and toxicity. (See also genomics and pharmacogenomics.)

Pharmacogenomics: the application of pharmacogenetics across the entire genome. (See also genomics.)

PICOTS: formulation of an evidence question based on: Population (e.g., condition, disease severity/stage, comorbidities, risk factors, demographics), Intervention (e.g., technology type, regimen/dosage/frequency, technique/method of administration), Comparator (e.g., placebo, usual/standard care, active control), Outcomes (e.g., morbidity, mortality, quality of life, adverse events), Timing (e.g., duration/intervals of follow-up), and Setting (e.g., primary, inpatient, specialty, home care).

Phase I, II, III, and IV studies: phases of clinical trials of new technologies (usually drugs) in the development and approval process required by the FDA (or other regulatory agencies). Phase I trials typically involve approximately 20-80 healthy volunteers to determine a drug's safety, safe dosage range, absorption, metabolic activity, excretion, and the duration of activity. Phase II trials are controlled trials in approximately 100-300 volunteer patients (with disease) to determine the drug's efficacy and adverse reactions (sometimes divided into Phase IIa pilot trials and Phase IIb well-controlled trials). Phase III trials are larger controlled trials in approximately 1,000-3,000 patients to verify efficacy and monitor adverse reactions during longer-term use (sometimes divided into Phase IIIa trials conducted before regulatory submission and Phase IIIb trials conducted after regulatory submission but before approval). Phase IV trials are postmarketing studies to monitor long-term effects and provide additional information on safety and efficacy, including for different regimens and patient groups.

Placebo: an inactive substance or treatment given to satisfy a patient's expectation for treatment. In some controlled trials (particularly of drug treatments) placebos that are made to be indistinguishable by patients (and providers when possible) from the true intervention are given to the control group to be used as a comparative basis for determining the effect of the investigational treatment.

Placebo effect: the effect on patient outcomes (improved or worsened) that may occur due to the expectation by a patient (or provider) that a particular intervention will have an effect. The placebo effect is independent of the true effect (pharmacological, surgical, etc.) of a particular intervention. To control for this, the control group in a trial may receive a placebo.

Positive predictive value: an operating characteristic of a diagnostic test; positive predictive value is the proportion of persons with a positive test who truly have the disease, determined as: [true positives ÷ (true positives + false positives)]. It varies with the prevalence of the disease in the population of interest. (Contrast with negative predictive value.)
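The dependence of positive predictive value on prevalence can be illustrated with a small sketch. All counts and test characteristics below are invented for demonstration.

```python
# Hypothetical illustration of positive predictive value (PPV) and its
# dependence on disease prevalence. All numbers are invented.

def positive_predictive_value(true_pos, false_pos):
    """PPV = true positives / (true positives + false positives)."""
    return true_pos / (true_pos + false_pos)

def ppv_at_prevalence(prevalence, sensitivity, specificity, n=10_000):
    """PPV of a fixed test applied to a population of size n."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sensitivity          # diseased people who test positive
    fp = healthy * (1 - specificity)     # healthy people who test positive
    return positive_predictive_value(tp, fp)

# The same 90%-sensitive, 95%-specific test has very different PPV
# at low versus high prevalence.
print(round(ppv_at_prevalence(0.01, 0.90, 0.95), 3))  # low prevalence
print(round(ppv_at_prevalence(0.20, 0.90, 0.95), 3))  # high prevalence
```

At 1% prevalence most positive results are false positives, so PPV is low even though the test itself is unchanged; at 20% prevalence PPV is much higher.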

Power: the probability of detecting a treatment effect of a given magnitude when a treatment effect of at least that magnitude truly exists. For a true treatment effect of a given magnitude, power is the probability of avoiding Type II error, and is generally defined as (1 - β).

Pragmatic (or practical) clinical trials (PCTs): are trials whose main attributes include: comparison of clinically relevant alternative interventions, a diverse population of study participants, participants recruited from heterogeneous practice settings, and data collection on a broad range of health outcomes. Some large simple trials are also PCTs.

Precision: the degree to which a measurement (e.g., the mean estimate of a treatment effect) is derived from a set of observations having small variation (i.e., close in magnitude to each other); also, the extent to which the mean estimate of a treatment effect is free from random error. A narrow confidence interval indicates a more precise estimate of effect than a wide confidence interval. A precise estimate is not necessarily an accurate one. (Contrast with accuracy.)

Precision medicine: the tailoring of health care (particularly diagnosis and treatment using drugs and biologics) to the particular traits of a patient that influence response to a health care intervention. Though it is sometimes used synonymously with personalized medicine, precision medicine tends to emphasize the use of patient molecular traits to tailor therapy.

Predictive validity refers to the ability to use differences in a measure of a construct to predict future events or outcomes. It may be considered a subtype of criterion validity.

Predictive value negative: see negative predictive value.

Predictive value positive: see positive predictive value.

Premarket Approval (PMA) Application: an application made by the sponsor of a medical device to the FDA for approval to market the device in US interstate commerce. The application includes information documenting the safety and efficacy/effectiveness of the device.

Prevalence: the number of people in a population with a specific disease or condition at a given time, usually expressed as a ratio of the number of affected people to the total population. (Contrast with incidence.)

Primary data methods involve collection of original data, including from randomized controlled trials, observational studies, case series, etc. (Contrast with integrative methods.)

Probability distribution: portrays the relative likelihood that a range of values is the true value of a treatment effect. This distribution often appears in the form of a bell-shaped curve. An estimate of the most likely true value of the treatment effect is the value at the highest point of the distribution. The area under the curve between any two points along the range gives the probability that the true value of the treatment effect lies between those two points. Thus, a probability distribution can be used to determine an interval that has a designated probability (e.g., 95%) of including the true value of the treatment effect.

Prospective study: a study in which the investigators plan and manage the intervention of interest in selected groups of patients. As such, investigators do not know what the outcomes will be when they undertake the study. (Contrast with retrospective study.)

Publication bias: unrepresentative publication of research reports that is not due to the quality of the research but to other characteristics, e.g., tendencies of investigators and sponsors to submit, and publishers to accept, “positive” research reports, e.g., ones that detect beneficial treatment effects of a new intervention. Prospective registration of clinical trials and efforts to ensure publication of “negative” trials are two methods used to manage publication bias. (Contrast with reporting bias.)

Quality-adjusted life year (QALY): a unit of health care outcomes that adjusts gains (or losses) in years of life subsequent to a health care intervention by the quality of life during those years. QALYs can provide a common unit for comparing cost-utility across different interventions and health problems. Analogous units include disability-adjusted life years (DALYs) and healthy-years equivalents (HYEs).
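The quality adjustment behind QALYs can be shown with a minimal sketch. The utility weights (0 = death, 1 = full health) and durations below are invented for illustration.

```python
# Hedged sketch: QALYs for two hypothetical treatment paths.
# Utility weights and durations are invented for illustration.

def qalys(path):
    """Sum of (years lived x quality-of-life weight) over each health state."""
    return sum(years * utility for years, utility in path)

# Treatment A: 10 years at utility 0.6; Treatment B: 8 years at utility 0.9.
treatment_a = [(10, 0.6)]
treatment_b = [(8, 0.9)]

print(qalys(treatment_a))  # 6.0
print(qalys(treatment_b))  # 7.2
```

Here the shorter survival path yields more QALYs because of its higher quality of life, which is exactly the tradeoff the unit is designed to capture.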

Quality assessment: a measurement and monitoring function of quality assurance for determining how well health care is delivered in comparison with applicable standards or acceptable bounds of care.

Quality assurance: activities intended to ensure that the best available knowledge concerning the use of health care to improve health outcomes is properly implemented. This involves the implementation of health care standards, including quality assessment and activities to correct, reduce variations in, or otherwise improve health care practices relative to these standards.

Quality of care: the degree to which health care is expected to increase the likelihood of desired health outcomes and is consistent with standards of health care. (See also quality assessment and quality assurance.)

Random error (or random variation): the tendency for the estimated magnitude of a parameter (e.g., based on the average of a sample of observations of a treatment effect) to deviate randomly from the true magnitude of that parameter. Random error is due to chance alone; it is independent of the effects of systematic biases. In general, the larger the sample size, the lower the random error of the estimate of a parameter. As random error decreases, precision increases.

Randomization: a technique of assigning patients to treatment and control groups that is based only on chance distribution. It is used to diminish patient selection bias in clinical trials. Proper randomization of patients is an indifferent yet objective technique that tends to neutralize patient prognostic factors by spreading them evenly among treatment and control groups. Randomized assignment is often based on computer-generated tables of random numbers. (See selection bias.)

Randomized controlled trial (RCT): an experiment (and therefore a prospective study) in which investigators randomly assign an eligible sample of patients to one or more treatment groups and a control group and follow patients' outcomes. (Also known as randomized clinical trial.)

Randomized-withdrawal trial: a form of “enriched” clinical trial design in which patients who respond favorably to an investigational intervention are then randomized to continue receiving that intervention or placebo. The study endpoints are return of symptoms or the ability to continue participation in the trial. The patients receiving the investigational intervention continue to do so only if they respond favorably, while those receiving placebo continue to do so only until their symptoms return. This trial design is intended to minimize the time that patients receive placebo.

Rapid HTA: a more focused and limited version of HTA that is typically performed in approximately 4-8 weeks. Rapid HTAs are done in response to requests from decision makers who seek information support for near-term decisions. They offer a tradeoff between providing less-than-comprehensive and more uncertain information in time to act on a decision versus comprehensive and more certain information when the opportunity to make an effective decision may have passed. Rapid HTAs may involve some or all of: fewer types of impacts assessed or evidence questions, searching fewer bibliographic databases, relying on fewer types of studies (e.g., only systematic reviews or only RCTs), use of shorter and more qualitative syntheses with categorization of results without meta-analyses, and more limited or conditional interpretation of findings or recommendations.

Recall bias: refers to under-reporting, over-reporting, or other misreporting of events or other outcomes by patients or investigators who are asked to report these after their occurrence.

Receiver operating characteristic (ROC) curve: a graphical depiction of the relationship between the true positive ratio (sensitivity) and false positive ratio (1 - specificity) as a function of the cutoff level of a disease (or condition) marker. ROC curves help to demonstrate how raising or lowering the cutoff point for defining a positive test result affects tradeoffs between correctly identifying people with a disease (true positives) and incorrectly labeling a person as positive who does not have the condition (false positives).
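The cutoff tradeoff that an ROC curve depicts can be sketched directly. The marker values below are invented for illustration.

```python
# Illustrative sketch of the sensitivity / (1 - specificity) tradeoff
# underlying an ROC curve. All marker values are invented.

def roc_point(diseased, healthy, cutoff):
    """Return (true positive rate, false positive rate) for a given cutoff,
    where a marker value >= cutoff counts as a positive test."""
    tpr = sum(v >= cutoff for v in diseased) / len(diseased)
    fpr = sum(v >= cutoff for v in healthy) / len(healthy)
    return tpr, fpr

diseased = [6.1, 7.3, 5.2, 8.0, 6.8]   # marker levels in diseased patients
healthy = [4.0, 5.5, 3.9, 4.8, 6.2]    # marker levels in healthy people

# Lowering the cutoff catches more true positives but also mislabels
# more healthy people as positive.
for cutoff in (7.0, 6.0, 5.0):
    tpr, fpr = roc_point(diseased, healthy, cutoff)
    print(cutoff, tpr, fpr)
```

Plotting the (fpr, tpr) pairs over all cutoffs traces out the ROC curve itself.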

Registries: any of a wide variety of repositories (usually electronic) of observations and related information about a group of patients (e.g., adult males living in a particular region), a disease (e.g., hypertension), an intervention (e.g., device implant), biological samples (e.g., tumor tissue), or other events or characteristics. Depending on criteria for inclusion in the database, the observations may have controls. As sources of observational data, registries can be useful for understanding real-world patient experience, including to complement safety and efficacy evidence from RCTs and other clinical trials. Registries can be used to determine the incidence of adverse events and to identify and follow-up with registered people at risk for adverse events. For determining causal relationships between interventions and outcomes, registries are limited by certain confounding factors (e.g., no randomization and possible selection bias in the process by which patients or events are recorded).

Reliability: the extent to which an observation that is repeated in the same, stable population yields the same result (i.e., test-retest reliability). Also, the ability of a single observation to distinguish consistently among individuals in a population.

Relative risk reduction: a type of measure of treatment effect that compares the probability of a type of outcome in the treatment group with that of a control group, i.e.: (Pc - Pt) ÷ Pc. For instance, if the results of a trial show that the probability of death in the control group was 25% and the probability of death in the treatment group was 10%, the relative risk reduction would be: (0.25 - 0.10) ÷ 0.25 = 0.6. (See also absolute risk reduction, number needed to treat, and odds ratio.)

Reporting bias: refers to systematic differences between reported and unreported findings, including, e.g., differential reporting of outcomes between comparison groups and incomplete reporting of study findings. Techniques for diminishing reporting bias include thorough reporting of outcomes consistent with outcome measures specified in the study protocol, attention to documentation and rationale for any post-hoc analyses not specified prior to the study, and reporting of literature search protocols and results for review articles. Differs from publication bias, which concerns the extent to which all relevant studies on a given topic proceed to publication.

Retrospective study: a study in which investigators select groups of patients that have already been treated and analyze data from the events experienced by these patients. These studies are subject to bias because investigators can select patient groups with known outcomes. (Contrast with prospective study.)

Safety: a judgment of the acceptability of risk (a measure of the probability of an adverse outcome and its severity) associated with using a technology in a given situation, e.g., for a patient with a particular health problem, by a clinician with certain training, or in a specified treatment setting.

Sample size: the number of patients studied in a trial, including the treatment and control groups, where applicable. In general, a larger sample size decreases the probability of making a false-positive error (α) and increases the power of a trial, i.e., decreases the probability of making a false-negative error (β). Large sample sizes decrease the effect of random error on the estimate of a treatment effect.
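The relationship among sample size, α, and power described above can be sketched with the standard normal approximation for comparing two means. This is a simplified formula under assumed normality and equal group sizes, not a substitute for a formal power calculation.

```python
# Hedged sketch of the link between sample size, alpha, and power, using
# the normal approximation for comparing two means (two-sided test with
# equal group sizes). Inputs below are invented for illustration.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate patients needed per group to detect a true difference
    `delta` in a continuous outcome with standard deviation `sigma`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = z.inv_cdf(power)            # desired power (1 - beta)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Halving the detectable difference roughly quadruples the required sample.
print(n_per_group(delta=10, sigma=20))  # larger effect, smaller trial
print(n_per_group(delta=5, sigma=20))   # smaller effect, larger trial
```

The sketch reflects the tradeoff in the definition: for fixed α and power, detecting a smaller true effect demands a larger trial.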

Selection bias: refers to systematic distortions in assigning patients to intervention and control groups. This bias can result in baseline differences between the groups that could affect their prognoses and bias their treatment outcomes. In clinical trials, allocation concealment and randomization of treatment assignment are techniques for managing selection bias.

Sensitivity: an operating characteristic of a diagnostic test that measures the ability of a test to detect a disease (or condition) when it is truly present. Sensitivity is the proportion of all diseased patients for whom there is a positive test, determined as: [true positives ÷ (true positives + false negatives)]. (Contrast with specificity.)

Sensitivity analysis: a means to determine the robustness of a mathematical model or analysis (such as a cost-effectiveness analysis or decision analysis) that tests a plausible range of estimates of key independent variables (e.g., costs, outcomes, probabilities of events) to determine if such variations make meaningful changes in the results of the analysis. Sensitivity analysis also can be performed for other types of studies, e.g., clinical trial analysis (to see if inclusion/exclusion of certain data changes results) and meta-analysis (to see if inclusion/exclusion of certain studies changes results).

Series: an uncontrolled study (prospective or retrospective) of a series (succession) of consecutive patients who receive a particular intervention and are followed to observe their outcomes. (Also known as case series or clinical series or series of consecutive cases.)

Specificity: an operating characteristic of a diagnostic test that measures the ability of a test to exclude the presence of a disease (or condition) when it is truly not present. Specificity is the proportion of non-diseased patients for whom there is a negative test, expressed as: [true negatives ÷ (true negatives + false positives)]. (Contrast with sensitivity.)

Statistical power: see power.

Statistical significance: a conclusion that an intervention has a true effect, based upon observed differences in outcomes between the treatment and control groups that are sufficiently large so that these differences are unlikely to have occurred due to chance, as determined by a statistical test. Statistical significance indicates the probability that the observed difference was due to chance if the null hypothesis is true; it does not provide information about the magnitude of a treatment effect. (Statistical significance is necessary but not sufficient for demonstrating clinical significance.)

Statistical test: a mathematical formula (or function) that is used to determine if the difference in outcomes between a treatment and control group is great enough to conclude that the difference is statistically significant. Statistical tests generate a value that is associated with a particular P value. Among the variety of common statistical tests are: F, t, Z, and chi-square. The choice of a test depends upon the conditions of a study, e.g., what type of outcome variable is used, whether or not the patients were randomly selected from a larger population, and whether it can be assumed that the outcome values of the population have a normal distribution or other type of distribution.

Surrogate endpoint: a measure that is used as a substitute for a clinical endpoint of interest such as morbidity and mortality. They are used in clinical trials when it is impractical to measure the primary endpoint during the course of the trial, such as when observation of the clinical endpoint would require long follow-up. A surrogate endpoint is assumed, based on scientific evidence, to be a valid and reliable predictor of a clinical endpoint of interest. Examples are decrease in blood pressure as a predictor of decrease in strokes and heart attacks in hypertensive patients, increase in CD4+ cell counts as an indicator of improved survival of HIV/AIDS patients, and a negative culture as a predictor of cure of a bacterial infection. (See also biomarker and intermediate endpoint.)

Systematic review: a form of structured literature review that addresses a question formulated to be answered by analysis of evidence. It involves objective means of searching the literature, applying predetermined inclusion and exclusion criteria to this literature, critically appraising the relevant literature, and extracting and synthesizing data from the evidence base to formulate findings.

Technological imperative: the inclination to use a technology that has potential for some benefit, however marginal or unsubstantiated, based on an abiding fascination with technology, the expectation that new is better, and financial and other professional incentives.

Technology: the application of scientific or other organized knowledge--including any tool, technique, product, process, method, organization or system--to practical tasks. In health care, technology includes drugs; diagnostics, indicators and reagents; devices, equipment and supplies; medical and surgical procedures; support systems; and organizational and managerial systems used in prevention, screening, diagnosis, treatment and rehabilitation.

Teleoanalysis: an analysis that combines data from different types of study. In biomedical and health care research, specifically, it is “the synthesis of different categories of evidence to obtain a quantitative general summary of (a) the relation between a cause of a disease and the risk of the disease and (b) the extent to which the disease can be prevented. Teleoanalysis is different from meta-analysis because it relies on combining data from different classes of evidence rather than one type of study” (Wald 2003).

Time lag bias: a form of bias that may affect identification of studies to be included in a systematic review; occurs when the time from completion of a study to its publication is affected by the direction (positive vs. negative findings) and strength (statistical significance) of its results.

Treatment effect: the effect of a treatment (intervention) on outcomes, i.e., attributable only to the effect of the intervention. Investigators seek to estimate the true treatment effect based on the difference between the observed outcomes of a treatment group and a control group. Commonly expressed as a difference in means (for continuous outcome variables); risk ratio (relative risk), odds ratio or risk difference (for binary outcomes such as mortality or health events); or number needed to treat to benefit the outcome of one person. (Also known as effect size.)

Type I error: same as false-positive error.

Type II error: same as false-negative error.

Utility: the relative desirability or preference (usually from the perspective of a patient) for a specific health outcome or level of health status.

Validity: The extent to which a measure or variable accurately reflects the concept that it is intended to measure. See internal validity and external validity.
