A. Attributing Impact to HTA Reports

The impact of an HTA depends on diverse factors, among them whether target audiences have legal, contractual, or administrative obligations to comply with the HTA findings or recommendations (Anderson 1993; Ferguson and Dubinsky 1993; Gold 1993). Regulatory agency approvals or clearances for marketing new drugs and devices (e.g., by the FDA in the US) translate directly into binding policy. In the US, HTAs conducted by AHRQ at the request of CMS inform technology coverage policies for the Medicare program, although CMS is not obligated to comply with the findings of an AHRQ HTA. The impacts of NIH consensus development conference statements, which were not statements of government policy, were inconsistent and difficult to measure; their impact appeared to depend on factors intrinsic to particular topics, on the consensus development process itself, and on a multitude of contextual factors (Ferguson 1993; Ferguson 2001).

The task of measuring the impact of HTA can range from straightforward to infeasible. As noted above, even when an intended change does occur, it may be difficult or impossible to attribute that change to the HTA. A national-level assessment that recommends increased use of a particular intervention for a given clinical problem may be followed by a documented change in behavior consistent with that recommendation. However, the recommendation may arrive at a time when the desired behavior change is already underway, third-party payment policy is already shifting in favor of the technology, industry is mounting a strong marketing effort, or the results of a definitive RCT are being made public, any of which could account for the observed change.
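For illustration only, the short simulation below (not from the source; all numbers are hypothetical) shows how a naive pre/post comparison can misattribute a pre-existing secular trend to an HTA report: utilization that was already rising before the report's release produces a large pre/post difference even though, in this simulation, the report itself adds nothing.

```python
# Illustrative sketch: a secular trend already underway mimics HTA impact.
import numpy as np

rng = np.random.default_rng(0)

months = np.arange(48)            # 4 years of monthly utilization data
report_month = 24                 # hypothetical HTA report release date
trend = 100 + 1.5 * months        # uptake already rising before the report
utilization = trend + rng.normal(0, 5, size=months.size)

pre = utilization[months < report_month]
post = utilization[months >= report_month]

# A naive pre/post comparison "detects" a large effect...
print(f"naive pre/post difference: {post.mean() - pre.mean():.1f} per month")

# ...but extrapolating the pre-report trend explains nearly all of it.
slope, intercept = np.polyfit(months[months < report_month], pre, deg=1)
expected_post = intercept + slope * months[months >= report_month]
print(f"excess over pre-existing trend: {(post - expected_post).mean():.1f} per month")
```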

As with attributing changes in patient outcomes to a technological intervention, the ability to demonstrate that the results of an HTA have had an impact depends on the conditions under which the findings were made known and on the methodological approach used to determine the impact. Evaluations of the impact of an HTA are often unavoidably observational in nature; under some circumstances, however, quasi-experimental or experimental evaluations have been used (Goldberg 1994). Impact evaluations are more likely to detect a true causal connection between an HTA report and a change in policy or behavior to the extent that they are prospective, collect data both before and after report dissemination, and direct dissemination to clearly identified groups with well-matched controls (or at least adjust retrospectively for reported exposure to the dissemination). Even so, generalizing from one experience to others may be impractical, as it is difficult to describe and replicate the conditions of a particular HTA report dissemination.
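To make the pre/post, exposed-versus-control logic concrete, the following sketch (a hypothetical illustration, not a method described in the source) computes a simple difference-in-differences estimate: the change in a group exposed to directed dissemination minus the change in a matched control group, which nets out a secular trend shared by both groups.

```python
# Hedged sketch of a difference-in-differences impact evaluation.
# Groups, effect sizes, and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 200  # clinicians per group

def simulate(baseline, trend, report_effect):
    """Rates of guideline-consistent practice before/after dissemination."""
    pre = rng.normal(baseline, 0.05, n)
    post = rng.normal(baseline + trend + report_effect, 0.05, n)
    return pre, post

# Both groups share the same secular trend; only the exposed group
# receives the directed dissemination (hypothetical +0.10 effect).
exposed_pre, exposed_post = simulate(baseline=0.40, trend=0.05, report_effect=0.10)
control_pre, control_post = simulate(baseline=0.40, trend=0.05, report_effect=0.00)

# Change in exposed minus change in controls nets out the shared trend.
did = ((exposed_post.mean() - exposed_pre.mean())
       - (control_post.mean() - control_pre.mean()))
print(f"difference-in-differences estimate: {did:.3f}")
```

Because both simulated groups share the same underlying trend, the estimate isolates the hypothetical dissemination effect; with real observational data, unmeasured differences between exposed and control groups would still threaten this attribution.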

results matching ""

    No results matching ""