Evaluation

The Population Health Impact Institute's goal is to improve the validity of evaluations of defined population health programs. Our goal is not "scientific proof," but rather the use of "evidence-based" principles, grounded in the scientific method and specifically designed for the real-world study of population health programs.

This page has two parts: 1) Principles for Selecting an Evaluator, and 2) Five Evaluation Principles.

Principles to Consider When Selecting a Program Evaluator for Population Health Management Programs

The quality of the evidence in all science, including evidence-based medicine, rests on methods that are transparent and verifiable. The Population Health Impact Institute advocates a similarly evidence-based approach to evaluating the clinical and economic value of Population Health Management (PHM) programs. To help guide your choice of a program evaluator capable of a sound assessment of population health programs, we advise consideration of these eight principles:

  1. Populations Defined: "Is the intervention population defined?"
       The inclusion and exclusion criteria of the populations to be studied should be well described and reproducible (a minimal sketch follows this list).
  2. Process Metrics Described: "Are key process metrics described / defined?"
       The metrics used by the program evaluator to represent key processes of the PHM intervention must include well-described (and reproducible) definitions for numerators and denominators.
  3. Outcome Metrics Described: "Are key outcome metrics described / defined?"
       The metrics used by the program evaluator to represent the key objectives targeted by the PHM program (clinical, economic, satisfaction, functionality, etc.) should have well-described (and reproducible) definitions for numerators and denominators.
  4. Impact Method Transparency: "Is the impact methodology transparent / described?"
       The method(s), including equations, proposed to demonstrate that the PHM program itself, rather than other factors, caused the outcome(s) must be sufficiently described and reproducible.
  5. Validity:  "Do the metrics and methods used on the defined population meet minimum requirements for internal and external validity*?"
       The evaluator should address issues related to measurement error, random error, systematic bias, and inferential error (collectively known as internal validity). In addition, the evaluator should address issues related to the relevance, or generalizability, of the study to real-world settings (known as external validity).
  6. Independent Verification Potential: "Can the results be independently verified for credibility and validity*?"
       Results obtained from the metrics and methods used on the population(s) studied should be capable of independent verification for validity* by an impartial third party (technically, per principles 1-4, and legally).
  7. Qualifications Disclosed: "Does the evaluator have the necessary skill sets to perform credible impact assessments?"
       The evaluator should have a confirmable background (education, experience, and/or credentials) in quantitative analysis and research design.
  8. Interests Disclosed: "Are potential conflicts of interests disclosed?"
       A statement regarding potential conflicts of interest should be made; in the event of no conflicts, the statement "nothing to disclose" should be made.

 *"Validity" can be assessed by quality and strength of evidence scoring systems.

© 2006 Population Health Impact Institute. Loveland, Ohio. All Rights Reserved. Not for commercial use. The PHI Institute encourages the non-commercial use of these principles, with attribution to the Population Health Impact Institute. The Population Health Impact Institute holds a copyright to these principles and can rescind or alter them at any time. They may not be altered by anyone other than the PHI Institute. The PHI Institute reserves the exclusive right to use them for public and commercial use and in all benchmarking studies.

Five Evaluation Principles

(summary)

  1. Metric Quality*: The metrics used in value attribution studies need to be described in sufficient detail that an independent, skilled person can replicate the methods and achieve the same results.
  2. Equivalence Quality: The reference group selected should have the same expected outcome as the intervention group, had the latter not experienced the intervention.
  3. Statistical Quality: The variation inherent in significant risk factors among the individuals who comprise the intervention and reference populations must be taken into account.
  4. Conceptual Quality: The hypothesized pathway (the intervention "causes" the proximate outcome, which in turn "causes" the ultimate outcome) must be justified by prior evidence and logic, and then tested for soundness.
  5. Generalizability Quality: Evaluators need to clearly explain why their results, and the methods used to generate those results, are applicable to other defined populations.

Notes to professional health service researchers: 

The first four principles relate to "internal validity" and the fifth to "external validity." In addition, real-world experience has taught us to draw a clear distinction between random error (what some call "precision") and systematic error (what some call "validity") rather than treating them as subsets of measurement error; we have elevated each to its own principle, under its own name. Moreover, the key concept of "equivalence" is often seen as synonymous with "randomization," "random assignment," or "randomized clinical trials." We firmly believe that the equivalence principle must be tested in every study design (with a description of strengths and weaknesses) and must NOT be assumed to have been achieved in randomized clinical trials (except perhaps for unmeasured covariates).

Thus, the credibility of a study should be judged not on the study design itself (as is often done: expert opinion, observational, quasi-experimental, experimental) but on the extent to which these principles are shown to have been achieved (or not), regardless of the design employed.
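
One simple, design-agnostic equivalence check is a standardized mean difference (SMD) computed for each baseline risk factor across the intervention and reference groups. The sketch below uses simulated data, and the 0.1 threshold noted in the comment is a common convention rather than a PHI Institute standard.

    import numpy as np

    def standardized_difference(x_int, x_ref):
        """Standardized mean difference for one baseline covariate."""
        pooled_sd = np.sqrt((x_int.var(ddof=1) + x_ref.var(ddof=1)) / 2)
        return (x_int.mean() - x_ref.mean()) / pooled_sd

    rng = np.random.default_rng(0)
    baseline_hba1c_int = rng.normal(8.1, 1.2, size=500)   # intervention group (simulated)
    baseline_hba1c_ref = rng.normal(7.8, 1.2, size=500)   # reference group (simulated)

    smd = standardized_difference(baseline_hba1c_int, baseline_hba1c_ref)
    print(f"baseline HbA1c SMD: {smd:.2f}")  # values above ~0.1 suggest imbalance

The same check applies whether the groups were formed by randomization, matching, or convenience, which is precisely the point: equivalence is tested, not assumed from the design label.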

*Changed by the Board of Directors from "Data Quality" (10-1-06).

Five Evaluation Principles

(detail)

The Five Evaluation Principles are used to score a study's VALIDITY with the PHI Institute's scoring template. This official scoring is performed by the PHI Institute's expert "Delphi Panel." Coupled with the Evaluation Ethics score for TRANSPARENCY, this produces a final CREDIBILITY score.

For an example of a visual display of summary VALIDITY scoring, see the PHI Institute's "Validity Pentagon℠."
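
The PHI Institute's actual scoring template and weights are not reproduced here; the sketch below only illustrates the arithmetic of combining five per-principle VALIDITY scores with a TRANSPARENCY score into a single CREDIBILITY score. All scores and weights are invented for the example.

    # All scores and weights below are hypothetical, for illustration only.
    validity_scores = {            # one 0-10 panel score per evaluation principle
        "metric": 8, "equivalence": 6, "statistical": 7,
        "conceptual": 7, "generalizability": 5,
    }
    transparency_score = 9         # from the Evaluation Ethics (TRANSPARENCY) review

    validity = sum(validity_scores.values()) / len(validity_scores)   # pentagon average
    credibility = 0.7 * validity + 0.3 * transparency_score           # assumed weights
    print(f"VALIDITY {validity:.1f}/10, CREDIBILITY {credibility:.1f}/10")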

© 2005 Population Health Impact Institute. Loveland, Ohio. All Rights Reserved. Not for commercial use. The PHI Institute encourages the non-commercial use of these five principles, with attribution to the PHI Institute. The Institute reserves the exclusive right to use them for public and commercial use and in all benchmarking studies.

Evaluation Principles

  1. Metric Quality: Goal: To reduce "measurement error."

    This principle requires that evaluators use comparable metrics and clearly describe the criteria used to define the population, the time horizon, source of data, and definitions of all metrics, in sufficient detail that a skilled third party could follow the method and replicate results.
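
    As a sketch of the level of detail this principle asks for, the fragment below computes one process metric with its numerator and denominator spelled out. The claims layout and service codes are hypothetical, not a prescribed format.

        import pandas as pd

        # Hypothetical claims extract; column names and codes are illustrative only.
        claims = pd.DataFrame({
            "member_id": ["a", "a", "b", "c", "c"],
            "service":   ["hba1c_test", "office_visit", "office_visit",
                          "hba1c_test", "hba1c_test"],
        })
        eligible = {"a", "b", "c"}   # members meeting the published inclusion criteria

        # Denominator: all eligible members.
        # Numerator: eligible members with at least one HbA1c test in the period.
        tested = set(claims.loc[claims["service"] == "hba1c_test", "member_id"]) & eligible
        rate = len(tested) / len(eligible)
        print(f"HbA1c testing rate: {len(tested)}/{len(eligible)} = {rate:.0%}")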

  2. Equivalence Quality: Goal: To reduce "differential error" (i.e., bias).

    This principle requires the use of a reference population to compare with the intervention population. Equivalence means that the reference group should be expected to have the same outcomes as the intervention population, were the latter not receiving the intervention. Risk factors that can influence outcomes may be either measured or unmeasured. Thus, approaching true equivalence requires careful selection of the defined population in both groups (to reduce confounding at the outset) and/or some mathematical adjustment after population selection (to reduce confounding after selection).
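
    One common form of the "mathematical adjustment after selection" mentioned above is stratification on a measured risk factor. The sketch below uses simulated data and a hypothetical severity score; it illustrates one adjustment technique, not the only acceptable one.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        n = 2000
        df = pd.DataFrame({
            "group":      rng.choice(["intervention", "reference"], size=n),
            "risk_score": rng.integers(1, 4, size=n),   # hypothetical severity stratum 1-3
        })
        # Simulated cost outcome driven by severity, so a crude comparison is confounded.
        df["cost"] = 1000 * df["risk_score"] + rng.normal(0, 200, size=n)

        # Compare within strata, then weight by the intervention group's severity mix.
        by_stratum = df.groupby(["risk_score", "group"])["cost"].mean().unstack()
        weights = df.loc[df["group"] == "intervention", "risk_score"].value_counts(normalize=True)
        adjusted_diff = ((by_stratum["intervention"] - by_stratum["reference"]) * weights).sum()
        print(f"risk-adjusted cost difference: {adjusted_diff:.0f}")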

  3. Statistical Quality: Goal: To reduce "non-differential error" (i.e., random error).

    This principle requires that evaluators use appropriate statistical tests, adequate population sizes (i.e., power calculations), and inferential models that are consistent with the underlying assumptions of the models selected.
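
    A minimal sketch of one such check: a normal-approximation sample-size (power) calculation for a two-group comparison of means. The effect size and standard deviation are assumptions the evaluator would need to justify.

        from scipy.stats import norm

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            """Approximate n per group for a two-sided, two-sample comparison of means."""
            z_alpha = norm.ppf(1 - alpha / 2)
            z_beta = norm.ppf(power)
            return int(2 * ((z_alpha + z_beta) * sd / delta) ** 2) + 1

        # e.g., to detect a 0.5-point HbA1c difference, assuming a common SD of 1.2:
        print(n_per_group(delta=0.5, sd=1.2))   # roughly 91 per group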

  4. Conceptual Quality: Goal: To reduce "inferential error."

    This principle requires that evaluators provide a defensible (empirical and/or logical) and testable intervention pathway that starts with the intervention and progresses to both proximate and ultimate outcomes.
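
    The sketch below tests such a pathway link by link, on simulated data with an assumed chain (enrollment raises self-care, which lowers HbA1c). Real evaluations would use stronger causal methods; the point here is only that each hypothesized link is checked rather than assumed.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(2)
        n = 1000
        enrolled = rng.integers(0, 2, size=n).astype(float)          # intervention exposure
        self_care = 5 + 1.5 * enrolled + rng.normal(0, 1, size=n)    # proximate outcome (simulated)
        hba1c = 9 - 0.4 * self_care + rng.normal(0, 0.8, size=n)     # ultimate outcome (simulated)

        # Test each hypothesized link separately; a break anywhere undermines the chain.
        for name, x, y in [("intervention -> proximate", enrolled, self_care),
                           ("proximate -> ultimate", self_care, hba1c)]:
            r, p = pearsonr(x, y)
            print(f"{name}: r = {r:+.2f}, p = {p:.3g}")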

  5. Generalizability Quality: Goal: To reduce "transmission error" (i.e., extrapolation error).

    This principle requires evaluators and third-party reviewers to justify the relevance of both the methods and the results to a comparable population.
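
    One way a reviewer might make that justification concrete is to compare the study population's mix on key characteristics against the target population's. The strata and proportions below are hypothetical.

        import pandas as pd

        # Hypothetical age mixes for the studied and target populations.
        study  = pd.Series({"age_18_44": 0.30, "age_45_64": 0.50, "age_65_plus": 0.20})
        target = pd.Series({"age_18_44": 0.45, "age_45_64": 0.40, "age_65_plus": 0.15})

        comparison = pd.DataFrame({"study": study, "target": target})
        comparison["abs_gap"] = (comparison["study"] - comparison["target"]).abs()
        print(comparison)   # large gaps flag strata where results may not transfer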

Notice:

The evaluation and ethical principles, standards and measures ("Intellectual Property" or IP) were developed by and are owned by the Population Health Impact Institute ("PHI Institute"). These IP are not to be construed as clinical guidelines and do not establish a standard of scientific-based clinical care. The PHI Institute makes no representations, warranties, or endorsement about the quality of any organization that uses or reports these IP and the PHI Institute has no liability to anyone who relies on such IP.

The Population Health Impact Institute holds a copyright to these IP and can rescind or alter them at any time. These IP may not be altered by anyone other than the PHI Institute. The PHI Institute retains the exclusive right for public, commercial, and benchmarking use.

Anyone desiring to use or reproduce these IP without modification for a non-commercial purpose may do so without obtaining any approval from the PHI Institute, as long as attribution to the PHI Institute is clearly noted ("Used with Permission of the Population Health Impact Institute"). All commercial uses must be approved in writing by the PHI Institute and are subject to a license at the discretion of the PHI Institute. © 2005. Population Health Impact Institute. All Rights Reserved.

Principle Source Material for the Five Principles

(as specifically applied to disease management (DM) and other defined population health programs)

  1. "Metric Quality" Principle 

    This was based, in part, on the discussion of measurement error in the selection of groups and the selection of metrics in: Wilson TW, Gruen J, Thar W, Fetterolf D, Patel M, Popiel RG, Lewis A, Nash DB. Assessing ROI of defined-population disease management interventions. Joint Commission Journal on Quality and Safety. 2004;30(11):614-21.

    The importance of metric comparability was also discussed in: Wilson TW. Evaluating ROI in State Medicaid Programs. Issue Brief: State Coverage Initiatives. November 2003. Published by AcademyHealth (a program of the Robert Wood Johnson Foundation). http://www.statecoverage.net/pdf/issuebrief1103.pdf

  2. "Equivalence Quality " Principle

    This principle relates to the assertion of comparability between the reference and intervention groups, with or without randomization, as discussed in: Wilson TW, MacDowell M. Framework for Assessing Causality in Disease Management Programs: Principles. Disease Management. 2003;6:143-58.

  3. "Statistical Quality"  Principle

    For a brief critical discussion of different types of statistical inference, see: Rothman KJ, Greenland S. Approaches to Statistical Testing. In: Modern Epidemiology. 2nd ed. Philadelphia: Lippincott-Raven; 1998: 183-99.

  4. "Conceptual Quality" Principle

    A practical discussion of causal-pathway thinking can be found in: Wilson TW, MacDowell M. Framework for Assessing Causality in Disease Management Programs: Principles. Disease Management. 2003;6:143-58.

  5. "Generalizability" Principle:

    See the discussion of "external validity" as applied to field studies in: Cook TD, Campbell DT. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston: Houghton Mifflin; 1979.

Additional Important References on Five Evaluation Principles

On all Five Principles:

Dove HG, Duncan I. An Introduction to Care Management Interventions and Their Implications for Actuaries. Paper 2 of 6: Actuarial Issues in Care Management Evaluations. Society of Actuaries; August 10, 2004.

Fitzner K, Sidorov J, Fetterolf D, Wennberg D, Eisenberg E, Cousins M, Hoffman J, Haughton J, Charlton W, Krause D, Woolf A, McDonough K, Todd W, Fox K, Juster I, Stiefel M, Villagra V, Duncan I. Principles for Assessing Disease Management Outcomes. Disease Management. 2004;7:191-201.

Linden A, Roberts N. A user's guide to the disease management literature: recommendations for reporting and assessing program outcomes. American Journal of Managed Care. 2005;11(2):81-90.

On Metric Quality Principle:

Duncan I. Accuracy in the Assessment of Return on Investment of Defined Population Interventions. Joint Commission Journal on Quality and Safety. 2005;31(6):357.

Wilson T, Thar W, Gruen J. Reply. Joint Commission Journal on Quality and Safety. 2005;31(6):358.