
P T. 2013;38(8):465-472, 482-483

Transparency in Evidence Evaluation And Formulary Decision-Making

From Conceptual Development to Real-World Implementation
Bonnie B. Dean MPH, PhD
Kelly J. Ko PhD
Jennifer S. Graff PharmD
A. Russell Localio JD, MA, MPH, MS, PhD
Rolin Wade RPh, MS
Robert W. Dubois MD, PhD

Background:

Establishing a better understanding of the relationship between evidence evaluation and formulary decision-making has important implications for patients, payers, and providers. The goal of our study was to develop and test a structured approach to evidence evaluation to increase clarity, consistency, and transparency in formulary decision-making.

Study Design:

The study comprised three phases. First, an expert panel identified key constructs in formulary decision-making and created an evidence-assessment tool. Second, using a balanced incomplete block design, the tool was validated by a large group of decision-makers. Third, the tool was pilot-tested in a real-world P&T committee environment.

Methods:

An expert panel identified key factors associated with formulary access by rating the level of access that they would give a drug in various hypothetical scenarios. These findings were used to formulate an evidence-assessment tool that was externally validated by surveying a larger sample of decision-makers. Last, the tool was pilot-tested in a real-world environment where P&T committees used it to review new drugs.

Results:

Survey responses indicated that a structured approach in the formulary decision-making process could yield greater clarity, consistency, and transparency in decision-making; however, pilot-testing of the structured tool in a real-world P&T committee environment highlighted some of the limitations of our structured approach.

Conclusion:

Although a structured approach to formulary decision-making is beneficial for patients, health care providers, and other stakeholders, this benefit was not realized in a real-world environment. A method to improve clarity, consistency, and transparency is still needed.


Recent attention has been focused on the funding and governance of comparative effectiveness research (CER), but less emphasis has been placed on the relationship between evidence and the resulting medical policy or formulary decisions. Understanding these relationships is important in managed care, because P&T (formulary) committees routinely translate evidence into policy as they make decisions about access to drugs. Moreover, because the Patient Protection and Affordable Care Act (PPACA) of 2010 calls for the creation of health insurance exchanges in which people can select from a variety of health plans with varying formulary coverage,1 a better understanding of the relationship between evidence-based medicine and coverage is needed.

However, the decision-making process used by P&T committees is far from clear. According to Eddy,2,3 organizations consider empirical evidence relevant to the formulary decision-making process, yet they also rely on subjective preferences, which can account for some of the differences observed among various institutions.2–4 Moreover, real-world decision-making is influenced by a multitude of factors.5,6 For example, access to a new therapeutic agent to treat a fatal cancer might require a level of evidence (such as a lower risk–benefit threshold) that differs from the level of evidence required for a new therapy for a nonfatal condition associated with other acceptable treatments (for which few adverse effects would be acceptable).

Other factors include concerns about the safety of similar available agents and the direct costs, cost offsets, or total cost of care of a new drug compared with current care. Without a well-defined, consistent approach to assessing evidence, access may vary from one institution to another, from one therapeutic category to another, or at various points in time. For patients who must evaluate multiple health plans, it is important that they understand what evidence will be considered for conditions that they currently have or that they predict might develop. These inconsistencies make it more difficult for patients and health care providers to access similar treatment options when switching from or choosing a health plan. For employers and unions, understanding the decision-making process helps them comprehend how their benefits will be administered. Increased transparency in evidence evaluation can also help drug manufacturers design studies and select endpoints that are important to payers as they make formulary decisions. In all, this unexplained variability is disconcerting to patients, providers, and employers, as well as manufacturers.7–9

Although the evidence that is reviewed and the final decisions about a drug are often documented, assumptions and other considerations are not.5 As a result, various factors that influence the decision-making process might be applied differently.3 From the perspective of those affected by the decision-making process, there is a need to improve the process in terms of:10

  • clarity: knowing which factors are influential.
  • consistency: understanding whether comparable scenarios are handled in a similar fashion at different times with various P&T committee members.
  • transparency: ensuring that those involved in (and perhaps those external to) the decision-making process understand what was decided and why.

The purpose of our study was to test a structured approach to making decisions regarding formulary access. We hypothesized that this approach might increase the clarity, consistency, and transparency in the formulary decision-making process.

Methods

The study consisted of three phases:

  • In 2009, an expert panel was convened to identify key factors in making decisions about formulary access—efficacy (the magnitude of potential benefit and outcome), safety (concerns about the agent or similar agents), cost (relative to current standard of care), and evidence certainty (confidence in the evidence showing safety and efficacy based on the number, sample size, design, or consistency of the studies). These factors were then combined into a framework for consideration and resulted in an evidence-assessment tool (Figure 1, see page 468). The panel then rated hypothetical scenarios according to the likelihood of providing access, based on the corresponding magnitudes of efficacy, safety, cost, and evidence certainty (safety and efficacy).
  • After pilot-testing the tool, we tested its internal validity among a broad sample of formulary decision-makers in 2011.
  • We pilot-tested the tool in real-world P&T committee decision-making environments to evaluate its application to routine decisions made by the committees in 2011.
Except for the third phase, during which we used drugs under review by the actual P&T committees, all scenarios concerning decisions about formulary access involved the use of hypothetical drugs.

    Expert Panel

    The panel consisted of 10 advisors from diverse geographic locations with expertise in formulary management. Four participants represented managed care organizations (MCOs), two represented pharmacy benefit management (PBM) companies, and four were academic faculty members with experience in real-world decision-making. Of the four MCO panelists, two represented regional health plans and two represented national health plans; of the two PBM advisors, one had regional responsibilities and one had national responsibilities.

    A series of hypothetical scenarios encompassing drugs, procedures, and diagnostic tests was developed (Table 1). First, the panelists scored the significance of each endpoint; they then rated the likelihood of providing access for each scenario on a 9-point scale (1 = very unlikely, 9 = very likely). Access was considered “unlikely” for ratings between 1 and 3, “uncertain” for ratings between 4 and 6, and “likely” for ratings between 7 and 9.
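The mapping from the 9-point scale to the panel's three access categories can be sketched as a small function (the function name is ours, for illustration only):

```python
def access_category(rating):
    """Map a 1-9 likelihood-of-access rating to the panel's categories:
    1-3 -> unlikely, 4-6 -> uncertain, 7-9 -> likely."""
    if not 1 <= rating <= 9:
        raise ValueError("rating must be between 1 and 9")
    if rating <= 3:
        return "unlikely"
    if rating <= 6:
        return "uncertain"
    return "likely"
```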

    The goal was to determine how quality of evidence, when presented along with contextual factors (e.g., efficacy, safety, cost, and evidence certainty), affected the likelihood of providing access to health technologies. Using a modified Delphi approach,11 we identified multiple decision-making factors that could influence access. To test the influence of these factors, we created an evidence-assessment tool, using input from the expert panel, to be scored among a larger sample of decision-makers.

    Survey

    Using a ratio of one medical director for every two pharmacy directors, we recruited 84 participants who were involved in formulary decision-making to participate in the survey. Those who qualified received a survey asking them to rate the likelihood of formulary access based on hypothetical scenarios with varying degrees of efficacy, safety, cost, and evidence certainty. Upon completion of the survey, participants received a $150 gift card. The study was reviewed and approved by the Western Institutional Review Board, located in Olympia, Washington.

    Based on input from the expert panel, each survey included evidence related to the efficacy, safety, and cost of four hypothetical drugs in four clinical areas: hypertension, osteoporosis, Alzheimer’s disease, and breast cancer. To understand the contribution of each factor to the likelihood of granting access, we assessed a combination of efficacy, safety, and cost information for each of the four drugs (see Figure 1). Each combination of efficacy, safety, and cost information was then evaluated under six certainty scenarios, formed by crossing two levels of evidence certainty for efficacy (medium, high) with three levels for safety (low, medium, high), for a total of 192 possible scenarios for evaluation.

    To minimize rater burden, we implemented a balanced incomplete block design whereby raters were assigned a subset of scenarios (48) instead of all possible scenarios within the survey (192).12 Scenarios involving each of the four drugs were distributed for 84 respondents to elicit their responses to all combinations of each drug and scenario while ensuring that each respondent would have only 48 scenarios to rate.13,14 Respondents were asked to rate each scenario on a scale from 1 through 9, with 1 representing no access (drug not on formulary; a 100% copay) and 9 representing open access (drug on tier 1; a generic copay).
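The arithmetic of this allocation can be illustrated with a simplified scheme: partitioning the 192 scenarios into four blocks of 48 and assigning the 84 raters to blocks round-robin gives every scenario exactly 84 × 48 / 192 = 21 raters. This sketch reproduces only the counts (including the 4,032 possible ratings reported in the Results), not the published design; a true balanced incomplete block design12 also balances which scenarios appear together.

```python
N_RATERS, N_SCENARIOS, PER_RATER = 84, 192, 48

# Partition scenarios into 192/48 = 4 blocks; round-robin assignment of
# raters to blocks gives each scenario 84*48/192 = 21 raters.
blocks = [list(range(b * PER_RATER, (b + 1) * PER_RATER))
          for b in range(N_SCENARIOS // PER_RATER)]
assignment = {rater: blocks[rater % len(blocks)] for rater in range(N_RATERS)}

counts = [0] * N_SCENARIOS
for scen_ids in assignment.values():
    for s in scen_ids:
        counts[s] += 1

assert all(c == 21 for c in counts)   # every scenario rated 21 times
assert sum(counts) == 4032            # 84 raters x 48 ratings each
```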

    We calculated the frequency of agreement and disagreement among raters of the same scenarios. We defined disagreement as a response (along the 9-point scale), where at least one rater scored between 1 and 3 (1 = no access; 3 = low access) while at least one other rater scored the same scenario between 7 and 9 (7 = high access, 9 = open access). We then fit mixed-effects linear-regression models using maximum likelihood, with rater as a random effect, to estimate the association of scenario, cost, efficacy, safety, and evidence certainty (safety and efficacy) with the 9-point scale. Significant interactions between scenarios were retained in the model, and standardized mean scores (after adjustment for covariates) for each scenario were estimated and contrasted.
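The disagreement rule just described can be stated compactly; a minimal sketch (the function name is ours):

```python
def scenario_disagreement(ratings):
    """True when raters of one scenario 'disagree': at least one rated it
    between 1 and 3 (no/low access) while at least one other rated it
    between 7 and 9 (high/open access)."""
    return any(r <= 3 for r in ratings) and any(r >= 7 for r in ratings)
```

The mixed-effects model itself corresponds to a linear regression of the 9-point rating on scenario, cost, efficacy, safety, and evidence certainty with a random intercept per rater, as could be fit with, for example, statsmodels' MixedLM.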

    P&T Committee Pilot Test

    Our goal was to pilot-test the evidence-assessment tool with P&T committees while accounting for plan characteristics that could affect decision-making, such as regional versus national scope, geographic location, and commercial versus public coverage. On this basis, we recruited four P&T committees (two national, one regional, one state health plan). Although all four committees agreed to participate, only three completed pilot-testing (one national, one regional, and one state health plan).

    We worked with each committee to identify a future meeting and the drugs that would be reviewed at the meeting. We also identified one staff member within each committee to lead the pilot test. Each designated staff member was given detailed instructions, training, and relevant materials for implementation at the meeting. In addition, because each committee had a different drug under review, we worked with each committee to provide a customized evidence-assessment tool for that drug, using the same evidence tables that were developed for their standard review.

    The P&T committees were asked to first evaluate the drug and then decide about coverage using their standard process. After they reached a decision, they were asked to discuss and re-evaluate the same drug, using the evidence-assessment tool, which was populated with the same evidence that had been used during their initial decision.

    Following implementation, the committee members were asked to complete a brief 10-item evaluation examining ease of use, clarity of content, perceived benefits of the new approach, and suggestions for improvement. The members rated how strongly they agreed with various statements about the tool (1 = strongly disagree, 5 = neutral, 9 = strongly agree) (Figure 2). We debriefed a designated staff member (or sometimes more than one) from each committee during a structured interview to ascertain additional qualitative insights regarding the committee’s experience during the exercise.

    Results

    Expert Panel

    We hypothesized that greater importance of the endpoints achieved (quality of life, clinical response, and survival), a lack of alternative therapies, a lack of prior safety concerns with similar interventions, and decreasing cost would each increase access scores. In fact, we found a wide range of access ratings for the various scenarios, in which the influence of individual factors was not independent but context-sensitive. The likelihood of granting formulary access also varied within individual panelists across the hypothetical scenarios, as well as from one panelist to another for the same scenario.

    In particular, when evidence certainty was low, there were few differences in ratings among the panelists for the same scenario. However, differences among individual panelists became apparent as evidence certainty increased. For example, in a setting of low-certainty evidence, there was no apparent relationship between access and the importance of the endpoint; however, for high-certainty evidence, the importance of the endpoint was associated with greater access.

    Overall, we found that multiple factors influenced the decision-making process as well as the interdependency of these factors in a context-specific manner.

    Survey

    A total of 84 decision-makers agreed to participate in the survey, and 79 participants (94%) provided complete sets of ratings. Of those raters who participated, 23 were medical directors and 56 were pharmacy directors. Taken together, 3,783 ratings out of a possible 4,032 (94%) were usable for analysis. Of the 3,783 ratings, 960 were from medical directors (25%) and 2,823 were from pharmacy directors (75%).

    Most participants were decision-makers in MCOs (65.8%), followed by those in PBMs (13.9%), hospital P&T committees (11.4%), integrated health systems (6.3%), state Medicaid drug reviews (1.3%), and others (1.3%). In addition, 83.5% of the raters offered commercial coverage, 70.9% offered Medicare, and 62.0% offered Medicaid. Participants represented organizations with wide geographic coverage, serving regional (32.9%), national (24.1%), state (30.4%), and local (12.7%) populations.

    Health plans of various sizes were also represented: 13% covered fewer than 100,000 beneficiaries; 52% covered 100,000 to 1 million; and 35.1% covered more than 1 million.

    Consistent with results from our expert panel, the participants displayed a wide range of ratings across the scenarios. For example, individual raters disagreed substantially in their evaluations of individual scenarios (median absolute deviation, 0.25–2.19), and formulary access ratings were lower among pharmacy directors than among medical directors (mean, 3.92 vs. 4.36, respectively; P = 0.093).
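The median absolute deviation used here to summarize rater disagreement can be computed per scenario; a minimal sketch (the function name is ours):

```python
import statistics

def median_abs_deviation(ratings):
    """Median absolute deviation of one scenario's ratings around their
    median -- the per-scenario disagreement summary (reported range
    0.25-2.19 in the survey)."""
    med = statistics.median(ratings)
    return statistics.median(abs(r - med) for r in ratings)

# Example: ratings [2, 4, 4, 5, 9] have median 4; absolute deviations
# [2, 0, 0, 1, 5] have median 1.
```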

    After we made adjustments for the drugs and questions, we found that ratings among respondents differed more than expected from random variation, suggesting that respondents themselves differed in their access ratings. Even after adjusting for covariates (such as medical directors vs. pharmacy directors), the ratings differed significantly by drug, level of efficacy, safety, cost, and evidence certainty (safety and efficacy). These findings suggested that each construct (efficacy, safety, cost, and evidence certainty) influenced how access was rated (Table 2, page 471). Overall access (i.e., ratings) was highest for the new breast cancer drug and was significantly greater than ratings for the new osteoporosis and Alzheimer’s drugs. Ratings for the new hypertension drug were significantly lower than ratings for the other drugs.

    In particular, varying levels of cost and evidence certainty were associated with greater changes in access ratings compared with varying the magnitude and type of efficacy and safety concerns. As a result, cost and evidence certainty were more likely than efficacy and safety data to influence level of formulary access. Overall, adjusted results indicate that each of the constructs (efficacy, safety, cost, and evidence certainty) influenced the level of access in a context-specific manner, with better evidence (especially cost and evidence certainty) associated with increased access (see Table 2).

    P&T Committee Pilot Test

    The three participating P&T committees represented one national, one regional, and one state health plan. A total of 40 P&T committee members participated. Committee membership consisted of 28 physicians, 10 pharmacists, one outcomes researcher, and one ethicist. Two committees used the evidence-assessment tool immediately after their initial standard review of a new drug, and one committee used the tool while revisiting a drug that had been reviewed at a previous meeting. (One of the three committees considers only clinical merits and does not consider cost in the reviews.)

    Of the three P&T committees, two indicated they had an unfavorable experience with the tool (Figure 3). The committees reported that the evidence considered as a result of using the tool (mean, 2.36) did not differ from the evidence considered during their standard review, and they did not believe that the tool increased consistency or transparency (mean, 2.45 and 3.23, respectively). Furthermore, all committees reported that their review processes and coverage decisions were already clear, consistent, and transparent.

    Overall, the P&T committees indicated that the tool unduly simplified their decision-making process and failed to account for the level of detail they consider. Committee members were resistant to what they perceived as the standardization of drug reviews, suggesting that factors such as study design, recommendations from professional guidelines or organizations, and perspectives of local physician or key opinion leaders were not included in the tool (see Figure 3).

    However, all P&T committees suggested that the tool might be better suited for organizations with fewer resources to construct a complete product monograph, for organizations using a PBM, or for facilities with a less rigorous or structured process for conducting product reviews. The committees also suggested that this structured tool might be better suited for evaluating evidence associated with individual constructs (efficacy, safety, cost, and evidence certainty) rather than evaluating multiple constructs taken together.

    Discussion

    As part of the PPACA of 2010, health insurance exchanges are expected to offer a variety of health plans from which to choose. Therefore, the relationship between evidence evaluation and formulary access should be transparent. Our goal was to explore whether a structured approach to evaluating evidence might provide additional clarity, consistency, and transparency in making decisions about formulary access. An expert panel helped to create a decision framework. The internal validity of the structure and constructs included within the evidence-assessment tool was then evaluated with a large sample of P&T decision-makers. Finally, we evaluated the utility of the tool by implementing it in a live, real-world P&T environment.

    Our survey results supported the usefulness of the structure and clinical factors in the framework. Greater formulary access was associated with greater efficacy, lower costs, fewer safety concerns, and greater evidence certainty. Based upon these findings, a more structured approach could yield greater clarity, consistency, and transparency in formulary decision-making.


    The survey results supported the concept of a structured tool, but the pilot test of the structured tool in a real-world P&T committee environment did not yield the same results; it highlighted some of the limitations of our structured approach. Committee members perceived little value in the tool’s ability to increase clarity, consistency, or transparency in their decision-making. They claimed that too many unique factors influenced each coverage decision and that decision-making could not be reduced to a single scale via a structured tool. They also perceived that such a reduction would limit a committee’s ability to consider these factors.

    Our structured tool was met with resistance in the P&T committee pilot study; however, the survey results suggested the constructs within the tool do hold potential. Still, we are left with the dilemma of how to attain better clarity, consistency, and transparency in the decision-making process. Although factors to be considered for decisions about formulary access (e.g., in clinical trials or economic models) and the resulting decisions were clear, the process (i.e., how information was weighted when compared with standard care) remained uncertain.

    Our experience underscores the importance of real-world evidence evaluation. We learned that our structured tool did not provide added value for decision-makers in a real-world P&T committee environment. In particular, the real-world evaluation demonstrated that although efficacy, safety, and cost were all considered important, trying to capture all three of these constructs in a structured manner limited the consideration of other factors related to the decision-making process. Although an overly structured approach that would standardize the decision-making process is neither favored nor considered practical by participating P&T committees, we still believe that a structured tool would provide value to the decision-making process.

    One idea suggested by the participants was to use the tool to evaluate individual constructs in greater detail instead of weighing several constructs simultaneously. For example, when evaluating the efficacy of a new drug, decision-makers could consider a combination of published literature, professional guidelines, local physician experience, and evidence certainty to determine access. A modified version of our tool might promote consistency in similar scenarios when multiple sources of evidence related to a single construct are examined. More consistency would also help to increase transparency to those external to the process by clarifying how similar evidence is evaluated and why. On the basis of feedback from the participants, therefore, we propose revising the tool to focus on individual constructs (e.g., efficacy) that are related to the decision-making process.

    Improving clarity, consistency, and transparency in formulary decision-making remains an important objective for individuals external to the process. Patients who might choose a health plan based on its existing formulary coverage are left with little information about how future formulary decisions will be made or how treatments important to their health will be adjudicated. Given the rising cost of employee health care, employers are seeking more information about how formularies are managed as they decide about prescription coverage. Transparency in coverage decisions is also important to the pharmaceutical industry, so that companies understand how evidence relating to their drugs will be evaluated and can create the most relevant information possible.

    Limitations

    Because of the scope of the project, a small expert panel developed the framework to create the evidence-assessment tool. We focused solely on efficacy, safety, cost, and evidence certainty in the decision-making process and excluded other factors, such as local perspectives and professional guidelines. Further, including the perspective of consumers (i.e., patients), as well as those who pay for health care coverage (i.e., employers), might have revealed other important factors to consider.

    For the survey, selection bias might have been present from our use of a convenience sample of decision-makers. Although rater burden was limited as a result of the study design, we did not achieve the full survey sample (79/84). In relation to the real-world implementation, the tool was pilot-tested by only three P&T committees, and this limited the circumstances and types of drugs being reviewed. Implementing the tool in a larger sample of P&T committees might have provided insight into more circumstances.

    Having all three committees first undergo their typical review process might have also biased their perception of the evidence-assessment tool, in that viewing the same evidence—only in a different format—could have influenced their impression of its value.

    Further, all of the P&T committees represented organizations with rigorous drug review processes and might not have been representative of the P&T population. Order bias might also have been present because the committees reviewed the same drug after completing their own review.

    Conclusion

    Although there are theoretical merits of a structured approach to increase clarity, consistency, and transparency in formulary decision-making, those merits were not realized in real-world implementation. Given the dynamic and complex nature of the decision-making process undertaken by P&T committees, finding a structured approach that is acceptable in a real-world environment remains elusive. A method to improve clarity, consistency, and transparency is still lacking.

    Figures and Tables

    Figure 1. Example of an evidence-assessment tool.

    Figure 3. Evidence-assessment tool evaluation following pilot-testing. P&T committee members were asked about their experience with the tool and how strongly they agreed with the statements listed above on a scale of 1 to 9 (1 = Strongly disagree, 9 = Strongly agree). NPC = National Pharmaceutical Council.

    Figure 2. Ten-item evaluation to be completed by P&T committee members. (1 = Strongly disagree; 5 = Neutral; 9 = Strongly agree.)

    Table 1. Hypothetical Scenarios for the Expert Panel

    Scenario A: Metastatic breast cancer drug; 4-month overall survival benefit compared with standard care; course of treatment with new drug $20,000 more than standard care without new drug
    Scenario A1: Metastatic breast cancer drug; 4-month overall survival benefit compared with standard care; course of treatment with new drug $5,000 more than standard care without new drug
    Scenario A2: Metastatic breast cancer drug; 1-month overall survival benefit compared with standard care; course of treatment with new drug $20,000 more than standard care without new drug
    Scenario A3: Metastatic breast cancer drug; no overall survival benefit but 4-month progression-free survival benefit compared with standard care; course of treatment with new drug $20,000 more than standard care without new drug
    Scenario B: Cox-2 inhibitor; reduction in gastrointestinal complications of 1 per 125 patients treated; no cardiovascular signal observed in new drug
    Scenario C: Alzheimer’s disease drug; 1.5-point improvement in Mini-Mental State Examination (MMSE) at 12 weeks compared with standard care
    Scenario D: Annual intravenous bisphosphonate; 8% absolute reduction in vertebral fracture over 2 to 3 years; no safety concerns noted regarding osteonecrosis of the jaw
    Scenario D1: Annual intravenous bisphosphonate; absolute reduction in vertebral and hip fracture of 8% and 2%, respectively, over 2 to 3 years; no safety concerns noted regarding osteonecrosis of the jaw
    Scenario E: New calcium-channel blocker; 2-mm reduction in systolic blood pressure compared with other agents in class; 30-day supply is $200 for new drug compared with $150 for currently available treatments
    Scenario E2: New calcium-channel blocker; 10-mm reduction in systolic blood pressure compared with other agents in class; 30-day supply is $200 for new drug compared with $150 for currently available treatments
    Scenario F: Third-to-market drug-eluting stent; 3% absolute reduction in death, stroke, myocardial infarction, or reoperation over 2 years compared with non-drug–eluting stent; $5,000 higher cost
    Scenario F2: Third-to-market drug-eluting stent; 1% absolute reduction in death, stroke, myocardial infarction, or reoperation over 2 years compared with non-drug–eluting stent; $5,000 higher cost
    Scenario G: New endometrial ablation technique; 3% absolute reduction in bleeding after 12 months; new treatment $2,500 compared with $2,200 for existing techniques
    Scenario H: Computed tomography (CT) colonoscopy; sensitivity and specificity equivalent to traditional colonoscopy; half the cost of standard colonoscopy

    Table 2. Standardized Survey Results

    Factor: Rater type
      Medical: 4.36 (95% CI, 3.92–4.80)
      Pharmacy: 3.92 (95% CI, 3.67–4.18)
      Difference (Medical vs. Pharmacy): 0.43; P = 0.093

    Factor: Drug clinical focus
      1. Hypertension: 3.70 (95% CI, 3.46–3.93)
      2. Osteoporosis: 4.04 (95% CI, 3.81–4.27)
      3. Alzheimer’s: 4.00 (95% CI, 3.77–4.23)
      4. Breast cancer: 4.40 (95% CI, 4.17–4.63)
      Differences (contrasts): (2 vs. 1) 0.35; (3 vs. 1) 0.31; (4 vs. 1) 0.71; (3 vs. 2) −0.04; (4 vs. 2) −0.36; (4 vs. 3) 0.40; P < 0.001

    Factor: Efficacy concerns (level: Smaller)
      Difference (contrast): 0.43; P < 0.001

    Factor: Safety concerns (level: None)
      Difference (contrast): 0.22; P < 0.001

    Factor: Cost (level: Moderate)
      Difference (contrast): 0.54; P < 0.001

    Factor: Evidence certainty, efficacy (levels: 2. Medium; 3. High)
      Difference (High vs. Medium): 0.72; P < 0.001

    Factor: Evidence certainty, safety (levels: 1. Low; 2. Medium; 3. High)
      Differences (contrasts): (2 vs. 1) 0.83; (3 vs. 2) 0.76; (3 vs. 1) 1.60

    *Scores were rated on a scale of 1 through 9, where a rating of 1 represents no access (i.e., not on the formulary, 100% co-pay) and 9 represents open access (i.e., tier 1 with a generic copay).

    CI = confidence interval.

    Author bio: 
    Dr. Dean is Principal Investigator of Research Services at Cerner Research in Kansas City, Missouri. Dr. Ko is a Research Scientist at Cerner Research in Culver City, California. Dr. Graff is Director of Comparative Effectiveness Research at the National Pharmaceutical Council in Washington, D.C. Dr. Localio is an Associate Professor at the University of Pennsylvania, Perelman School of Medicine, Department of Biostatistics and Epidemiology, in Philadelphia, Pennsylvania. Mr. Wade is currently Principal at RWE Solutions and Health Economics and Outcomes Research (HEOR) at IMS Health. At the time of this writing, he was employed at Cerner Research. Dr. Dubois is Chief Science Officer at the National Pharmaceutical Council in Washington, D.C.


    1. Department of Health and Human Services. Patient Protection and Affordable Care Act; establishment of exchanges and qualified health plans: proposed rule. Fed Reg 2011;76:41866–41927. Accessed June 26, 2013.
    2. Eddy D. Reflections on science, judgment, and value in evidence-based decision making: a conversation with David Eddy by Sean R. Tunis. Health Aff (Millwood) 2007;26(4):w500–w515.
    3. Eddy DM. Clinical decision making: from theory to practice. Anatomy of a decision. JAMA 1990;263(3):441–443.
    4. Bozzette SA, D’Amato R, Morton SC, et al. Pharmaceutical Technology Assessment for Managed Care: Current Practice and Suggestions for Improvement. Santa Monica, Calif: RAND Corp; 2001.
    5. Olsen L, Aisner D, McGinnis J. IOM Roundtable on Evidence-Based Medicine. The Learning Healthcare System: Workshop Summary. Washington, D.C.: National Academies Press; 2007.
    6. Leung MY, Halpern MT, West ND. Pharmaceutical technology assessment: perspectives from payers. J Manag Care Pharm 2012;18(3):256–264.
    7. Teutsch SM, Berger ML, Weinstein MC. Comparative effectiveness: asking the right questions, choosing the right method. Health Aff (Millwood) 2005;24(1):128–132.
    8. Dranove D, Hughes EF, Shanley M. Determinants of HMO formulary adoption decisions. Health Serv Res 2003;38(1 Pt 1):169–190.
    9. Medicare Part D: Plan Sponsors’ Processing and CMS Monitoring of Drug Coverage Requests Could Be Improved. Report No. GAO-08-47. Washington, D.C.: U.S. Government Accountability Office; 2008.
    10. Drummond MF, Schwartz JS, Jonsson B, et al. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care 2008;24(3):244–258.
    11. Shekelle PG, Kahan JP, Bernstein SJ, et al. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med 1998;338(26):1888–1895.
    12. Fleiss JL. The Design and Analysis of Clinical Experiments. New York: John Wiley & Sons; 1986.
    13. Cochran WG, Cox GM. Experimental Designs, 2nd ed. New York: John Wiley & Sons; 1992.
    14. Milliken GA, Johnson DE. Analysis of Messy Data. Boca Raton, Fla: CRC Press; 2009.