
Evidence-based criteria in the nutritional context

Jeffrey Blumberg, Robert P Heaney, Michael Huncharek, Theresa Scholl, Meir Stampfer, Reinhold Vieth, Connie M Weaver, Steven H Zeisel
DOI: http://dx.doi.org/10.1111/j.1753-4887.2010.00307.x. Pages 478–484. First published online: 1 August 2010.

Abstract

During the last decade, approaches to evidence-based medicine, with its heavy reliance on the randomized clinical trial (RCT), have been adapted to nutrition science and policy. However, there are distinct differences between the evidence that can be obtained for the testing of drugs using RCTs and that needed for the development of nutrient requirements or dietary guidelines. Although RCTs present one approach toward understanding the efficacy of nutrient interventions, the innate complexities of nutrient actions and interactions cannot always be adequately addressed through any single research design. Because of the limitations inherent in RCTs, particularly of nutrients, it is suggested that nutrient policy decisions will have to be made using the totality of the available evidence. This may mean action at a level of certainty that is different from what would be needed in the evaluation of drug efficacy. Similarly, it is judged that the level of confidence needed in defining nutrient requirements or dietary recommendations to prevent disease can be different from that needed to make recommendations to treat disease. In brief, advancing evidence-based nutrition will depend upon research approaches that include RCTs but go beyond them. Also necessary to this advance are the assessment, in future human studies, of covariates such as biomarkers of exposure and response, and the archiving of samples for future evaluation by emerging technologies.

  • benefit
  • evidence-based
  • nutritional policy
  • randomized clinical trials
  • risk

INTRODUCTION

In a Medline search of article titles, the term “evidence-based” occurred fewer than 100 times in articles published in 1995. Since then, citations have risen steadily to nearly 7,900 in 2009 alone. This level of occurrence provides ample documentation of a substantial shift in both awareness and vocabulary in the community of scientists and policymakers involved with the clinical sciences. Evidence-based medicine (EBM) was established for the evaluation of medical interventions. It provides a hierarchy of research designs, with the results of randomized, placebo-controlled trials (RCTs) considered the highest level of evidence.1,2 EBM and its underlying concepts and methods were soon directly extended to the field of clinical nutritional science as evidence-based nutrition (EBN). Beginning with the 1997 Dietary Reference Intakes,3 the Institute of Medicine explicitly sought to provide the evidence base for its recommendations. A similar approach was used in developing the DHHS Dietary Guidelines for Americans, beginning with the 2005 edition.4 Likewise, the U.S. Food and Drug Administration has put forth a set of evidence criteria for nutrient-related health claims5,6 and professional associations such as the American Dietetic Association7 have promulgated EBN guidelines for their own policies and publications. A popular approach has been the use of evidence-based systematic reviews and meta-analyses; their application to nutrition questions has been recently reviewed.8–11 Adherence to EBN guidelines is increasingly required by peer-reviewed nutrition journals.

While multiple research approaches in nutrition science afford evidence of nutrient effects, there often appears to be an almost exclusive reliance on the RCT as the only type of evidence worthy of such consideration (e.g., references 12–16). However, certain features of EBM seem ill-suited to the nutrition context.17–19 Some of the differences between the evaluation of drugs and nutrients cited previously18 are as follows: (i) medical interventions are designed to cure a disease not produced by their absence, while nutrients prevent dysfunction that would result from their inadequate intake; (ii) it is usually not plausible to summon clinical equipoise for basic nutrient effects, thus creating ethical impediments to many trials; (iii) drug effects are generally intended to be large and limited in scope of action, while nutrient effects are typically polyvalent in scope and, in effect size, often lie within the “noise” range of biological variability; (iv) drug effects tend to be monotonic, with response varying in proportion to dose, while nutrient effects are often of a sigmoid character, with useful response occurring only across a portion of the intake range; (v) drug effects can be tested against a nonexposed (placebo) contrast group, whereas it is impossible and/or unethical to attempt a zero-intake group for nutrients; and (vi) therapeutic drugs are intended to be efficacious within a relatively short term, while the impact of nutrients on the reduction of risk of chronic disease may require decades to demonstrate – a difference with significant implications for the feasibility of conducting pertinent RCTs.

Nevertheless, it is indisputable that the RCT, in one of its variant forms, is the clinical study design that best permits strong causal inference concerning the relationship between an administered agent (whether drug or nutrient) and any specific outcome. Both drug indications and health claims for nutrients that are backed by one or more well-conducted RCTs are appropriately considered to have a more persuasive evidence base than corresponding claims based primarily upon observational data.20 However, it is also generally understood, if not often acknowledged, that it can be difficult to implement RCTs correctly. For certain types of questions, such as those concerning epigenetic effects (which seem increasingly likely for several nutrients), RCTs would often be precluded on both ethical and feasibility grounds. Or, when trying to assess the potential benefits of conditionally essential nutrients (e.g., α-lipoic acid and ubiquinone, which are synthesized in vivo) and putatively nonessential nutrients (e.g., carotenoids and flavonoids, which are nearly ubiquitous dietary constituents), the problem of providing this evidence through RCTs becomes even more challenging. Additionally, a poorly executed RCT may have no more (or even less) inferential power than a cohort study.21,22

For all these reasons, it seemed useful to suggest some ways to advance the current approach to EBN, ways which better reflect the unique features of nutrients and dietary patterns, and which also recognize the need to deal with uncertainty in situations in which evidence from RCTs might never be obtained. The perspective that follows constitutes a summary of the deliberations on these issues that took place at an invitational workshop convened in Omaha, Nebraska, September 3–4, 2008, by Tufts and Creighton Universities. In approaching this issue here, a few key questions are asked and an attempt is made to define the evidence needed to support nutritional policy decisions. Some of the details, as well as brief allusions to the background science, are included in the Supporting Information available online.

PROOF OF WHAT BENEFIT?

By definition, an essential nutrient is a substance that an organism needs for optimal function and which must be obtained from the environment because it cannot be adequately synthesized in vivo. That nutrients produce benefits is a truism enshrined in the Dietary Reference Intakes of the Institute of Medicine,23 and in the intake recommendations of most nations of the world. Contrariwise, inadequate intakes produce dysfunction or disease. Hence, the association of inadequate intake with disease is not so much a matter of proof as of definition. A substance would not be an essential nutrient if low intake were not harmful; i.e., a null hypothesis analogous to that for a drug (“nutrient X confers no health benefit”) is not tenable for most nutrients. Instead the questions clinical nutrition scientists must ask are: (i) What is the full spectrum of dysfunctions or diseases produced by low intake of a nutrient? and (ii) How high an intake is required to ensure optimal physiological function or reduced risk for disease across all body systems and endpoints?

Among the many advances of modern nutritional science are (i) the recognition of long-latency deficiency diseases and (ii) the understanding that nutrients often act through several distinct mechanisms within the organism.24 Thus, inadequate intake of a single nutrient can result in multiple dysfunctions, some of which may be quite slow to manifest. Further, there often is not a sharp transition between health and disease, but a multidimensional continuum, with different organ systems in the same individual exhibiting varying sensitivities, and with individuals varying among themselves in sensitivity. The Recommended Dietary Allowances (RDAs) are designed to account for interindividual differences in requirements3 but, as implemented, they largely focus on single organ system endpoints, and do not usually deal with the multiplicity of a nutrient's effects throughout the body. Typically, policy-making bodies have tended to adopt the default position of defining the intake requirement mainly for prevention of the disease for which there is the clearest evidence or at least a clear consensus, i.e., the “index” disease.
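As a point of reference for how this interindividual allowance is usually operationalized (this is the standard Dietary Reference Intakes convention, not a formula stated in the text above), the RDA is set roughly two standard deviations above the Estimated Average Requirement:

```latex
\mathrm{RDA} \;=\; \mathrm{EAR} + 2\,\mathrm{SD}_{\mathrm{requirement}}
\;\approx\; 1.2 \times \mathrm{EAR}
\quad \text{(assuming a coefficient of variation of requirements of about 10\%)}
```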

This approach raises questions regarding the adequacy of such recommendations, since prevention of the nonindex diseases may require more than the intake needed to prevent the index disease. For example, the intake of dietary folate necessary to reduce the risk of neural tube birth defects is greater than that necessary to prevent macrocytic anemia,25 and the amount of vitamin D required to reduce the risk of falls and hip fracture in the elderly is greater than that required to prevent rickets or osteomalacia.3

For several nutrients, RCTs have been conducted with nonindex diseases as the outcome measure, but they have most often failed to show a significant effect on the occurrence of the selected disease endpoint (e.g., references 26–31). Such RCTs are often flawed, not so much in their conduct as in their design; for example, they do not provide a sufficiently low intake of the nutrient for the control group26,27 or they do not ensure adequate intake of other essential nutrients needed for the test nutrient to manifest its own proper effect.32–34 It is worth noting that, in this latter respect, such nutrient RCTs emulate drug RCTs, which usually strive to eliminate all confounding variables and effect modifiers, rather than to optimize them.

ARE RANDOMIZED CONTROLLED TRIALS AVAILABLE TO TEST NUTRIENT EFFECTS?

To conduct an RCT that adequately tests the efficacy of a nutrient for a specific chronic disease, it will usually be important to ensure an adequate contrast in intake between the intervention and the control groups. The control intake is an approximate analog of the placebo control in drug RCTs. However, since sufficiently low intakes are associated with significant disease in some body systems, assigning a control group to such an intake can lead to serious ethical problems, particularly if the disease outcome is serious and/or irreversible, e.g., preeclampsia, hip fracture, neural tube defect, or myocardial infarction. In contrast to observational studies, which typically assess nutrient exposures ranging from low to high, most RCTs of nutrient effects have employed a control group receiving an intake typical of the population, oftentimes near the RDA, and certainly above the thresholds for many deficiency states, while the intervention group receives even more. This approach transforms the hypothesis ostensibly being tested into one of “more is better”. Such trials are ethical and feasible, but they often do not test the hypothesis that low intake of nutrient A causes (or increases the risk of) disease X. This is not to question the value of asking such secondary questions, but simply to stress that they are different questions.

EBN thus departs from the situation of EBM, where, for most interventions, the use of a no-treatment (placebo) control group is usually quite appropriate. In EBM, the hypothesis is that adding an intervention ameliorates a disease, whereas in EBN it is that reducing the intake of a nutrient causes (or increases the risk of) disease. This distinction is critical. No one proposes in EBM that a disease is caused by the absence of its remedy; for nutrients, the hypothesis is precisely that dysfunction is caused by deficiency. A hypothesis about disease causation can rarely, if ever, be tested directly in humans using the RCT design, because such a trial would require that the disease/dysfunction actually occur in at least some of the study participants, and the investigators would have to ensure that it does. Instead, EBN must operate with respect to two related but different questions: (i) in addition to disease X, does inadequate intake of nutrient A also contribute to other diseases? and (ii) at what level of intake of nutrient A is the risk of all related diseases minimized, or are all related functions optimized?

In brief, it is unlikely that RCT evidence could feasibly or appropriately be produced with respect to the role of a nutrient for many nonindex-disease endpoints. Therefore, the majority of the evidence with respect to nutrients and nonindex diseases will continue, of necessity, to be derived from observational studies. That does not mean that action must be suspended. Over 30 years ago, Hill35 described guidelines to assess causation under such circumstances (see Supporting Information).

HOW MUCH CERTAINTY IS NECESSARY?

RCTs, if well designed and well executed, provide a high level of certainty that a specific intervention can reliably be counted on to produce a specific effect in a selected population. As a society, we have determined that a high level of certainty is required for the evaluation of efficacy for therapeutic drugs. Such a standard is justified by the usually high cost of such medical treatment, by the risk that therapeutic decisions based on inadequate evidence would shift treatment away from possibly more efficacious therapies, and by the need to balance benefit against the risks that accompany pharmacotherapy. These same concerns are substantially less pressing for nutrients. Nutrients are orders of magnitude less expensive than drugs and often exhibit a broader margin between efficacy and toxicity. Is the same high level of certainty required for nutrient intake recommendations to prevent disease as is needed for drugs used to treat disease?

There is no simple answer to this question. Nevertheless, it seems clear that requiring RCT-level evidence to answer questions for which the RCT may not be an available study design will surely impede the application of nutrition research to public health issues. Moreover, to fail to act in the absence of conclusive RCT evidence increases the risk of forgoing benefits that might have been achieved with little risk and at low cost. This is not to suggest that the standards of what constitutes proof ought to be relaxed for nutrients, but to propose instead that nutrient-related decisions could be made at a level of certainty somewhat below that required for drugs. Under such circumstances, confidence in the correctness of a decision would necessarily be lower.

Figures 1 and 2 present these considerations graphically, where confidence in the correctness of a given recommendation (vertical axis) is the dependent variable, expressed as a function of the following: (i) the level of certainty (or strength of the evidence) relating a given intake to any specific effect; and (ii) the benefit-to-risk ratio that follows from acting. “Acting” here means specifying an intake level as a recommendation for the general public (or approving a drug for a given indication). In EBN, the strength of the evidence, ranging from high to low, might be quantified in an ordinal fashion, such as “established”, “probable”, “likely”, and “unclear”. Here, “unclear” means simply no ability to decide one way or the other, i.e., the null position.

Figure 1

Three-dimensional plot depicting the relation between confidence that a decision to act or to implement a nutrient recommendation is the correct thing to do (the vertical axis), and the degree of certainty about efficacy (strength of the evidence) of the nutrient (left horizontal-plane axis), and the ratio of benefit to risk of the change in intake (right horizontal-plane axis). The surface represented by the grid illustrates a confidence outcome, incorporating the full range of inputs of efficacy and benefit : risk. (Copyright Robert P. Heaney, 2010. Used with permission.)

Figure 2

The decision plot for the relationship of Figure 1, as implemented for drugs (A) and for nutrients (B). Any value above the cut-plane would permit action. Notice that a high benefit : risk ratio would permit action at a lower level of evidential certainty, and vice versa. (Copyright Robert P. Heaney, 2010. Used with permission.)

As Figure 1 shows, confidence in the correctness of a decision to act rises as a function of both certainty and benefit : risk, reaching its maximum only when the levels of both certainty and benefit : risk are high. This would be typical of the drug decision context (Figure 2A). By contrast, Figure 2B depicts what would seem to be appropriate for nutrients, for which a lower level of certainty would be acceptable; i.e., the confidence needed to act would be less than that needed for drugs.

As inspection of Figure 2B shows, the intersection of the cut-point plane with the three-dimensional surface is a curved line. This line itself is a reflection of an inverse relation between certainty and benefit : risk for any given degree of confidence in the correctness of an action. Thus, for nutrients with high benefit : risk, less certainty might be adequate to permit action, whereas for nutrients with less potential benefit (or more potential risk), a higher certainty of efficacy would be needed.

Importantly, these figures are simply illustrative; their use here is not intended to propose a rigid, mathematical approach that could be applied robotically to such questions. The purpose is simply to illustrate a potential willingness to act for low-risk interventions with probable benefit and at a level of certainty below what would be needed for approval of medical interventions.
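In that same illustrative spirit, the shape of the decision surface in Figures 1 and 2 can be sketched in a few lines of code. The multiplicative functional form and the numerical thresholds below are hypothetical choices made only for demonstration; the article does not propose any particular formula.

```python
# Purely illustrative sketch of the decision surface in Figures 1 and 2.
# The functional form and thresholds are hypothetical, not taken from the article.

def confidence(certainty: float, benefit_risk: float) -> float:
    """Confidence in the correctness of acting, rising with both evidential
    certainty (0-1) and a normalized benefit:risk score (0-1)."""
    return certainty * benefit_risk

# Hypothetical cut-planes: action is permitted when confidence clears the threshold.
DRUG_THRESHOLD = 0.6      # high confidence demanded for approving a therapeutic drug
NUTRIENT_THRESHOLD = 0.3  # lower confidence may suffice for a low-cost, low-risk nutrient

def may_act(certainty: float, benefit_risk: float, threshold: float) -> bool:
    return confidence(certainty, benefit_risk) >= threshold

# A nutrient with a high benefit:risk score can clear the bar at modest certainty...
print(may_act(certainty=0.4, benefit_risk=0.9, threshold=NUTRIENT_THRESHOLD))  # True
# ...whereas the same evidence would not justify action under the drug threshold.
print(may_act(certainty=0.4, benefit_risk=0.9, threshold=DRUG_THRESHOLD))      # False
```

The inverse trade-off along the cut-plane (less certainty tolerated when benefit : risk is high, and vice versa) falls directly out of any such monotone surface.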

WHAT FEATURES AFFECT CERTAINTY?

It is interesting to note that while regulatory agencies from around the world rely on RCTs, there is a high degree of discordance regarding how different jurisdictions evaluate the strength of the evidence produced by the same studies for the substantiation of health claims for nutrients and foods. Thus, in advancing approaches to EBN, it will be useful to set forth some of the factors that we judge will affect the level of certainty (evidential strength) that various study designs offer (Table 1), as well as the factors that affect the level of confidence in a decision that may flow from any given degree of certainty (i.e., high benefit : risk ratio; important consequences of possible Type II error; low deployment cost; low opportunity cost; multiplicity of lines of supporting evidence).

Table 1

Factors affecting the level of certainty of evidence provided by various study designs.

Additionally, certainty can be enhanced by ancillary measurements. Discussion of these features is further developed in the Supporting Information.

As listed in Table 1, an RCT gains or loses certainty depending upon whether or not the following apply: (i) there is an adequate contrast in intake between the intervention and control groups; (ii) it has been replicated; (iii) it suffered only minimal losses of sampling units; (iv) it measured and controlled adequately for conutrient intakes; and (v) its estimate of effect size is large. While not all of these factors are absolutely necessary, each contributes a degree of certainty in its own right. These features are developed at greater length in the Supporting Information.
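Purely as an illustration of how such factors might be tallied (the article does not propose a numerical score, and the equal weighting below is an arbitrary assumption), the Table 1 criteria could be encoded as a simple checklist:

```python
# Illustrative checklist for the Table 1 factors affecting the certainty
# contributed by a nutrient RCT. Equal weights are an arbitrary assumption.

from dataclasses import dataclass

@dataclass
class NutrientRCT:
    adequate_intake_contrast: bool   # control vs. intervention intakes differ enough
    replicated: bool                 # findings reproduced in an independent trial
    minimal_attrition: bool          # only minimal losses of sampling units
    conutrients_controlled: bool     # co-nutrient intakes measured and controlled for
    large_effect_size: bool          # estimated effect size is large

def certainty_score(trial: NutrientRCT) -> float:
    """Fraction of the five factors satisfied; higher values indicate that the
    trial contributes more evidential certainty."""
    factors = (
        trial.adequate_intake_contrast,
        trial.replicated,
        trial.minimal_attrition,
        trial.conutrients_controlled,
        trial.large_effect_size,
    )
    return sum(factors) / len(factors)

# Example: a single, well-contrasted trial with good retention and co-nutrient
# control, but no replication and only a modest effect size.
print(certainty_score(NutrientRCT(True, False, True, True, False)))  # 0.6
```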

Because it may not be ethical or feasible to obtain RCT-based evidence for many nutrient-related questions, it is important to attend to the factors needed to support action when evidential certainty is less than perfect. The factors affecting confidence, as listed above, represent a start at this effort. Perhaps the most compelling concern regarding this issue is the fact that benefits may be forgone when action is deferred, i.e., the consequence of the type II error when the conclusion from available evidence is “not proven”. Offsetting that risk are the costs associated with action when the true effect is actually negligible or null. Therefore, low deployment cost and low opportunity cost should be important considerations. Any change in nutritional policy creates work for both industry and regulators, efforts that have a cost and that may displace other action that might have been more productive. There is no single or simple correct answer to these questions about cost, but it is worthwhile to stress that they must be factored into the decision matrix on a case-by-case basis.

CONCLUSION

Inadequate intakes of nutrients result in a variety of dysfunctions and diseases. The full spectrum of those untoward effects is unknown. Because deliberately reducing intake to deficient levels in humans is ethically impermissible, the RCT will often not be available as a means of elucidating many potential nutrient-disease relationships. The general principles of EBN can provide a sufficient foundation for establishing nutrient requirements and dietary guidelines in the absence of RCTs for every nutrient and food group. Sackett et al.,36 among the intellectual fathers of EBM, stressed nearly 15 years ago that EBM was “not restricted to randomized trials and meta-analyses”, a counsel that has been shunted aside in recent years. A general approach to acting in the absence of ultimate certainty should include a broader consideration of other research strategies along with revised estimates of the certainty level of the evidence and the confidence needed to act in support of public health. In such judgments, it will be important to assess the balance between the potential harm of making any given recommendation and the potential harm of not making it. Additionally, a key challenge will be to find appropriate educational strategies to convey varying levels of strength of evidence for a given recommendation.

Acknowledgments

This paper is the product of an invited workshop convened at Creighton University, Omaha, Nebraska, USA, September 3–4, 2008. It was supported by Creighton University research funds and by unrestricted grants from ConAgra Foods Inc., Omaha, Nebraska; Dairy Management Incorporated, Rosemont, Illinois; and the Council for Responsible Nutrition, Washington, DC.

Each author had a role in generating the concepts and preparing the manuscript.

Declaration of interest

CW has received research grants from Dairy Management Incorporated, Friesland Foods, General Mills, and Tate & Lyle. None of the other authors has relevant interests to declare.

Footnotes

  • The authors have worked in nutritional science, policy, and practice throughout most of their professional careers, serving, for example, on the US Dietary Guidelines Committee and various advisory panels of the Institute of Medicine concerned with dietary reference intakes. Several have chaired National Institutes of Health study sections and have been recipients of major nutrition awards of the American Society for Nutrition and the United States Department of Agriculture.

REFERENCES
