The world often reminds us that association is not causation: ice cream consumption and murder rates are correlated, but the former does not cause the latter. Rather, both go up during the summer months. When we try to draw causal inferences from observational data about the effect of some exposure, treatment, or other potential cause on an outcome, we must always worry about the possibility of unmeasured confounding: that some third factor might explain away the association between our exposure and outcome.

As a professor in the Harvard T.H. Chan School of Public Health and at Harvard’s Institute for Quantitative Social Science, I spend most of my time trying to understand and assess causality, both methodologically and empirically. A major component of my work involves developing sensitivity analysis calculations for unmeasured confounding in observational studies.

Sensitivity analysis is a means of assessing, as best as possible, how unknown or unmeasured data might affect known results, thereby strengthening (or appropriately qualifying) the conclusions of a study. Unknown data arise frequently in both randomized and observational studies; but in randomized trials, both measurable covariates (e.g., age, weight) and nonmeasurable covariates (e.g., religion, whether or not someone plays soccer) are balanced across groups in expectation, increasing the likelihood that any difference in outcomes is due to the intervention and not to differences in the two groups’ compositions. This is not the case for observational studies, which can be matched or adjusted only on measured covariates. Therefore, in observational studies, it can be difficult to accurately assess whether the results are driven by the treatment or by some underlying, confounding factor, which is precisely what my work addresses.

In an observational study, unmeasured confounding is exactly this kind of unknown data, and it may distort results in a significant way. Sensitivity analysis for unmeasured confounding, then, attempts to assess how strong an unmeasured factor would have to be to explain away the observed exposure-outcome association.

In a recent paper, my colleague Peng Ding and I examined the association between maternal breastfeeding and infant mortality due to respiratory infections. Prior investigators found that after controlling for measured covariates such as age, birth weight, social status, maternal education, and family income, formula-fed infants were 3.9 times more likely to die of respiratory infections than breastfed infants. But the investigators didn’t control for maternal smoking. Might maternal smoking then be associated with less breastfeeding and greater infant mortality? In this case, 3.9 would be the observed risk ratio (RR), and maternal smoking would be the unmeasured confounder potentially influencing the outcome of the study.

In the past, approaches to unmeasured confounding in observational studies have sometimes rested only on intuitive appeals and have been criticized for making arbitrary assumptions. To get around these limitations and put such assessments on a more rigorous footing, Peng Ding and I recently introduced a metric called the E-value.

The E-value is calculated by the formula E = RR + sqrt[ RR x (RR - 1) ], where RR is the risk ratio between the treatment group and the comparison group and sqrt denotes the square root. In the breastfeeding example, formula-fed infants would be the treatment group, while breastfed infants would be the control group. The E-value is the minimum strength of association that an unmeasured confounder would need to have with both the treatment and the outcome to explain away the treatment-outcome association. The higher the E-value, the harder it is to attribute the results of a study to an unmeasured covariate. We derived this formula by considering all possible associations an unmeasured covariate could have with the treatment and the outcome.
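
To make the formula concrete, here is a minimal Python sketch of the calculation; the function name `e_value` is my own choice, for illustration:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio rr >= 1:
    E = RR + sqrt(RR * (RR - 1))."""
    if rr < 1:
        raise ValueError("for a protective RR < 1, invert it first (see below)")
    return rr + math.sqrt(rr * (rr - 1))

# The breastfeeding example: an observed risk ratio of 3.9
print(e_value(3.9))  # ~7.26, the value of about 7.2 discussed below
```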

In the breastfeeding example, maternal smoking would be the nonmeasurable covariate. Plugging the observed risk ratio (3.9) into the equation:

E = 3.9 + sqrt[ 3.9 x (3.9 - 1) ]

I calculate an E-value of 7.2, which is quite high. (The lowest possible E-value is 1.) If the E-value were 1 or close to 1, maternal smoking could easily explain away the observed association between breastfeeding and death from respiratory infection. The fact that the E-value is so high, however, indicates that maternal smoking is unlikely to be associated strongly enough with both feeding practice and infant mortality to explain away the observed association.

The E-value helps us to interpret how strong the confounding would have to be to explain away our estimate, and thus how much evidence we really have for causality. It is applicable to any unmeasured covariate, not just maternal smoking. In this case, an E-value of 7.2 indicates that an unmeasured confounder would have to increase the likelihood of breastfeeding and decrease the likelihood of death from respiratory infection by 7.2-fold each in order for breastfeeding to have no true causal effect; weaker confounding could not fully explain away the association.
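
For readers who want to check that 7.2-fold associations are just strong enough, the derivation behind the E-value uses a bounding factor, B = RR1 x RR2 / (RR1 + RR2 - 1), the maximum factor by which a confounder with those two association strengths can distort an observed risk ratio. A quick numerical check, as a sketch (the function name is mine):

```python
import math

def bounding_factor(rr_1, rr_2):
    """Maximum factor by which a confounder with treatment-confounder
    risk ratio rr_1 and confounder-outcome risk ratio rr_2 can
    distort an observed risk ratio."""
    return rr_1 * rr_2 / (rr_1 + rr_2 - 1)

e = 3.9 + math.sqrt(3.9 * (3.9 - 1))  # the E-value, ~7.26
print(bounding_factor(e, e))          # ~3.9: just enough to account for the whole estimate
```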

One caveat: if the initial risk ratio is less than 1, one must first take its inverse before applying the E-value formula. Our paper also provides formulas for when differences rather than risk ratios are used, and I recommend computing an E-value for the limit of the confidence interval as well.
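
Those refinements are easy to add to the sketch above; the helper names and the confidence interval here are hypothetical, for illustration only:

```python
import math

def e_value_any(rr):
    """E-value for any observed risk ratio; a protective RR (< 1) is inverted first."""
    rr = 1.0 / rr if rr < 1 else rr
    return rr + math.sqrt(rr * (rr - 1))

def e_value_ci(estimate, lo, hi):
    """E-value for the confidence-interval limit closest to the null (RR = 1).
    If the interval crosses 1, no confounding is needed to reach the null,
    so the E-value is 1."""
    if lo <= 1 <= hi:
        return 1.0
    limit = lo if estimate > 1 else hi
    return e_value_any(limit)

# A hypothetical 95% CI of (1.8, 8.7) around the 3.9 estimate
print(e_value_ci(3.9, 1.8, 8.7))  # 3.0
```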

The E-value is easy to use and relatively simple to interpret, and I think it could go a long way toward making it easier to assess how much evidence an observational study really provides for causality (i.e., that the treatment is the cause of the outcome, rather than the association being attributable to bias from unmeasured covariates).

The E-value will be useful to data scientists who want to reason about causality. Is your association between the treatment and the outcome causal? To help answer that question, calculate the E-value. I recommend that any observational study trying to assess causality either report the E-value or use some other sensitivity analysis technique. I hope its use becomes routine. Science from observational data would be vastly improved if it did, because claims would be qualified with a precise metric rather than informal judgment.

VanderWeele, T.J. and Ding, P. (2017). Sensitivity Analysis in Observational Research: Introducing the E-Value. Annals of Internal Medicine, 167: 268-274.


Tyler J. VanderWeele, Ph.D., is a Professor of Epidemiology in the Departments of Epidemiology and Biostatistics at the Harvard T.H. Chan School of Public Health. His research concerns methodology for distinguishing between association and causation in observational studies.