This paper was accepted at the Trusted Machine Learning for Healthcare workshop at the ICLR 2023 conference.
When analyzing the robustness of predictive models under distribution shifts, many works focus on generalization in the presence of spurious correlations. In that setting, one typically uses perturbations or environment indicators to enforce independence in the learned model, which guarantees generalization under a range of distribution shifts. In this work, we analyze a class of distribution shifts in which such independence is undesirable, because there is a causal relationship between the covariates and the outcome of interest. This case is common in health care, where covariates can be causally, as opposed to spuriously, associated with the outcome of interest. We formalize this setting and relate it to common distribution shift settings from the literature. We argue theoretically why standard supervised learning and invariant learning will not yield stable predictors in this case, whereas including the causal covariates in the prediction model can restore stability. We demonstrate our theoretical findings in experiments on both synthetic and real data.
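As a minimal illustration of the setting described above (not the paper's actual experiments), the following sketch simulates two environments in which a covariate `z` causally drives the outcome, while the association between an observed feature `x` and `z` flips across environments. The variable names and the data-generating process are our own illustrative assumptions: a predictor using `x` alone is unstable across environments, whereas including the causal covariate `z` restores stability.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    # Ordinary least squares with an intercept; returns slope coefficients only.
    X1 = np.c_[X, np.ones(len(X))]
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[:-1]

def simulate(n, flip):
    # z: hypothetical causal covariate (e.g., a comorbidity score).
    z = rng.normal(size=n)
    # x: observed feature whose association with z flips between environments.
    x = (-z if flip else z) + 0.1 * rng.normal(size=n)
    # The outcome depends causally on z only.
    y = z + 0.1 * rng.normal(size=n)
    return x, z, y

coefs = {}
for flip in (False, True):
    x, z, y = simulate(5000, flip)
    b_x = ols(x[:, None], y)      # feature-only predictor
    b_xz = ols(np.c_[x, z], y)    # predictor including the causal covariate
    coefs[flip] = (b_x[0], b_xz[1])
    print(f"flip={flip}: x-only coef {b_x[0]:+.2f}, z coef (z included) {b_xz[1]:+.2f}")
```

In this toy setup, the coefficient of the feature-only model flips sign between the two environments, while the coefficient on `z` remains close to 1 once `z` is included in the model.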