Group-robust Sample Reweighting for Subpopulation Shifts via Influence Functions

Part of International Conference on Representation Learning 2025 (ICLR 2025) Conference


Authors

Rui Qiao, Zhaoxuan Wu, Jingtan Wang, Pang Wei Koh, Bryan Kian Hsiang Low

Abstract

Machine learning models often have uneven performance among subpopulations (a.k.a. groups) in the data distribution. This poses a significant challenge for the models to generalize when the proportions of the groups shift during deployment. To improve robustness to such shifts, existing approaches have developed strategies that train models or perform hyperparameter tuning using group-labeled data to minimize the worst-case loss over groups. However, a non-trivial amount of high-quality labels is often required to obtain noticeable improvements. Given the costliness of the labels, we propose a different paradigm to enhance group label efficiency: utilizing the group-labeled data as a target set to optimize the weights of other group-unlabeled data. We introduce Group-robust Sample Reweighting (GSR), a two-stage approach that first learns representations from group-unlabeled data, and then fine-tunes the model by iteratively retraining its last layer on the reweighted data using influence functions. GSR is theoretically sound, practically lightweight, and effective in improving robustness to subpopulation shifts. In particular, it outperforms previous state-of-the-art approaches that require the same amount of, or even more, group labels. Our code is available at https://github.com/qiaoruiyt/GSR.
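To make the core idea concrete, the sketch below illustrates influence-function-based sample reweighting for a linear last layer on synthetic data. This is a simplified, hypothetical illustration, not the authors' implementation: it uses a ridge-regularized least-squares last layer, a plain (unlabeled-by-group) training set, and a target set standing in for the group-labeled data, and it omits the worst-group aggregation and the representation-learning first stage. All names and dimensions are made up for the example.

```python
# Hedged sketch of influence-based reweighting (NOT the official GSR code).
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: frozen features X with a weighted ridge-regression last layer;
# (Xt, yt) is a small "target set" standing in for the group-labeled data.
n_train, n_target, d = 40, 10, 5
X = rng.normal(size=(n_train, d))
y = rng.normal(size=n_train)
Xt = rng.normal(size=(n_target, d))
yt = rng.normal(size=n_target)
lam = 1e-2  # ridge regularization strength

def fit_last_layer(w):
    # Weighted ridge solution: theta = (X^T W X + lam I)^{-1} X^T W y
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X + lam * np.eye(d), Xw.T @ y)

w = np.ones(n_train)  # per-sample weights to be optimized
theta = fit_last_layer(w)
initial_loss = np.mean((Xt @ theta - yt) ** 2)

for step in range(50):
    theta = fit_last_layer(w)
    # Hessian of the weighted training objective at theta
    H = (X * w[:, None]).T @ X + lam * np.eye(d)
    # Gradient of the target-set loss w.r.t. theta
    g_target = Xt.T @ (Xt @ theta - yt) / n_target
    # Influence of upweighting sample i on the target loss:
    #   I_i = -g_target^T H^{-1} grad_i, with grad_i = (x_i^T theta - y_i) x_i
    resid = X @ theta - y
    infl = -(X @ np.linalg.solve(H, g_target)) * resid
    # Gradient step on the weights (I_i is dL_target/dw_i); keep weights >= 0
    w = np.clip(w - 0.5 * infl, 0.0, None)

theta = fit_last_layer(w)
final_loss = np.mean((Xt @ theta - yt) ** 2)
```

The inner model is refit in closed form each iteration, so the "iterative retraining of the last layer" amounts to alternating a cheap solve with an influence-guided weight update; the paper's actual objective additionally targets the worst-case group loss.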