Addressing Label Shift in Distributed Learning via Entropy Regularization
Permanent link
https://hdl.handle.net/10037/36619
Date
2025-01-22
Type
Journal article
Peer reviewed
Abstract
We address the challenge of minimizing "true risk" in multi-node distributed learning (we use the term node to refer to a client, FPGA, APU, CPU, GPU, or worker). These systems are frequently exposed to both inter-node and intra-node "label shifts", which present a critical obstacle to effectively optimizing model performance while ensuring that data remains confined to each node. To tackle this, we propose the Versatile Robust Label Shift (VRLS) method, which enhances the maximum likelihood estimation of the test-to-train label importance ratio. VRLS incorporates Shannon entropy-based regularization and adjusts the importance ratio during training to better handle label shift at test time. In multi-node learning environments, VRLS further extends its capabilities by learning and adapting importance ratios across nodes, effectively mitigating label shifts and improving overall model performance. Experiments conducted on MNIST, Fashion MNIST, and CIFAR-10 demonstrate the effectiveness of VRLS, which outperforms baselines by up to 20% in imbalanced settings. Our theoretical analysis further supports these results by establishing high-probability bounds on the estimation error.
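As a rough sketch of the kind of estimator the abstract describes (the notation, the constraint set, and the placement of the entropy term below are our illustration and may differ from the paper's actual objective), the test-to-train label importance ratio $w \in \mathbb{R}_{\geq 0}^{k}$ can be estimated by maximum likelihood from unlabeled test points $x_1, \ldots, x_m$, using a probabilistic classifier $f_\theta(y \mid x) \approx p_{\mathrm{train}}(y \mid x)$:

\[
\hat{w} \;\in\; \arg\max_{w \,\geq\, 0,\;\, \sum_{y} w_y\, p_{\mathrm{train}}(y) \,=\, 1} \;\; \frac{1}{m} \sum_{i=1}^{m} \log \sum_{y=1}^{k} w_y\, f_\theta(y \mid x_i),
\]

with the predictor trained under a hypothetical Shannon-entropy confidence penalty of weight $\lambda \geq 0$,

\[
\min_{\theta} \;\; \mathbb{E}_{(x,y) \sim p_{\mathrm{train}}} \big[ -\log f_\theta(y \mid x) \big] \;-\; \lambda\, \mathbb{E}_{x \sim p_{\mathrm{train}}} \big[ H\big(f_\theta(\cdot \mid x)\big) \big],
\qquad H(p) = -\textstyle\sum_{y} p_y \log p_y.
\]

The mixture likelihood is well-posed under label shift because $p_{\mathrm{test}}(x)/p_{\mathrm{train}}(x) = \sum_{y} w_y\, p_{\mathrm{train}}(y \mid x)$, while the entropy term discourages the overconfident predictions that can bias plug-in ratio estimates.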
Description
Source at https://openreview.net/forum?id=kuYxecnlv2.
Publisher
ICLR
Citation
Wu, Choi, Cevher, Ramezani-Kebrya. Addressing Label Shift in Distributed Learning via Entropy Regularization. International Conference on Learning Representations (ICLR). 2025.
Copyright 2025 The Author(s)