Implicit Gradient Alignment in Distributed and Federated Learning
Yatin Dandi, Luis Barba, Martin Jaggi
[AAAI-22] Main Track
Abstract:
A major obstacle to achieving global convergence in distributed and federated learning is the misalignment of gradients across clients or mini-batches, caused by the heterogeneity and stochasticity of the distributed data. In this work, we show that data heterogeneity can in fact be exploited to improve generalization performance through implicit regularization.
One way to alleviate the effects of heterogeneity is to encourage the alignment of gradients across different clients throughout training. Our analysis reveals that this goal can be accomplished by using an optimization method that replicates the implicit regularization effect of SGD, leading to gradient alignment as well as improvements in test accuracy.
Since the existence of this regularization in SGD completely relies on the sequential use of different mini-batches during training, it is inherently absent when training with large mini-batches.
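To illustrate this point (a standard second-order expansion, sketched here for clarity; the notation is ours and is not quoted from the paper), consider two consecutive SGD steps of size \eta on mini-batch losses L_1 and L_2:
\[
w_2 \;=\; w_0 - \eta\,\nabla L_1(w_0) - \eta\,\nabla L_2\bigl(w_0 - \eta\,\nabla L_1(w_0)\bigr)
\;\approx\; w_0 - \eta\bigl(\nabla L_1(w_0) + \nabla L_2(w_0)\bigr) + \eta^2\,\nabla^2 L_2(w_0)\,\nabla L_1(w_0).
\]
Averaged over the two possible batch orderings, the O(\eta^2) correction equals \tfrac{\eta^2}{2}\,\nabla\bigl(\nabla L_1\cdot\nabla L_2\bigr); that is, the sequential pair of steps effectively descends the modified loss L_1 + L_2 - \tfrac{\eta}{2}\,\nabla L_1\cdot\nabla L_2, which rewards alignment of the two mini-batch gradients. A single step on the combined batch L_1 + L_2 contains no such cross term.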
To obtain the generalization benefits of this regularization while increasing parallelism, we propose GradAlign, a novel algorithm that induces the same implicit regularization while allowing the use of arbitrarily large batches in each update. We experimentally validate the benefits of our algorithm in different distributed and federated learning settings.
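For intuition only, the sketch below shows one way the cross term above could be recovered with fully parallel gradient evaluations: each worker re-evaluates its gradient at parameters displaced by another worker's gradient, so that a first-order expansion reintroduces the Hessian-gradient cross term. The cyclic pairing scheme, function names, and scaling here are illustrative assumptions, not the exact GradAlign update described in the paper.

# Hypothetical sketch: inducing the sequential-SGD cross term with parallel workers.
# The pairing scheme and names are illustrative assumptions, not the paper's algorithm.
import torch


def displaced_gradient_step(model, losses, lr):
    """One update using gradients evaluated at displaced parameters.

    losses: list of callables, losses[i](model) -> scalar loss of worker/batch i.
    """
    params = [p.detach().clone() for p in model.parameters()]
    n = len(losses)

    # 1) Each worker computes its plain gradient at the current parameters.
    grads = []
    for loss_fn in losses:
        model.zero_grad()
        loss_fn(model).backward()
        grads.append([p.grad.detach().clone() for p in model.parameters()])

    # 2) Each worker re-evaluates its gradient at parameters displaced by the
    #    gradient of the *next* worker (cyclic pairing, chosen only to keep the
    #    sketch short). To first order, grad_i(w - lr * grad_j) contains the
    #    Hessian-gradient cross term from the sequential analysis above.
    displaced_grads = []
    for i, loss_fn in enumerate(losses):
        j = (i + 1) % n
        with torch.no_grad():
            for p, w, g in zip(model.parameters(), params, grads[j]):
                p.copy_(w - lr * g)  # displace by worker j's gradient
        model.zero_grad()
        loss_fn(model).backward()
        displaced_grads.append([p.grad.detach().clone() for p in model.parameters()])

    # 3) Apply the averaged displaced gradients as a single large-batch-style update.
    with torch.no_grad():
        for k, (p, w) in enumerate(zip(model.parameters(), params)):
            avg = sum(dg[k] for dg in displaced_grads) / n
            p.copy_(w - lr * avg)

The cyclic pairing is purely for brevity; the displacing gradient could equally come from any other client or an average over clients, and the same idea carries over to the federated setting, where each displacement would use another client's local update.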
Sessions where this paper appears
- Poster Session 6: Sat, February 26, 8:45 AM - 10:30 AM (+00:00), Blue 3
- Poster Session 7: Sat, February 26, 4:45 PM - 6:30 PM (+00:00), Blue 3