\documentclass[runningheads]{llncs}
%
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
%\usepackage{amsthm}
\usepackage{graphicx}
\usepackage{xcolor}
\usepackage{soul}
\usepackage{hyperref}
\usepackage{algorithm}
\usepackage{dsfont}
\usepackage{caption}
\usepackage{subcaption}
\input{macros}
% Used for displaying a sample figure. If possible, figure files should
% be included in EPS format.
%
% If you use the hyperref package, please uncomment the following line
% to display URLs in blue roman font according to Springer's eBook style:
% \renewcommand\UrlFont{\color{blue}\rmfamily}
\begin{document}
%
%\title{D-Cliques: Topology can compensate NonIIDness in Decentralized Federated Learning}
\title{D-Cliques: Compensating NonIIDness in Decentralized Federated Learning
with Topology}
%
\titlerunning{D-Cliques}
% If the paper title is too long for the running head, you can set
% an abbreviated paper title here
%
%\author{Aur\'elien Bellet\inst{1}\thanks{Authors in alphabetical order of last names, see Section 'Credits' for respective contributions.} \and
%Anne-Marie Kermarrec\inst{2} \and
%Erick Lavoie\inst{2}}
%%
%\authorrunning{A. Bellet, A-M. Kermarrec, E. Lavoie}
%% First names are abbreviated in the running head.
%% If there are more than two authors, 'et al.' is used.
%%
%\institute{Inria, Lille, France\\
%\email{aurelien.bellet@inria.fr} \and
%EPFL, Lausanne, Switzerland \\
%\email{\{anne-marie.kermarrec,erick.lavoie\}@epfl.ch}\\
%}
%
\maketitle % typeset the header of the contribution
%
\begin{abstract}
The convergence speed of machine learning models trained with Federated
Learning is significantly affected by non-independent and identically
distributed (non-IID) data partitions, even more so in a fully decentralized
setting without a central server. In this paper, we show that the impact of
\textit{local class bias} can be significantly reduced by carefully designing
the underlying communication topology. We present D-Cliques, a novel topology
that reduces gradient bias by grouping nodes in interconnected cliques such
that the local joint distribution in a clique is representative of the global
class distribution. We also show how to adapt the updates of decentralized SGD
to obtain unbiased gradients and effective momentum with D-Cliques. Our
empirical evaluation on MNIST and CIFAR10 demonstrates that our approach
provides a convergence speed similar to that of a fully-connected topology,
with a significant reduction in the number of edges and messages. In a 1000-node
topology, D-Cliques requires 98\% fewer edges and 96\% fewer total messages,
with further possible gains using a small-world topology across cliques.
% Our study paves the way for tackling more general types of data non-IIDness
% through the design of appropriate topologies.
\keywords{Decentralized Learning \and Federated Learning \and Topology \and
Non-IID Data \and Stochastic Gradient Descent}
% 1/ Decentralized FL approaches can be more scalable than Centralized FL approach when the number of nodes is large
% 2/ It is well known the topology can affect convergence of decentralized algorithms, as shown by classic convergence analysis. However the effect of topology has been observed to be often quite small in practice. This is because most of these results were obtained for iid data.
% 3/ In this paper, we show that the effect of topology is very significant for non-iid data. Unlike for centralized FL approaches, this happens even when nodes perform a single local update before averaging. We propose an approach to design a sparse data-aware topology which recovers the convergence speed of a centralized approach.
% 4/ An originality of our approach is to work at the topology level without changing the original efficient and simple D-SGD algorithm. Other work to mitigate the effect of non-iid on decentralized algorithms are based on performing modified updates (eg with variance reduction) or multiple averaging steps.
Machine learning is currently shifting from a \emph{centralized}
paradigm, in which models are trained on data located on a single machine or
in a data center, to \emph{decentralized} ones.
Indeed, the latter paradigm closely matches the natural data distribution
in the numerous use-cases where data is collected and processed by several
independent
parties (hospitals, companies, personal devices...).
Federated Learning (FL) allows a set of participants to collaboratively train
machine learning models on their local
data while keeping it where it has been produced. Not only does this avoid
the costs of moving data, but it also mitigates privacy and confidentiality concerns~\cite{kairouz2019advances}.
Yet, such data distribution introduces new challenges for learning systems, as
local datasets
reflect the usage and production patterns specific to each participant: they are
\emph{not} independent and identically distributed
(non-IID). More specifically, the relative frequency of different classes of examples may significantly vary
across local datasets \cite{quagmire}.
Therefore, one of the key challenges in FL is to design algorithms that
can efficiently deal with such non-IID data
\cite{kairouz2019advances,fedprox,scaffold,quagmire}.
Federated learning algorithms can be classified into two categories depending
on the underlying network topology they run on. In server-based FL, the
network is organized according to a star topology: a central server orchestrates the training process by
iteratively aggregating model updates received from the participants
(\emph{clients}) and sending
them back the aggregated model \cite{mcmahan2016communication}. In contrast,
fully decentralized FL algorithms operate over an arbitrary network topology
where participants communicate only with their direct neighbors
in the network. A classic example of such algorithms is Decentralized
SGD (D-SGD) \cite{lian2017d-psgd}, in which participants alternate between
local SGD updates and model averaging with neighboring nodes.
In this paper, we focus on fully decentralized algorithms as they can
generally scale better to the large number of participants seen in ``cross-device''
applications \cite{kairouz2019advances}. Indeed, while a central
server may quickly become a bottleneck as the number of participants increases, the topology used in fully decentralized algorithms can remain sparse
enough such that all participants need only communicate with a small number of other participants, i.e. nodes have a small (constant or logarithmic) degree
\cite{lian2017d-psgd}. For IID data, recent work has shown both empirically
\cite{lian2017d-psgd,Lian2018} and theoretically \cite{neglia2020} that sparse
topologies like rings or grids do not significantly affect the convergence
speed compared to denser topologies.
% We also note that full decentralization can also provide benefits in terms of
% privacy protection \cite{amp_dec}.
% From directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py ring/iid/all/2021-03-30-16:07:06-CEST ring/non-iid/all/2021-03-30-16:07:03-CEST --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/ring-IID-vs-non-IID.png --font-size 20
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ring-IID-vs-non-IID}
\caption{\label{fig:ring-IID-vs-non-IID} Ring}
\end{subfigure}
\quad
% From directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py grid/iid/all/2021-03-30-16:07:01-CEST grid/non-iid/all/2021-03-30-16:06:59-CEST --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/grid-IID-vs-non-IID.png --font-size 20
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/grid-IID-vs-non-IID}
\caption{\label{fig:grid-IID-vs-non-IID} Grid}
\end{subfigure}
\quad
% From directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py fully-connected/iid/all/2021-03-30-16:07:20-CEST fully-connected/all/2021-03-10-09:25:19-CET --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/fully-connected-IID-vs-non-IID.png --font-size 20
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fully-connected-IID-vs-non-IID}
\caption{\label{fig:fully-connected-IID-vs-non-IID} Fully-connected}
\end{subfigure}
\caption{IID vs non-IID convergence speed of decentralized SGD for
logistic regression on
MNIST for different topologies. Bold lines show the average test
accuracy across nodes
while thin lines show the minimum
and maximum accuracy of individual nodes. While the effect of topology
is negligible for IID data, it is very significant in the
non-IID case. When fully-connected, both cases converge similarly. See
Section~\ref{section:experimental-settings} for details on
the experimental setup.}
\label{fig:iid-vs-non-iid-problem}
\end{figure}
In contrast to the IID case however, our experiments demonstrate that \emph{the impact of topology is extremely significant for non-IID data}. This phenomenon is illustrated
in Figure~\ref{fig:iid-vs-non-iid-problem}: we observe that a ring or
a grid topology clearly jeopardizes the convergence speed when the relative
frequency of classes in the local distributions differs from that of the global
distribution, i.e. when the local distributions exhibit \textit{local class bias}. We stress the fact that, unlike in centralized FL
\cite{kairouz2019advances,scaffold,quagmire}, this
happens even when nodes perform a single local update before averaging the
model with their neighbors. In this paper, we address the following question:
% \textit{Are there regular topologies, i.e. where all nodes have similar or the same number of neighbours, with less connections than a fully-connected graph that retain a similar convergence speed and non-IID behaviour?}
%\textit{Are there sparse topologies with similar convergence speed as the fully connected graph under a large number of participants with local class bias?}
\textit{Can we design sparse topologies with convergence
speed similar to the one obtained in a fully connected network under
a large number of participants with local class bias?}
%AMK: do we talk about local class bias or noniidness?
%Indeed, as we show with the following contributions:
In this paper, we make the following contributions:
(1) We propose D-Cliques, a sparse topology in which nodes are organized in
interconnected cliques, i.e. locally fully-connected sets of nodes, such that
the joint data distribution of each clique is representative of the global
(IID) distribution; (2) We propose Clique Averaging, a modified version of
the standard D-SGD algorithm which decouples gradient averaging, used for
optimizing local models, from distributed averaging, used to ensure all models
converge, therefore reducing the bias introduced by inter-clique connections;
(3) We show how Clique Averaging can be used to implement unbiased momentum
that would otherwise be detrimental in a non-IID setting; (4) We demonstrate
through an extensive experimental study that our approach removes the effect
of the local class bias both for the MNIST~\cite{mnistWebsite} and
CIFAR10~\cite{krizhevsky2009learning} datasets, for training a linear model and a deep
convolutional network; (5) Finally, we demonstrate the scalability of our
approach by considering up to 1000 node networks, in contrast to most
previous work on fully decentralized learning that considers only a few tens
of participants~\cite{tang18a,neglia2020,momentum_noniid,cross_gradient,consensus_distance}.
%we show that these results hold up to 1000 participants, in contrast to most previous work on fully decentralized algorithms that considers only a few tens of participants \cite{tang18a,more_refs}.
For instance, our results show that using D-Cliques in a 1000-node network
requires 98\% fewer edges ($18.9$ vs $999$ edges per participant on average), thus yielding a 96\% reduction in the total number of required messages (37.8 messages per round per node on average instead of 999), to obtain a convergence speed similar to that of a fully-connected topology. Furthermore, an additional 22\% improvement
% (14.5 edges per node on average instead of 18.9)
is possible when using a small-world inter-clique topology, with further potential gains at larger scales because of its linear-logarithmic scaling.
The rest of this paper is organized as follows. We first present the problem
statement and our methodology (Section~\ref{section:problem}). The D-Cliques
design is presented in Section~\ref{section:d-cliques} along with an
empirical illustration of its benefits. In
Section~\ref{section:clique-averaging-momentum}, we
show how to further reduce bias with Clique Averaging and how to use it to
implement momentum. We present the results of our extensive experimental
study in Section~\ref{section:non-clustered}. We review some related work in
Section~\ref{section:related-work}, and conclude with promising directions
for future work in Section~\ref{section:conclusion}.
%When % then explain how to construct D-Cliques and show their benefits (Section~\ref{section:d-cliques}). We show how to further reduce bias with Clique Averaging (Section~\ref{section:clique-averaging}). We then show how to use Clique Averaging to implement momentum (Section~\ref{section:momentum}). Having shown the effectiveness of D-Cliques, we evaluate the importance of clustering (Section~\ref{section:non-clustered}), and full intra-clique connections (Section~\ref{section:intra-clique-connectivity}). Having established the design, we then study how best to scale it (Section~\ref{section:interclique-topologies}). We conclude with a survey of related work (Section~\ref{section:related-work}) and a brief summary of the paper (Section~\ref{section:conclusion}).
%\footnotetext{This is different from the accuracy of the average model across nodes that is sometimes used once training is completed.}
\section{Problem Statement}
\label{section:problem}
We consider a set of $n$ nodes $N = \{1, \dots, n \}$ seeking to
collaboratively solve a classification task with $c$ classes.
% where each node can communicate with its neighbours according to the mixing matrix $W$ in which $W_{ij}$ defines the \textit{weight} of the outgoing connection from node $i$ to $j$. $W_{ij} = 0$ means that there is no connection from node $i$ to $j$ and $W_{ij} > 0$ means there is a connection.
%AMK:explain the weight
%Training data is sampled from a global distribution $D$ unknown to the nodes.
%AMK:Removed the sentence above
Each node has access to a local dataset that
follows its own local distribution $D_i$. The goal is to find a global model
$x$ that performs well on the union of the local distributions by minimizing
the average training loss:
\begin{equation}
\min_{x} \frac{1}{n}\sum_{i=1}^{n} \mathds{E}_{s_i \sim D_i} [F_i(x;s_i)],
\label{eq:dist-optimization-problem}
\end{equation}
where $s_i$ is a data example drawn from $D_i$ and $F_i$ is the loss function
on node $i$. Therefore, $\mathds{E}_{s_i \sim D_i} F_i(x;s_i)$ denotes the
expected loss of model $x$ on a random example $s_i$ drawn from $D_i$.
To collaboratively solve Problem \eqref{eq:dist-optimization-problem}, each
node can exchange messages with its neighbors in an undirected network graph
$G(N,E)$ where $\{i,j\}\in E$ denotes an edge (communication channel)
between nodes $i$ and $j$.
%AMK: if we need space this could be a paragraph
In this work, we use the popular Decentralized Stochastic
Gradient Descent algorithm, aka D-SGD~\cite{lian2017d-psgd}. As
shown in Algorithm~\ref{Algorithm:D-PSGD},
%AMK: can we say why: most popular, most efficient ?
a single iteration of D-SGD at node $i$ consists of sampling a mini-batch
from its local distribution $D_i$, updating its local model $x_i$ by taking a stochastic gradient descent
(SGD) step according to the mini-batch, and performing a weighted average of
its local model with those of its neighbors.
This weighted average is defined by a
mixing matrix $W$, in which $W_{ij}$ corresponds to the weight of
the outgoing connection from node $i$ to $j$ and $W_{ij} = 0$ for $
\{i,j\}\notin
E$. To ensure that the local models converge on average to a (local) optimum
of Problem
\eqref{eq:dist-optimization-problem}, $W$
must be doubly
stochastic ($\sum_{j \in N} W_{ij} = 1$ and $\sum_{j \in N} W_{ji} = 1$) and
symmetric, i.e. $W_{ij} = W_{ji}$, see \cite{lian2017d-psgd}.
\begin{algorithm}[t]
   \caption{D-SGD, Node $i$}
   \label{Algorithm:D-PSGD}
   \begin{algorithmic}[1]
        \State \textbf{Require} initial model parameters $x_i^{(0)}$,
        learning rate $\gamma$, mixing weights $W$, mini-batch size $m$,
        number of steps $K$
        \For{$k = 1,\ldots, K$}
\State $s_i^{(k)} \gets \text{mini-batch sample of size $m$ drawn
from~} D_i$
\State $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma \nabla F(x_i^{(k-1)}; s_i^{(k)})$
\State $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$
\EndFor
\end{algorithmic}
\end{algorithm}
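For illustration, the following minimal sketch (our own, not part of D-SGD itself; \texttt{sample\_minibatch} and \texttt{grad} are hypothetical helpers standing in for mini-batch sampling and loss gradient computation) simulates one synchronous D-SGD round over all nodes:
\begin{verbatim}
import numpy as np

def dsgd_round(x, W, gamma, data, grad, sample_minibatch):
    # x: (n, d) array of local models; W: (n, n) doubly stochastic
    # mixing matrix. Simulates one synchronous round at all n nodes.
    half = np.empty_like(x)
    for i in range(len(x)):
        s_i = sample_minibatch(data[i])           # mini-batch from D_i
        half[i] = x[i] - gamma * grad(x[i], s_i)  # local SGD step
    return W.T @ half   # x_i <- sum_j W_ji * x_j^(k-1/2)
\end{verbatim}
This mirrors the simulation strategy used in our experiments, where all models are kept in memory and message exchanges are replaced by array operations.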
As demonstrated in Figure~\ref{fig:iid-vs-non-iid-problem}, lifting the
assumption of IID data significantly challenges the learning algorithm. In
this paper, we focus on an \textit{extreme case of local class bias}: we
consider that each node only has examples from a single class.
% Our results should generalize to lesser, and more
% frequent, cases.
%AMK: a bit weak can't we say our results generalize....
%: e.g., if some classes are globally less represented, the position of the nodes with the rarest classes will be significant; and if two local datasets have different number of examples, the examples in the smaller dataset may be visited more often than those in a larger dataset, skewing the optimization process.
To isolate the effect of local class bias from other potentially compounding
factors, we make the following simplifying assumptions: (1) All classes are
equally represented in the global dataset; (2) All classes are represented on
the same number of nodes; (3) All nodes have the same number of examples.
We believe that these assumptions are reasonable in the context of our study
because: (1)
Global class
imbalance equally
affects the optimization process on a single node and is therefore not
specific to the decentralized setting; (2) Our results do not exploit specific
positions in the topology; (3) Imbalanced dataset sizes across nodes can be
addressed for instance by appropriately weighting the individual loss
functions.
% with less examples could
% simply skip some rounds until the nodes with more examples catch up.
We believe our results can be extended to account for additional compounding factors in future work.
\subsubsection{Experimental setup.}
\label{section:experimental-settings}
%AMK: j'aurais mis ca dans la section eval car je n'aurais pas mélangé design et eval.
Our main goal is to provide a fair comparison of the convergence speed across
different topologies and algorithmic variations, in order to
show that our approach
can remove much of the effect of local class bias.
We experiment with two datasets: MNIST~\cite{mnistWebsite} and
CIFAR10~\cite{krizhevsky2009learning}, which both have $c=10$ classes.
For MNIST, we use 45k and 10k examples from the original 60k
training set for training and validation respectively. The remaining 5k
training examples were randomly removed to ensure all 10 classes are balanced
while ensuring the dataset is evenly divisible across 100 and 1000 nodes.
We use all 10k examples of
the test set to measure test accuracy. For CIFAR10, classes are evenly
balanced: we use 45k/50k images of the original training set for training,
5k/50k for validation, and all 10k examples of the test set for measuring
prediction accuracy.
We
use a logistic regression classifier for MNIST, which
achieves an accuracy of up to 92.5\% in the centralized setting.
% compared to
% $99\%$ for the state-of-the-art~\cite{mnistWebsite}.
For CIFAR10, we use a Group-Normalized variant of LeNet~\cite{quagmire}, a
deep convolutional network which achieves an accuracy of $72.3\%$ in the
centralized setting.
% compared to the 99\% achieved by start-of-the-art.
These models are thus reasonably accurate (which is sufficient to
study the effect of the topology) while being sufficiently fast to train in a
fully decentralized setting, and are simple enough to configure and analyze.
Regarding hyper-parameters, we jointly optimized the learning rate and
mini-batch size on the
validation set for 100 nodes, obtaining respectively $0.1$ and $128$ for
MNIST and $0.002$ and $20$ for CIFAR10.
For CIFAR10, we additionally use a momentum of $0.9$.
We evaluate 100- and 1000-node networks by creating multiple models in memory and simulating the exchange of messages between nodes.
To ignore the impact of distributed execution strategies and system
optimization techniques, we report the test accuracy of all nodes (min, max,
average) as a function of the number of times each example of the dataset has
been sampled by a node, i.e. an \textit{epoch}. This is equivalent to the classic case of a single node sampling the full distribution.
To further make results comparable across different numbers of nodes, we scale
the batch size inversely with the number of nodes:
e.g. on MNIST, 12800 with 1 node, 128 with 100 nodes, and 13 with 1000 nodes. This
ensures the same number of model updates and averaging steps per epoch, which is
important for a fair comparison.\footnote{Updating and averaging models
after every example can eliminate the impact of local class bias. However, the
resulting communication overhead is impractical.}
Finally, we compare our results against an ideal baseline: either a
fully-connected network topology with the same number of nodes or a single IID
node. In both approaches, the topology has no effect on
the optimization. For a given choice of the number of nodes and
mini-batch size, both approaches are equivalent. %ensure a single
% model is optimized, which therefore removes the effect of the topology. While, both approaches compute an equivalent gradient with the same expectation, we favored using a single IID node for CIFAR10 for the sake of training speed.
\section{D-Cliques: Creating Locally Representative Cliques}
\label{section:d-cliques}
In this section, we present the design of D-Cliques. To give an intuition of our approach, let us consider the neighborhood of a single node in a grid similar to that of Figure~\ref{fig:grid-IID-vs-non-IID}, represented in Figure~\ref{fig:grid-iid-vs-non-iid-neighbourhood}.
% where each color represents a class of data.
The colors of a node represent the different classes it holds
locally. In the IID setting (Figure~\ref{fig:grid-iid-neighbourhood}), each
node has examples of all classes in equal proportions. In the non-IID setting
(Figure~\ref{fig:grid-non-iid-neighbourhood}), each node has examples of only
a
single class and nodes are distributed randomly in the grid. A single training step, from the point of view of the center node, is equivalent to sampling a mini-batch five times larger from the union of the local distributions of all illustrated nodes.
In the IID case, since gradients are computed from examples of all classes,
the resulting average gradient points in a direction that reduces the
loss across all classes. In contrast, in the non-IID case, only a subset of classes are
represented in the immediate neighborhood of the node and the gradients will
be biased towards these classes. % more than in the IID case.
Importantly, as the distributed averaging algorithm takes several steps to
converge, this variance persists across iterations as the locally computed
gradients are far from the global average.\footnote{It is possible, but
very costly, to mitigate this by performing a sufficiently large number of
averaging steps between each gradient step.} This can significantly slow down
convergence speed to the point of making decentralized optimization
impractical.
%For an intuition on the effect of local class bias, examine the neighbourhood of a single node in a grid similar to that of Figure~\ref{fig:grid-IID-vs-non-IID}. As illustrated in Figure~\ref{fig:grid-iid-vs-non-iid-neighbourhood}, the color of a node, represented as a circle, corresponds to a different class. In the IID setting (Figure~\ref{fig:grid-iid-neighbourhood}), each node has examples of all classes in equal proportions. In the non-IID setting (Figure~\ref{fig:grid-non-iid-neighbourhood}), each node has examples of only a single class and nodes are distributed randomly in the grid. A single training step, from the point of view of the middle node, is equivalent to sampling a mini-batch five times larger from the union of the local distributions of all illustrated nodes.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/grid-iid-neighbourhood}
\caption{\label{fig:grid-iid-neighbourhood} IID}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/grid-non-iid-neighbourhood}
\caption{\label{fig:grid-non-iid-neighbourhood} Non-IID}
\end{subfigure}
\caption{Neighborhood of a node in an IID and a non-IID grid.}
\label{fig:grid-iid-vs-non-iid-neighbourhood}
\end{figure}
In D-Cliques, we address the issue of non-IIDness by carefully designing a
network topology composed of \textit{cliques} and \textit{inter-clique
connections}:
\begin{itemize}
\item D-Cliques recovers a balanced representation of classes, similar to
that of the IID case, by constructing a topology such that each node is
part of a \textit{clique} with neighbors representing all classes.
\item To ensure a global consensus and convergence,
\textit{inter-clique connections}
are introduced by connecting a small number of node pairs that are
part of different cliques.
\end{itemize}
In the following, we introduce one inter-clique connection per node such that each clique has exactly one
edge with all other cliques, see Figure~\ref{fig:d-cliques-figure} for the
corresponding D-Cliques network in the case of $n=100$ nodes and $c=10$
classes. We will explore sparser inter-clique topologies in Section~\ref{section:interclique-topologies}.
The mixing matrix $W$ required by D-SGD is obtained from the above
topology using standard
Metropolis-Hastings weights~\cite{xiao2004fast}:
\begin{equation}
  W_{ij} = \begin{cases}
    \frac{1}{\max(\text{degree}(i), \text{degree}(j)) + 1} & \text{if}~i \neq
    j \text{ and } \{i,j\}\in E,\\
    1 - \sum_{j \neq i} W_{ij} & \text{if}~i = j, \\
    0 & \text{otherwise}.
  \end{cases}
\end{equation}
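As a minimal sketch (our own illustration, assuming the topology is given as a list of neighbor sets), these weights can be computed and checked to be symmetric and doubly stochastic as follows:
\begin{verbatim}
import numpy as np

def metropolis_hastings_weights(adj):
    # adj: list of neighbor sets of an undirected graph
    n = len(adj)
    W = np.zeros((n, n))
    for i in range(n):
        for j in adj[i]:
            W[i, j] = 1.0 / (max(len(adj[i]), len(adj[j])) + 1)
        W[i, i] = 1.0 - W[i].sum()  # self-weight
    # symmetric and doubly stochastic by construction
    assert np.allclose(W, W.T) and np.allclose(W.sum(axis=1), 1.0)
    return W
\end{verbatim}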
We refer to Algorithm~\ref{Algorithm:D-Clique-Construction} in the appendix
for a formal account of D-Cliques construction. We note that it only requires
the knowledge of the local class distribution at each node. For the sake of
simplicity, we assume that D-Cliques is constructed from the global
knowledge of these distributions, which can easily be obtained by
decentralized averaging in a pre-processing step.
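Under the simplifying assumptions of Section~\ref{section:problem} (a single class per node, each class held by the same number of nodes), the construction reduces to grouping one node of each class per clique. The following minimal sketch illustrates this special case (it is not Algorithm~\ref{Algorithm:D-Clique-Construction} itself, which handles general local distributions):
\begin{verbatim}
import random

def build_cliques(node_class, c):
    # node_class: list mapping node index -> class label;
    # assumes each class is held by exactly n/c nodes.
    by_class = [[] for _ in range(c)]
    for node, k in enumerate(node_class):
        by_class[k].append(node)
    for nodes in by_class:
        random.shuffle(nodes)
    # each clique takes one node of every class
    return [list(clique) for clique in zip(*by_class)]
\end{verbatim}
Each returned clique is then fully connected internally, and inter-clique edges are added as described above.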
The key idea of D-Cliques is that because the clique-level distribution $D_{
\textit{clique}} = \sum_{i
\in \textit{clique}} D_i$ is representative of the global distribution,
the local models of nodes across cliques remain rather close. Therefore, a
sparse inter-clique topology can be used, significantly reducing the total
number of edges without slowing down the convergence. Furthermore, the degree
of each node in the network remains low and even, making the D-Cliques
topology very well-suited to decentralized federated learning.
%We centrally generate the topology, which is then tested in a custom simulator. We expect our approach should be straightforward to adapt for a decentralized execution: the presence and relative frequency of global classes could be computed using PushSum~\cite{kempe2003gossip}, and neighbours could be selected with PeerSampling~\cite{jelasity2007gossip}.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fully-connected-cliques}
\caption{\label{fig:d-cliques-figure} D-Cliques (fully-connected
cliques)}
\end{subfigure}
\hfill
% To regenerate figure, from results/mnist
% python ../../../learn-topology/tools/plot_convergence.py fully-connected/all/2021-03-10-09:25:19-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET --add-min-max --yaxis test-accuracy --ymin 80 --ymax 92.5 --labels '100 nodes non-IID fully-connected' '100 nodes non-IID d-cliques' --save-figure ../../figures/d-cliques-mnist-vs-fully-connected.png --legend 'lower right' --font-size 16
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-vs-fully-connected.png}
\caption{\label{fig:d-cliques-example-convergence-speed} Convergence Speed
on MNIST}
\end{subfigure}
\caption{\label{fig:d-cliques-example} D-Cliques topology and convergence
speed on MNIST.}
\end{figure}
Figure~\ref{fig:d-cliques-example-convergence-speed} illustrates the
performance of D-Cliques on MNIST with $n=100$ nodes. The convergence speed is
very close
to that of a fully-connected topology, and significantly better than with
a ring or a grid (see Figure~\ref{fig:iid-vs-non-iid-problem}). With
100 nodes, it offers a reduction of $\approx90\%$ in the number of edges
compared to a fully-connected topology. Nonetheless, there is still
significant variance in the accuracy across nodes, which we address in
the next section by removing the bias introduced by inter-clique edges.
%The degree of \textit{skew} of local distributions $D_i$, i.e. how much the local distribution deviates from the global distribution on each node, influences the minimal size of cliques.
%
%The global distribution of classes, for classification tasks, can be computed from the distribution of class examples on the nodes, with Distributed Averaging (CITE). Given the global distribution of classes, neighbours within cliques can be chosen based on a PeerSampling (CITE) service. Both services can be implemented such that they converge in a logarithmic number of steps compared to the number of nodes. It is therefore possible to obtain this information in a scalable way.
%
% In the rest of this paper, we assume these services are available and show that the approach provides a useful convergence speed after the cliques have been formed.
\section{Optimizing with Clique Averaging and Momentum}
\label{section:clique-averaging-momentum}
In this section we present Clique Averaging, a simple modification of D-SGD
which removes the bias caused by the inter-cliques edges of
D-Cliques, and show how this can be used to successfully implement momentum
for non-IID data.
\subsection{Clique Averaging: Debiasing Gradients from Inter-Clique Edges}
\label{section:clique-averaging}
While limiting the number of inter-clique connections reduces the
amount of data traveling on the network, it can also introduce some bias.
Figure~\ref{fig:connected-cliques-bias} illustrates the problem with the
simplest case of two cliques connected by one inter-clique edge (here,
between the green node of the left clique and the purple node of the right
clique). Let us focus on node A. With Metropolis-Hastings weights, node A's self-weight
is $\frac{12}{110}$, the weight between A and the green node connected to B is
$\frac{10}{110}$, and
all other neighbors of A have a weight of $\frac{11}{110}$. Therefore, the
gradient at A is biased towards its own class (purple) and against the green
class. The same holds for all other nodes
without inter-clique edges with respect to their respective classes. For node
B, all its edge weights (including its self-weight) are equal to
$\frac{1}{11}$. However, the green class is represented twice (once as a clique
neighbor and once from the inter-clique edge), while all other classes are
represented only once. This biases the gradient toward the green class. The
combined effect of these two sources of bias is to increase the variance
of the local models across nodes.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/connected-cliques-bias}
\caption{\label{fig:connected-cliques-bias} Illustrating the bias induced by
inter-clique connections (see main text).}
\end{figure}
We address this problem by adding \emph{Clique Averaging} to D-SGD
(Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}), which essentially
decouples gradient averaging from model averaging. Only the gradients of
neighbors within the same clique are used to compute the average gradient,
providing an equal representation to all classes. In contrast, all neighbors'
models, including those across inter-clique edges, participate in the model
averaging step as in the original version.
\begin{algorithm}[t]
\caption{D-SGD with Clique Averaging, Node $i$}
\label{Algorithm:Clique-Unbiased-D-PSGD}
\begin{algorithmic}[1]
\State \textbf{Require} initial model parameters $x_i^{(0)}$, learning
rate $\gamma$, mixing weights $W$, mini-batch size $m$, number of
steps $K$
\For{$k = 1,\ldots, K$}
\State $s_i^{(k)} \gets \text{mini-batch sample of size $m$ drawn
from~} D_i$
\State $g_i^{(k)} \gets \frac{1}{|\textit{Clique}(i)|}\sum_{j \in \textit{Clique(i)}} \nabla F(x_j^{(k-1)}; s_j^{(k)})$
\State $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma g_i^{(k)}$
\State $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$
\EndFor
\end{algorithmic}
\end{algorithm}
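In a simulation, the gradient-averaging step of Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD} amounts to the following (a sketch; \texttt{local\_grads} is a hypothetical mapping from clique members to the gradients they computed on their own models and mini-batches):
\begin{verbatim}
def clique_averaged_gradient(local_grads, clique_of_i):
    # local_grads[j]: gradient of F(x_j; s_j) computed by member j
    # returns g_i, the gradient averaged over node i's clique
    return sum(local_grads[j] for j in clique_of_i) / len(clique_of_i)
\end{verbatim}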
% To regenerate figure, from results/mnist:
% python ../../../learn-topology/tools/plot_convergence.py fully-connected/all/2021-03-10-09:25:19-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET --add-min-max --yaxis test-accuracy --labels '100 nodes non-IID fully-connected' '100 nodes non-IID d-cliques w/o clique avg.' '100 nodes non-IID w/ clique avg.' --legend 'lower right' --ymin 89 --ymax 92.5 --font-size 13 --save-figure ../../figures/d-clique-mnist-clique-avg.png
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{figures/d-clique-mnist-clique-avg}
\caption{\label{fig:d-clique-mnist-clique-avg} Effect of Clique Averaging on MNIST. The y-axis starts at 89\% accuracy.}
\end{figure}
As illustrated in Figure~\ref{fig:d-clique-mnist-clique-avg}, this
significantly reduces the variance of models across nodes and accelerates
convergence speed to reach the same level as the one obtained with a
fully-connected topology. There is a small additional cost, since gradients
and models need to be sent in two separate rounds of messages. Nonetheless, compared to fully connecting all nodes, the total number of messages is reduced by $\approx 80\%$.
\subsection{Implementing Momentum with Clique Averaging}
Efficiently training higher capacity models usually requires additional
optimization techniques. In particular, momentum~\cite{pmlr-v28-sutskever13}
increases the magnitude of the components of the gradient that are shared
between several consecutive steps, and is critical for deep convolutional networks like
LeNet~\cite{lecun1998gradient,quagmire} to converge quickly. However, a direct
application of momentum in a non-IID setting can actually be very detrimental.
As illustrated in Figure~\ref{fig:d-cliques-cifar10-momentum-non-iid-effect}
for the case of LeNet on CIFAR10 with 100 nodes, D-Cliques with momentum
fails to converge. Removing momentum restores convergence,
but there remains a significant gap compared to the case of a single
IID node using momentum.
\begin{figure}[t]
\centering
% To regenerate figure, from results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET no-init-no-clique-avg-no-momentum/fully-connected-cliques/all/2021-03-26-13:47:35-CET/ --legend 'upper right' --add-min-max --labels '1-node IID w/ momentum' '100 nodes non-IID d-cliques w/ momentum' '100 nodes non-IID d-cliques w/o momentum' --font-size 14 --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-momentum-non-iid-effect.png --ymax 100
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-momentum-non-iid-effect}
\caption{\label{fig:d-cliques-cifar10-momentum-non-iid-effect} Without Clique Averaging }
\end{subfigure}
\hfill
% To regenerate figure, from results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET --legend 'upper right' --add-min-max --labels '1-node IID w/ momentum' '100 nodes non-IID d-clique w/ momentum' --font-size 14 --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-momentum-non-iid-clique-avg-effect.png --ymax 100
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-momentum-non-iid-clique-avg-effect}
\caption{\label{fig:d-cliques-cifar10-momentum-non-iid-clique-avg-effect} With Clique Averaging}
\end{subfigure}
\caption{\label{fig:cifar10-momentum} Non-IID effect of momentum on CIFAR10 with LeNet.}
\end{figure}
We show here that Clique Averaging (Section~\ref{section:clique-averaging})
allows us to compute an unbiased momentum from the
unbiased average gradient $g_i^{(k)}$ of Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}:
\begin{equation}
v_i^{(k)} \leftarrow m v_i^{(k-1)} + g_i^{(k)}
\end{equation}
It then suffices to modify the original gradient step to use momentum:
\begin{equation}
x_i^{(k-\frac{1}{2})} \leftarrow x_i^{(k-1)} - \gamma v_i^{(k)}
\end{equation}
As shown in
Figure~\ref{fig:d-cliques-cifar10-momentum-non-iid-clique-avg-effect}, this
simple modification restores the benefits of momentum and closes the gap
with the centralized setting.
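A minimal sketch of the resulting update at node $i$, where \texttt{g\_i} is the clique-averaged gradient of Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD} and \texttt{m} the momentum coefficient ($0.9$ in our CIFAR10 experiments):
\begin{verbatim}
def momentum_step(x_i, v_i, g_i, gamma, m=0.9):
    # v_i^(k) = m * v_i^(k-1) + g_i^(k)   (unbiased momentum)
    v_i = m * v_i + g_i
    # x_i^(k-1/2) = x_i^(k-1) - gamma * v_i^(k)
    return x_i - gamma * v_i, v_i
\end{verbatim}
Model averaging with all neighbors then proceeds unchanged.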
\section{Evaluation}
\label{section:non-clustered}
%AMK: add what is in there
In this section, we first compare D-Cliques to alternative topologies to
confirm our main design choices. Then,
we evaluate some extensions of D-Cliques to further reduce the number of
inter-clique connections so as to scale even better with the number of nodes.
\subsection{Comparing D-Cliques to Other Sparse Topologies} %Non-Clustered
% Topologies}
%We now show, in this section and the next, that the particular structure of D-Cliques is necessary. \label{section:non-clustered}
We demonstrate the advantages of D-Cliques over alternative sparse topologies
with a
similar number of edges. First, we consider topologies where the neighbors
of each node are selected at random without any clique structure.
Specifically, for $n=100$ nodes, we
construct a random topology such that each node has exactly 10 edges, which is
similar to the average 9.9 edges for our previous D-Cliques example
(Fig.~\ref{fig:d-cliques-figure}). To better understand the importance of the
clique structure independently of the class representativity among neighbors,
we also compare to a similar random topology
where edges are
chosen such that each node has neighbors of all possible classes. Finally, we
also implement an analog of Clique Averaging for these random topologies,
where all nodes de-bias their gradient with that of their neighbors. In the
latter case, since nodes do not form cliques, no two nodes
compute the same resulting average gradient.
The results for MNIST and CIFAR10 are shown in
Figure~\ref{fig:d-cliques-comparison-to-non-clustered-topologies}. For MNIST,
a purely random topology has higher variance and lower convergence speed than
D-Cliques, with or without Clique Averaging, while a random topology with
class representativity performs similarly to D-Cliques without Clique
Averaging. Perhaps surprisingly, de-biasing gradients on a random
topology performs slightly worse than not doing so. In any case, D-Cliques with
Clique Averaging performs better than any other random topology, showing
that the clique structure has a small but significant effect in this setup.
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-10:19:44-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET random-10/all/2021-03-17-20:28:12-CET random-10-diverse/all/2021-03-17-20:28:35-CET --labels 'd-clique (fcc)' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes represented)' --add-min-max --legend 'lower right' --ymin 88 --ymax 92.5 --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies.png --font-size 13
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies}
\caption{MNIST with Linear Model}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET random-10/all/2021-03-17-20:30:03-CET random-10-diverse/all/2021-03-17-20:30:41-CET random-10-diverse-unbiased-gradient/all/2021-03-17-20:31:14-CET --labels 'd-clique (fcc) clique avg.' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes repr.)' '10 random (all classes repr.) with unbiased grad.' --add-min-max --legend 'upper left' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies.png --ymax 119 --font-size 13
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies}
\caption{CIFAR10 with LeNet}
\end{subfigure}
\caption{\label{fig:d-cliques-comparison-to-non-clustered-topologies} Comparison to Non-Clustered Topologies}
\end{figure}
On the harder CIFAR10 dataset, the differences are much more dramatic:
D-Cliques with Clique Averaging and momentum is critical for good convergence.
Crucially, all random topologies fail to converge to a good solution. This
confirms that our clique structure is important to reduce variance
across nodes and improve convergence. The difference with the previous
experiment seems to be due both to the use of a higher capacity model, whose
loss landscape has more local optima, and to the intrinsic characteristics of the datasets. We refer
to the appendix for results on MNIST with LeNet.
% We have tried to use LeNet on
% MNIST to see if the difference between MNIST and CIFAR10 could be attributed to the capacity difference between the Linear and Convolutional networks, whose optimization may benefit from clustering (see Appendix). The difference is less dramatic than for CIFAR10, so it must be that the dataset also has an impact. The exact nature of it is still an open question.
\subsection{Importance of Intra-Clique Full Connectivity}
\label{section:intra-clique-connectivity}
While the previous experiments suggest that our clique structure is
instrumental in obtaining good performance, one may wonder whether
intra-clique full connectivity is actually necessary.
Figure~\ref{fig:d-cliques-intra-connectivity} shows the convergence speed of
D-Cliques when cliques have been sparsified by randomly removing 1
or
5 edges per clique (out of 45). Strikingly, both for MNIST and
CIFAR10, sparsifying the cliques even slightly has a significant effect on the
convergence speed. In the case of CIFAR10, it even entirely negates the
benefits of D-Cliques.
Overall, these experiments show that achieving fast convergence on non-IID
data with sparse topologies requires a very careful design, as we have
proposed with D-Cliques.
\begin{figure}[t]
\centering
\begin{subfigure}[htbp]{0.48\textwidth}
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET rm-1-edge/all/2021-03-18-17:28:27-CET rm-5-edges/all/2021-03-18-17:29:10-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:28:47-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:29:36-CET --add-min-max --ymin 85 --ymax 92.5 --legend 'lower right' --yaxis test-accuracy --labels 'fcc with clique grad.' 'fcc -1 edge/clique, no clique avg.' 'fcc -5 edges/clique, no clique avg.' 'fcc -1 edge/clique, clique avg.' 'fcc -5 edges/clique, clique avg.' --save-figure ../../figures/d-cliques-mnist-clique-clustering-fcc.png --font-size 13
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-clique-clustering-fcc}
\caption{\label{fig:d-cliques-mnist-clique-clustering} MNIST}
\end{subfigure}
\hfill
\begin{subfigure}[htbp]{0.48\textwidth}
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET rm-1-edge/all/2021-03-18-17:29:58-CET rm-5-edges/all/2021-03-18-17:30:38-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:30:17-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:31:04-CET --add-min-max --ymax 80 --legend 'upper left' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -1 edge/clique, no clique grad.' 'fcc -5 edges/clique, no clique grad.' 'fcc -1 edge/clique, clique grad.' 'fcc -5 edges/clique, clique grad.' --save-figure ../../figures/d-cliques-cifar10-clique-clustering-fcc.png --font-size 13
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-clique-clustering-fcc}
\caption{\label{fig:d-cliques-cifar10-clique-clustering} CIFAR10}
\end{subfigure}
\caption{\label{fig:d-cliques-intra-connectivity} Importance of Intra-Clique Full-Connectivity}
\end{figure}
\subsection{Scaling with Different Inter-Clique Topologies}
\label{section:interclique-topologies}
So far, we have used a fully-connected inter-clique topology for D-Cliques,
which bounds the
average shortest path to $2$ between any pair of nodes. This uses $\frac{n}{c}
(\frac{n}{c} - 1)$ inter-clique edges, which scales quadratically in the
number of nodes. This can become significant at larger scales when $n$ is
large compared to $c$.
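As a concrete check of these counts, a few lines suffice to recover the average degree of $18.9$ edges per node reported in this paper for $n=1000$ and $c=10$ with a fully-connected inter-clique topology (one edge per pair of cliques):
\begin{verbatim}
n, c = 1000, 10
cliques = n // c                      # 100 cliques of c nodes
intra = cliques * c * (c - 1) // 2    # 4500 intra-clique edges
inter = cliques * (cliques - 1) // 2  # 4950 inter-clique edges
print(2 * (intra + inter) / n)        # average degree: 18.9
\end{verbatim}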
In this last series of experiments, we evaluate the effect of the choice of
inter-clique topology on the convergence speed for a larger network of 1000
nodes. We compare the scalability and convergence speed of several
D-Cliques variants, which all use $O(nc)$ edges
to create cliques as a starting point.
The inter-clique topology with (almost) fewest edges is a \textit{ring}, which
uses $\frac{n}{c} - 1$ inter-clique edges and therefore scales linearly in $O
(n)$.
Another topology scales linearly with a logarithmic bound on the
average shortest number of hops between two nodes: we call it
\textit{fractal}. In this hierarchical scheme, cliques are assembled in
larger groups of $c$ cliques that are connected internally with one edge per
pair of cliques, but with only one edge between pairs of larger groups. The
scheme is recursive such that $c$ groups will themselves form a larger group
at the next level up. This results in at most $nc$ inter-clique edges in
total if edges are evenly distributed, and therefore also scales linearly in
the number of nodes.
Finally, we propose to connect cliques according to a
small-world-like~\cite{watts2000small} topology applied on top of a
ring~\cite{stoica2003chord}. In this scheme, cliques are first arranged in a
ring. Then each clique adds symmetric edges, both clockwise and
counter-clockwise on the ring, with the $n_s$ closest cliques in sets of
cliques that are exponentially bigger the further away they are on the ring (see
Algorithm~\ref{Algorithm:Smallworld} in the appendix for
details on the construction). This ensures a good clustering with other
cliques that are close on the ring, while still keeping the average shortest
path small. This scheme uses $2 n_s \log(\frac{n}{c})$ inter-clique edges and
therefore grows in the order of $O(n + \log(n))$ with the number of nodes.
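A minimal sketch of one possible reading of this construction, with Chord-like fingers at exponentially increasing offsets on the ring of cliques (the exact set construction is given in Algorithm~\ref{Algorithm:Smallworld} in the appendix):
\begin{verbatim}
def smallworld_interclique_edges(num_cliques):
    # connect each clique to cliques at offsets 1, 2, 4, ... on the
    # ring; undirected edges cover both ring directions
    edges = set()
    for i in range(num_cliques):
        offset = 1
        while offset < num_cliques:
            j = (i + offset) % num_cliques
            edges.add((min(i, j), max(i, j)))
            offset *= 2
    return edges
\end{verbatim}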
Figure~\ref{fig:d-cliques-cifar10-convolutional} shows the convergence speed
for all schemes on MNIST and CIFAR10, compared to the ideal baseline of a
single IID node performing the same number of updates per epoch (showing the
fastest convergence speed achievable if topology had no impact). The ring
topology converges but is much slower, while our fractal scheme helps
significantly. The sweet spot appears to be with the small-world
topology, as the convergence speed is almost the same as with a
fully-connected inter-clique topology but with 22\% fewer edges
(14.5 edges on average instead of 18.9). We can also expect bigger gains at
larger scales. Nonetheless, even the fully-connected topology offers
significant benefits with 1000 nodes, as it represents a 98\% reduction in the
number of edges compared to fully connecting individual nodes (18.9 edges on
average instead of 999) and a 96\% reduction in the number of messages (37.8
messages per round per node on average instead of 999). Overall, these results
show that D-Cliques can scale nicely with the number of nodes.
\begin{figure}[t]
\centering
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-09:20:03-CET ../scaling/1000/mnist/fully-connected-cliques/all/2021-03-14-17:56:26-CET ../scaling/1000/mnist/smallworld-logn-cliques/all/2021-03-23-21:45:39-CET ../scaling/1000/mnist/fractal-cliques/all/2021-03-14-17:41:59-CET ../scaling/1000/mnist/clique-ring/all/2021-03-13-18:22:36-CET --add-min-max --yaxis test-accuracy --legend 'lower right' --ymin 84 --ymax 92.5 --labels '1 node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' 'd-cliques (fractal)' 'd-cliques (ring)' --save-figure ../../figures/d-cliques-mnist-1000-nodes-comparison.png --font-size 13
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-1000-nodes-comparison}
\caption{\label{fig:d-cliques-mnist-1000-nodes-comparison} MNIST with Linear Model}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET ../scaling/1000/cifar10/fully-connected-cliques/all/2021-03-14-17:41:20-CET ../scaling/1000/cifar10/smallworld-logn-cliques/all/2021-03-23-22:13:57-CET ../scaling/1000/cifar10/fractal-cliques/all/2021-03-14-17:42:46-CET ../scaling/1000/cifar10/clique-ring/all/2021-03-14-09:55:24-CET --add-min-max --yaxis test-accuracy --labels '1-node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' 'd-cliques (fractal)' 'd-cliques (ring)' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy.png --font-size 13
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy}
\caption{\label{fig:d-cliques-cifar10-1000-vs-1-node-test-accuracy} CIFAR10 with LeNet}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-convolutional} D-Cliques Convergence Speed with 1000 nodes, non-IID, Constant Updates per Epoch, with Different Inter-Clique Topologies.}
\end{figure}
\section{Related Work}
\label{section:related-work}
In this section, we review some related work on dealing with non-IID data in
FL, and on the role of topology in fully decentralized algorithms.
\paragraph{Dealing with non-IID data in server-based FL.}
Non-IID data is not a significant issue in server-based FL if
clients send their parameters to the server after each gradient update.
However, problems arise when one seeks to reduce
the number of communication rounds by allowing each participant to perform
multiple local updates, as in the popular FedAvg algorithm
\cite{mcmahan2016communication}. Indeed, non-IID data can prevent the
algorithm from
converging to a good solution in this case. This led to the design of
extensions that are specifically designed to mitigate the impact of non-IID
data when performing
multiple local updates, using adaptive client sampling \cite{quagmire}, update
corrections \cite{scaffold} or regularization in the local objective
\cite{fedprox}. Another direction is to embrace the non-IID scenario by
learning personalized models for each client
\cite{smith2017federated,perso_fl_mean,maml,moreau}.
We note that recent work explores rings of server-based topologies
\cite{tornado}, but the focus is not on dealing with non-IID data: rather, it
is on making server-based FL more scalable to a large number of clients.
\paragraph{Dealing with non-IID data in fully decentralized FL.}
Non-IID data is known to negatively impact the convergence speed
of fully decentralized FL algorithms in practice \cite{jelasity}. Aside from approaches that aim to learn personalized models \cite{Vanhaesebrouck2017a,Zantedeschi2020a}, this
motivated the design of algorithms with modified updates based on variance
reduction \cite{tang18a}, momentum correction \cite{momentum_noniid},
cross-gradient
aggregation \cite{cross_gradient}, or multiple averaging steps
between updates (see \cite{consensus_distance} and references therein). These
algorithms
typically require additional communication and/or computation, and have only
been evaluated on small-scale networks with a few tens of nodes.\footnote{We
also observed that \cite{tang18a} is subject to numerical
instabilities when run on topologies other than rings. When
the rows and columns of $W$ do not exactly
sum to $1$ (due to finite precision), these small differences get amplified by
the proposed updates and make the algorithm diverge.}
% non-IID known to be a problem for fully decentralized FL. cf Jelasity paper
% D2 and other recent papers on modifying updates: Quasi-Global Momentum,
% Cross-Gradient Aggregation
% papers using multiple averaging steps
% also our personalized papers
% D2 \cite{tang18a}: numerically unstable when $W_{ij}$ rows and columns do not exactly
% sum to $1$, as the small differences are amplified in a positive feedback loop. More work is therefore required on the algorithm to make it usable with a wider variety of topologies. In comparison, D-cliques do not modify the SGD algorithm and instead simply removes some neighbor contributions that would otherwise bias the direction of the gradient. D-Cliques with D-PSGD are therefore as tolerant to ill-conditioned $W_{ij}$ matrices as regular D-PSGD in an IID setting.
In contrast, D-Cliques focuses on the design of a sparse topology which is
able to compensate for the effect of non-IID data and scales to large
networks. We do not modify the simple
and efficient D-SGD
algorithm \cite{lian2017d-psgd} beyond removing some neighbor
contributions
that would otherwise bias the direction of the gradient.
% An originality of our approach is to focus on the effect of topology
% level without significantly changing the original simple and efficient D-SGD
% algorithm \cite{lian2017d-psgd}. Other work to mitigate the effect of non-IID
% data on decentralized algorithms are based on performing modified updates (eg
% with variance reduction) or multiple averaging steps.
\paragraph{Impact of topology in fully decentralized FL.} It is well
known
that the choice of network topology can affect the
convergence of fully decentralized algorithms: this is typically accounted
for in the theoretical convergence rate by a dependence on the spectral gap of
the network, see for instance
\cite{Duchi2012a,Colin2016a,lian2017d-psgd,Nedic18}.
However, for IID data, practice contradicts these classic
results: fully decentralized algorithms converge essentially as fast
on sparse topologies like rings or grids as they do on a fully connected
network. Recent work
\cite{neglia2020,consensus_distance} sheds light on this phenomenon with refined convergence analyses based on differences between gradients or parameters across nodes, which are typically
smaller in the IID case. However, these results do not give any clear insight
regarding the role of the topology in the non-IID case. We note that some work
has gone into designing efficient topologies to optimize the use of
network resources (see e.g., \cite{marfoq}), but this is done independently
of how data is distributed across nodes. In summary, the role
of topology in the non-IID data scenario is not well understood and we are not
aware of prior work focusing on this question. Our work shows that an
appropriate choice of data-dependent topology can effectively compensate for
non-IID data.
\section{Conclusion}
\label{section:conclusion}
We proposed D-Cliques, a sparse topology that recovers the convergence
speed of a fully-connected topology in the presence of local class bias.
D-Cliques is based on assembling cliques of nodes such that their joint local
distribution is representative of the global distribution so as to locally
recover IIDness. Cliques are joined in a sparse inter-clique topology so that
they quickly converge to the same model. We proposed Clique
Averaging to remove the non-IID bias in gradient computation by
averaging gradients only with other nodes within the clique. Clique Averaging
can in turn be used to implement unbiased momentum to recover the convergence
speed usually only possible with IID mini-batches. Through our experiments, we
showed that the clique structure of D-Cliques is critical in obtaining these
results and that a small-world inter-clique topology with $O(n
+ \log(n))$ edges seems to achieve the best compromise between
convergence speed and scalability with the number of nodes.
D-Cliques thus appears to be promising to reduce bandwidth
usage on FL servers and to implement fully decentralized alternatives in a
wider range of applications where global coordination is impossible or costly.
For instance, the presence and relative frequency of classes in each node
could be computed using PushSum~\cite{kempe2003gossip}, and the topology could
be constructed in a decentralized and adaptive way with
PeerSampling~\cite{jelasity2007gossip}. This will be investigated in future work.
We also believe that our ideas can be useful to deal
with more general types of data non-IIDness beyond the important case of
local class bias that we studied in this paper. An important example is
covariate shift or feature distribution skew \cite{kairouz2019advances}, where
local density estimates could be used as a basis to construct cliques that
approximately recover the global distribution.
%\section{Future Work}
%\begin{itemize}
% \item Non-uniform Class Representation
% \item End-to-End Wall-Clock Training Time, including Clique Formation
% \item Comparison to Shuffling Data in a Data Center
% \item Behaviour in the Presence of Churn
% \item Relaxing Clique Connectivity: Randomly choose a subset of clique neighbours to compute average gradient.
%\end{itemize}
%
% ---- Bibliography ----
%
% BibTeX users should specify bibliography style 'splncs04'.
% References will then be sorted and formatted in the correct style.
%
\bibliographystyle{splncs04}
\bibliography{main}
\appendix
\section{Algorithms}
\begin{algorithm}[h]
\caption{D-Clique Construction}
\label{Algorithm:D-Clique-Construction}
\begin{algorithmic}[1]