% This is samplepaper.tex, a sample chapter demonstrating the
% LLNCS macro package for Springer Computer Science proceedings;
% Version 2.20 of 2017/10/04
%
\documentclass[runningheads]{llncs}
%
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
%\usepackage{amsthm}
\usepackage{graphicx}
\usepackage{xcolor}
\usepackage{soul}
\usepackage{hyperref}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{dsfont}
\usepackage{caption}
\usepackage{subcaption}
\input{macros}
% Used for displaying a sample figure. If possible, figure files should
% be included in EPS format.
%
% If you use the hyperref package, please uncomment the following line
% to display URLs in blue roman font according to Springer's eBook style:
% \renewcommand\UrlFont{\color{blue}\rmfamily}
\begin{document}
%
\title{D-Cliques: An Efficient Topology to Compensate for Non-IID Data in Decentralized Learning}
%
\titlerunning{D-Cliques}
% If the paper title is too long for the running head, you can set
% an abbreviated paper title here
%
\author{Aur\'elien Bellet\inst{1}\thanks{Authors in alphabetical order of last names, see Section 'Credits' for respective contributions.} \and
Anne-Marie Kermarrec\inst{2} \and
Erick Lavoie\inst{2}}
%
\authorrunning{A. Bellet, A-M. Kermarrec, E. Lavoie}
% First names are abbreviated in the running head.
% If there are more than two authors, 'et al.' is used.
%
\institute{Inria, Lille, France\\
\email{aurelien.bellet@inria.fr} \and
EPFL, Lausanne, Switzerland \\
\email{\{anne-marie.kermarrec,erick.lavoie\}@epfl.ch}\\
}
%
\maketitle % typeset the header of the contribution
%
\begin{abstract}
The convergence speed of machine learning models trained with Federated Learning is significantly affected by non-identically and independently distributed (non-IID) data partitions, even more so in a fully decentralized (serverless) setting. We propose the D-Cliques topology, which reduces gradient bias by grouping nodes in cliques such that their local joint distribution is representative of the global distribution. D-Cliques provide a convergence speed similar to that of a fully-connected topology on MNIST and CIFAR10 with a significant reduction in the number of required edges and messages: at a scale of 1000 nodes, 98\% fewer edges and 96\% fewer total messages. We show how D-Cliques can be used to successfully implement momentum, which is critical to quickly train deep convolutional networks but otherwise detrimental in a non-IID setting. We finally show that, among many possible inter-clique topologies, a small-world topology that scales the number of edges logarithmically in the number of nodes converges almost as quickly as fully connecting cliques pairwise with single edges. A small-world topology thus provides a further 22\% reduction in the number of edges at 1000 nodes (14.6 vs 18.9 edges on average per node), which suggests even bigger possible gains at larger scales.
\keywords{Decentralized Learning \and Federated Learning \and Topology \and
Non-IID Data \and Stochastic Gradient Descent}
\end{abstract}
% 1/ Decentralized FL approaches can be more scalable than Centralized FL approach when the number of nodes is large
% 2/ It is well known the topology can affect convergence of decentralized algorithms, as shown by classic convergence analysis. However the effect of topology has been observed to be often quite small in practice. This is because most of these results were obtained for iid data.
% 3/ In this paper, we show that the effect of topology is very significant for non-iid data. Unlike for centralized FL approaches, this happens even when nodes perform a single local update before averaging. We propose an approach to design a sparse data-aware topology which recovers the convergence speed of a centralized approach.
% 4/ An originality of our approach is to work at the topology level without changing the original efficient and simple D-SGD algorithm. Other work to mitigate the effect of non-iid on decentralized algorithms are based on performing modified updates (eg with variance reduction) or multiple averaging steps.
\section{Introduction}
Machine learning is currently shifting from the classic \emph{centralized}
paradigm, in which models are trained on data located on a single machine or
in a data center, to more \emph{decentralized} ones.
Indeed, data is often inherently decentralized as it is collected by several
parties (such as different hospitals, companies, personal devices...).
Federated Learning (FL) allows a set
of data owners to collaboratively train machine learning models
on their joint
data while keeping it decentralized, thereby avoiding the costs of moving
data as well as mitigating privacy and confidentiality concerns
\cite{kairouz2019advances}.
Due to the decentralized nature of data collection, the local datasets of
participants can be very different in size and distribution: they are
\emph{not} independent and identically distributed
(non-IID). In particular, the class distributions may vary a lot
across local datasets \cite{quagmire}.
Therefore, one of the key challenges in FL is to design algorithms that
can efficiently deal with such non-IID data
\cite{kairouz2019advances,fedprox,scaffold,quagmire}.
Federated learning algorithms can be classified into two categories depending
on the network topology they work on. In server-based FL, the network is
organized as a star: a central server orchestrates the training process and
iteratively aggregates model updates received from the participants
(\emph{clients}) and sends
them back the aggregated model \cite{mcmahan2016communication}. In contrast,
fully decentralized FL algorithms operate over an arbitrary topology where
participants communicate in a peer-to-peer fashion with their direct neighbors
in the network graph. A classic example of such algorithms is Decentralized
SGD (D-SGD) \cite{lian2017d-psgd}, in which participants alternate between
local SGD updates and model averaging with neighboring nodes.
In this work, we focus on fully decentralized algorithms as they can
generally
scale better to the large number of participants seen in ``cross-device''
applications \cite{kairouz2019advances}. Indeed, while a central
server quickly becomes a bottleneck as the number of participants increases, the topology used in fully decentralized algorithms can remain sparse
enough such that all participants have small (constant or logarithmic) degree
\cite{lian2017d-psgd}. Recent work has shown both empirically
\cite{lian2017d-psgd,Lian2018} and theoretically \cite{neglia2020} that sparse
topologies like rings or grids do not significantly affect the convergence
rate compared to using denser topologies when data is IID.
% We also note that full decentralization can also provide benefits in terms of
% privacy protection \cite{amp_dec}.
In contrast to the IID case, we show in this work that \emph{the impact of
topology is very significant for non-IID data}. This phenomenon is illustrated
in Figure~\ref{fig:iid-vs-non-iid-problem}, where we see that using a ring or
grid topology completely jeopardizes the convergence rate in practice when
classes are imbalanced across participants.
We stress that, unlike for centralized FL approaches
\cite{kairouz2019advances,scaffold,quagmire}, this
happens even when nodes perform a single local update before averaging the
model with their neighbors. We thus study the following question:
% \textit{Are there regular topologies, i.e. where all nodes have similar or the same number of neighbours, with less connections than a fully-connected graph that retain a similar convergence speed and non-IID behaviour?}
\textit{Are there sparse topologies that achieve a convergence speed similar
to that of the fully connected graph for a large number of participants with
non-IID class distributions?}
We answer this question in the affirmative by proposing \textsc{D-Cliques}, an approach to
design \emph{a sparse data-aware topology which recovers the convergence
speed of a centralized (or IID) approach}. Our proposal includes a simple
modification of the standard D-SGD algorithm which ensures that gradients are
unbiased with respect to the class distribution.
We empirically evaluate our approach on the MNIST and CIFAR10 datasets using
logistic regression and deep convolutional models with up to 1000 participants.
This is in contrast to most previous work on fully decentralized algorithms,
which considers only a few tens of participants \cite{tang18a,more_refs} and
falls short of giving a realistic view of the performance of these algorithms
in actual applications.
\aurelien{TODO: complete above paragraph with more details and highlighting
other contributions as needed}
To summarize, our contributions are as follows:
\begin{enumerate}
\item we show the significant impact of topology on convergence speed in the presence of non-IID data in decentralized learning;
\item we propose the D-Cliques topology, which removes the impact of non-IID data on convergence speed and behaves similarly to a fully-connected topology. At a scale of 1000 nodes, this represents a 98\% reduction in the number of edges ($18.9$ vs $999$ edges per node on average) and a 96\% reduction in the total number of required messages;
\item we show how to leverage cliques to: (1) remove the gradient bias that originates from inter-clique edges;
(2) implement momentum, a critical optimization technique to quickly train convolutional networks, that otherwise significantly \textit{decreases} convergence speed in the presence of non-IID data;
\item we show that, among the many possible choices of inter-clique topologies, a small-world topology provides a convergence speed close to fully-connecting all cliques pairwise, but requires only $O(n + \log(n))$ instead of $O(n^2)$ edges, where $n$ is the number of nodes. At a scale of 1000 nodes, this represents a further 22\% reduction in the number of edges compared to fully-connecting cliques ($14.6$ vs $18.9$ edges per node on average) and suggests even bigger possible gains at larger scales.
\end{enumerate}
The rest of this paper is organized as follows. \aurelien{TO COMPLETE}
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/ring-IID-vs-non-IID}
\caption{\label{fig:ring-IID-vs-non-IID} Ring: (almost) minimal connectivity.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/grid-IID-vs-non-IID}
\caption{\label{fig:grid-IID-vs-non-IID} Grid: intermediate connectivity.}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fully-connected-IID-vs-non-IID}
\caption{\label{fig:fully-connected-IID-vs-non-IID} Fully-connected: maximal connectivity.}
\end{subfigure}
\caption{IID vs non-IID Convergence Speed. Thin lines are the minimum
and maximum accuracy of individual nodes. Bold lines are the average
accuracy across all nodes.\protect \footnotemark \aurelien{TODO: make
the figure more self-contained (adding all information to
understand the setting) and perhaps add a fig to show the
effect of D-Cliques}}
\label{fig:iid-vs-non-iid-problem}
\end{figure}
\footnotetext{This is different from the accuracy of the average model across nodes that is sometimes used once training is completed.}
\section{Problem Statement}
\label{section:problem}
A set of $n$ nodes $N = \{1, \dots, n \}$ communicate with their neighbours as defined by the mixing matrix $W$, in which $W_{ij}$ is the \textit{weight} of the outgoing connection from node $i$ to $j$: $W_{ij} = 0$ means that there is no connection from node $i$ to $j$, and $W_{ij} > 0$ means there is one.
Training data is sampled from a global distribution $D$ unknown to the nodes. Each node has access to an arbitrary partition of the samples that follows its own local distribution $D_i$. Nodes cooperate to reach consensus on a global model $M$ that performs well on $D$ by minimizing the average training loss on local models:
\begin{equation}
\min_{x_1, \dots, x_n} \frac{1}{n}\sum_{i=1}^{n} \mathds{E}_{s_i \sim D_i} F_i(x_i;s_i)
\label{eq:dist-optimization-problem}
\end{equation}
such that $M= x_i = x_j, \forall i,j \in N$, where $x_i$ are the parameters of
node $i$'s local model, $s_i$ is a sample of $D_i$, $F_i$ is the loss function
on node $i$, and $\mathds{E}_{s_i \sim D_i} F_i(x_i;s_i)$ denotes the
expected value of $F_i$ on a random sample $s_i$ drawn from $D_i$.
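Note that, assuming all nodes use the same loss function $F$ and that the global distribution is the uniform mixture of the local ones (i.e. $D = \frac{1}{n}\sum_{i=1}^{n} D_i$, as in our experimental setup where all nodes hold the same number of examples), the consensus constraint makes this objective equivalent to minimizing the expected loss over the global distribution:
\begin{equation}
\frac{1}{n}\sum_{i=1}^{n} \mathds{E}_{s_i \sim D_i} F(x;s_i) = \mathds{E}_{s \sim D} F(x;s) \quad \text{when } x_1 = \dots = x_n = x.
\end{equation}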
\subsection{Non-IID Data}
Removing the assumption of \textit{independent and identically distributed} (IID) data opens a wide range of potential practical difficulties. While non-IID simply means that a local dataset is a biased sample of the global distribution $D$, the difficulty of the learning problem depends on additional factors that compound with that bias. For example, an imbalance in the number of examples for each class in the global distribution compounds with the position of the nodes that hold the examples of the rarest class. Additionally, if two local datasets have different numbers of examples, the examples in the smaller dataset will be visited more often than those in the larger dataset, potentially skewing the optimisation process to perform better on the examples seen more often.
To focus our study while still retaining the core aspects of the problem, we make the following assumptions: (1) all classes are equally represented in the global dataset, by randomly removing examples from the larger classes if necessary; (2) all classes are represented on the same number of nodes; (3) all nodes have the same number of examples. Within these assumptions, we consider the hardest possible case, in which each node holds examples of only a single class. For the following experiments, we use the MNIST (CITE) and CIFAR10 (CITE) datasets.
\subsection{Learning Algorithm}
We use Decentralized Parallel Stochastic Gradient Descent (D-PSGD)~\cite{lian2017d-psgd}, illustrated in Algorithm~\ref{Algorithm:D-PSGD}. A single step consists of sampling the local distribution $D_i$, computing and applying a stochastic gradient descent (SGD) step with respect to that sample, and averaging the model with its neighbours. Both outgoing and incoming weights of $W$ must sum to 1, i.e. $W$ is doubly stochastic ($\sum_{j \in N} W_{ij} = 1$ and $\sum_{j \in N} W_{ji} = 1$), and communication is symmetric, i.e. $W_{ij} = W_{ji}$.
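For illustration only, consider a hypothetical ring of four nodes in which each node gives weight $\frac{1}{2}$ to itself and $\frac{1}{4}$ to each of its two neighbours. The resulting mixing matrix is symmetric and doubly stochastic, with zeros for the missing edges:
\begin{equation}
W = \begin{pmatrix}
\frac{1}{2} & \frac{1}{4} & 0 & \frac{1}{4} \\
\frac{1}{4} & \frac{1}{2} & \frac{1}{4} & 0 \\
0 & \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\
\frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{2}
\end{pmatrix}.
\end{equation}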
\begin{algorithm}[h]
\caption{D-PSGD, Node $i$}
\label{Algorithm:D-PSGD}
\begin{algorithmic}[1]
\State \textbf{Require} initial model parameters $x_i^{(0)}$, learning rate $\gamma$, mixing weights $W$, number of steps $K$, loss function $F$
\For{$k = 1,\ldots, K$}
\State $s_i^{(k)} \gets \textit{sample from~} D_i$
\State $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma \nabla F(x_i^{(k-1)}; s_i^{(k)})$
\State $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$
\EndFor
\end{algorithmic}
\end{algorithm}
D-PSGD can be used with a variety of models, including deep neural networks. To remove the impact of particular architectural choices on our results, we use a linear classifier (CITE). This model provides up to 92.5\% accuracy when fully converged on MNIST (CITE), about 7\% lower than state-of-the-art deep learning approaches (CITE).
%\subsection{Clustering}
%
%From the perspective of one node, \textit{clustering} intuitively represents how many connections exist between its immediate neighbours. A high level of clustering means that neighbours have many edges between each other. The highest level is a \textit{clique}, where all nodes in the neighbourhood are connected to one another. Formally, the level of clustering, between $0$ and $1$, is the ratio of $\frac{\textit{nb edges between neighbours}}{\textit{nb possible edges}}$~\cite{watts2000small}.
%
\section{Motivation: Bias in Gradient Averaging with Non-IID Data}
\aurelien{I think this should go into the approach section, to motivate it.
In the introduction, maybe we can just give the main intuitions in a few
sentences?}
To gain a preliminary intuition of the impact of non-IID data on convergence speed, consider the local neighbourhood of a single node in a grid similar to that used to obtain the results of Figure~\ref{fig:grid-IID-vs-non-IID}, as illustrated in Figure~\ref{fig:grid-iid-vs-non-iid-neighbourhood}. The color of a node, represented as a circle, corresponds to one of the 10 available classes in the dataset. In the IID setting (Figure~\ref{fig:grid-iid-neighbourhood}), each node has examples of all ten classes in equal proportions. In the (rather extreme) non-IID case (Figure~\ref{fig:grid-non-iid-neighbourhood}), each node has examples of only a single class and nodes are distributed randomly in the grid; in neighbourhoods such as this one, nodes with examples of the same class are sometimes adjacent to each other.
\begin{figure}
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/grid-iid-neighbourhood}
\caption{\label{fig:grid-iid-neighbourhood} IID}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/grid-non-iid-neighbourhood}
\caption{\label{fig:grid-non-iid-neighbourhood} Non-IID}
\end{subfigure}
\caption{Neighbourhood in an IID and non-IID Grid.}
\label{fig:grid-iid-vs-non-iid-neighbourhood}
\end{figure}
For the sake of the argument, assume all nodes are initialized with the same model weights, which is not critical for quick convergence in an IID setting but makes the comparison easier. A single training step, from the point of view of the middle node of the illustrated neighbourhood, is equivalent to sampling a mini-batch five times larger from the union of the local distributions of the five illustrated nodes.
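To make this equivalence explicit (a sketch, assuming all five nodes currently hold the same parameters $x$ and draw mini-batches of equal size), averaging the five local gradients amounts to computing a single gradient over the union of the five mini-batches:
\begin{equation}
\frac{1}{5}\sum_{j=1}^{5} \nabla F(x; s_j) = \nabla F(x; s_1 \cup \dots \cup s_5),
\end{equation}
where $\nabla F(x; s)$ denotes the gradient of the loss averaged over the samples of mini-batch $s$.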
In the IID case, since gradients are computed from examples of all classes, the resulting average gradient will point in a direction that lowers the loss for all classes. This is the case because the components of the gradient that would only improve the loss on a subset of the classes to the detriment of others are cancelled by similar but opposite components from other classes. Therefore only the components that improve the loss for all classes remain. There is some variance remaining from the difference between examples but in practice it has a sufficiently small impact on convergence speed that there are still benefits from parallelizing the computations.
However, in the (rather extreme) non-IID case illustrated, there are not enough nodes in the neighbourhood to remove the bias of the classes represented. Even if all nodes start from the same model weights, they will diverge from one another according to the classes represented in their neighbourhood, more than they would in the IID case. As the distributed averaging algorithm takes several steps to converge, this variance is never fully resolved and persists across steps.\footnote{It is possible, but impractical, to compensate for this effect by averaging multiple times before the next gradient computation. In effect, this trades connectivity (number of edges) for latency to give the same convergence speed, in number of gradients computed, as a fully connected graph.} This additional variance biases subsequent gradient computations as the gradients are computed further away from the global average, in addition to being computed from different examples. As shown in Figures~\ref{fig:ring-IID-vs-non-IID} and \ref{fig:grid-IID-vs-non-IID}, this significantly slows down convergence speed to the point of making parallel optimization impractical.
\subsection{D-Cliques}
\aurelien{this should definitely go to approach section}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{figures/fully-connected-cliques}
\caption{\label{fig:d-cliques-example} D-Cliques: Connected Cliques of Dissimilar Nodes, Locally Representative of the Global Distribution}
\end{figure}
If we relax the constraint of regularity, a trivial solution is a star topology, as used in most Federated Learning implementations (CITE), at the expense of high reliability and bandwidth requirements on the central node. We instead propose a regular topology, built around \textit{cliques} of dissimilar nodes, locally representative of the global distribution and connected by few links, as illustrated in Figure~\ref{fig:d-cliques-example}. D-Cliques enable a convergence speed similar to that of a fully connected topology, using a number of edges that grows as $O(nc + \frac{n^2}{c^2})$, where $n$ is the number of nodes and $c$ is the size of a clique\footnote{$O((\frac{n}{c})c^2 + (\frac{n}{c})^2)$, i.e. the number of cliques times the number of edges within cliques (quadratic in the size of cliques), in addition to inter-clique edges (quadratic in the number of cliques).}, instead of quadratically in the number of nodes ($O(n^2)$), with a corresponding reduction in bandwidth usage and required number of messages per round of training. In practice, for the networks of size 100 we have tested, this corresponds to a 90\% reduction in the number of edges. (TODO: Do analysis if the pattern is fractal with three levels at 1000 nodes: cliques, 10 cliques connected pairwise in a ``region'', and each ``region'' connected pairwise with other regions)
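As a concrete check of these orders of magnitude, with $n=1000$ nodes and cliques of size $c=10$ (one node per class), fully connecting the resulting $100$ cliques pairwise with single edges gives
\begin{equation}
\underbrace{\frac{n}{c}\binom{c}{2}}_{\text{intra-clique}} + \underbrace{\binom{n/c}{2}}_{\text{inter-clique}} = 100 \cdot 45 + 4950 = 9450 \text{ edges},
\end{equation}
i.e. an average degree of $\frac{2 \cdot 9450}{1000} = 18.9$, compared to $999$ in a fully-connected graph: a reduction of about $98\%$.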
Because the data distribution within each clique is representative of the global distribution, we can recover optimization techniques that rely on an IID assumption, in a distributed setting that is not IID. As one example, we show how momentum (CITE) can be used with D-Cliques to greatly improve the convergence speed of convolutional networks, as in a centralized IID setting, even though the technique is otherwise \textit{detrimental} in a more general non-IID setting.
Three Main ideas:
\begin{itemize}
\item Create cliques such that the clique distribution is representative of the global distribution
\item Connect cliques, based on level of "redundancy" in the datasets
\item Decouple gradient averaging from weight averaging
\end{itemize}
\subsection{Creating Representative Cliques}
The degree of \textit{skew} of the local distributions $D_i$, i.e. how much the local distribution on each node deviates from the global distribution, influences the minimal size of cliques: for example, when each node holds examples of a single class out of 10, a clique needs at least 10 nodes to be representative of the global distribution.
The global distribution of classes, for classification tasks, can be computed from the distribution of class examples on the nodes, with Distributed Averaging (CITE). Given the global distribution of classes, neighbours within cliques can be chosen based on a PeerSampling (CITE) service. Both services can be implemented such that they converge in a number of steps logarithmic in the number of nodes. It is therefore possible to obtain this information in a scalable way.
In the rest of this paper, we assume these services are available and show that the approach provides a useful convergence speed after the cliques have been formed.
TODO: Clique construction algorithm (including the cases where classes are not equally represented)
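In the meantime, the following sketch outlines one possible greedy construction under the assumptions of Section~\ref{section:problem} (classes equally represented across nodes, one class per node, and cliques of size equal to the number of classes); it assumes the global class distribution and a peer sampling service are available, as discussed above.
\begin{algorithm}[h]
\caption{Greedy D-Cliques construction (sketch)}
\label{Algorithm:D-Cliques-Construction}
\begin{algorithmic}[1]
\State \textbf{Require} set of nodes $N$, class $\textit{class}(i)$ of each node $i$, set of classes $L$
\State $\mathcal{C} \gets \emptyset$ \Comment{set of cliques}
\While{$N \neq \emptyset$}
\State $C \gets \emptyset$
\For{each class $l \in L$}
\State pick $i \in N$ with $\textit{class}(i) = l$, add $i$ to $C$, remove $i$ from $N$
\EndFor
\State fully connect the nodes of $C$ and add $C$ to $\mathcal{C}$
\EndWhile
\State add inter-clique edges between the cliques of $\mathcal{C}$ (e.g. one edge per pair of cliques)
\end{algorithmic}
\end{algorithm}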
\subsection{Connecting Cliques}
The \textit{"redundancy"} of the data, i.e. how much each additional example in the training set contributes to the final accuracy, influences the minimum number of connections required between cliques to reach a given convergence speed. It needs to be evaluated empirically on a learning task. In effect, redundancy is the best parallelization factor as the more redundant the dataset is, the less nodes need to communicate. For the following arguments, $n$ is the number of nodes and $c$ is the size of a clique.
For highly redundant datasets, it may be sufficient to arrange cliques in a ring. This is not specific to D-Cliques: it is also the case with IID nodes, but it is nonetheless worth keeping in mind for D-Cliques. In this case, the number of edges will be $O(nc + \frac{n}{c})$ and therefore linear in the number of nodes $n$.
For cases with limited redundancy, nodes can be arranged such that they are at most 2 hops away from any other node in the network, to quickly propagate updates. In effect, this is equivalent to fully connecting cliques (instead of nodes). In this case, the number of edges will be $O(nc + \frac{n^2}{c^2})$, therefore still quadratic in the number of nodes but with a strong reduction in the number of edges when $c$ is large relative to $n$ (e.g. $c \geq \frac{n}{100}$).
In between, there might be enough redundancy in the dataset to arrange cliques in a fractal/hierarchical pattern such that the maximum number of hops between nodes grows logarithmically with $n$. TODO: Complexity argument.
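A possible argument for the hierarchical case (a sketch, assuming $\frac{n}{c}$ is a power of $c$): group cliques into super-groups of $c$ cliques, super-groups into groups of $c$ super-groups, and so on, fully connecting the $c$ members of each group with single edges at every level. The number of inter-clique edges is then bounded by
\begin{equation}
\binom{c}{2}\left(\frac{n}{c^2} + \frac{n}{c^3} + \dots + 1\right) \leq \binom{c}{2} \cdot \frac{n}{c^2} \cdot \frac{c}{c-1} = \frac{n}{2},
\end{equation}
i.e. $O(n)$ edges in addition to the $\frac{n(c-1)}{2}$ intra-clique edges, for $O(nc)$ edges in total, while the maximum number of hops between nodes grows as $O(\log_c n)$.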
\subsection{Decoupling Gradient Averaging from Weight Averaging}
Inter-clique connections create sources of bias. The distributed averaging algorithm used by D-PSGD relies on a good choice of weights for quick convergence, for which Metropolis-Hastings weights (CITE) provide a reasonable and inexpensive solution by considering only the immediate neighbours of every node. However, by averaging models after a gradient step, D-PSGD effectively gives a different weight to the gradients of neighbours.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{figures/connected-cliques-bias}
\caption{\label{fig:connected-cliques-bias} Sources of Bias in Connected Cliques: Non-uniform weights in neighbours of A (A has a higher weight); Non-uniform class representation in neighbours of B (extra green node).}
\end{figure}
Figure~\ref{fig:connected-cliques-bias} illustrates the problem with the simplest case of two cliques connected by one inter-clique edge, which links the green node of the left clique with the purple node of the right clique. Consider a simple Metropolis-Hastings weight assignment such as the following:
\begin{equation}
W_{ij} = \begin{cases}
  \frac{1}{\max(\text{degree}(i), \text{degree}(j)) + 1} & \text{if}~i \neq j~\text{and $i$, $j$ are neighbours} \\
  1 - \sum_{j' \neq i} W_{ij'} & \text{if}~i = j \\
  0 & \text{otherwise}
\end{cases}
\end{equation}
With this assignment, node A has a weight of $\frac{12}{110}$ while all of A's neighbours have a weight of $\frac{11}{110}$, except the green node connected to B, which has a weight of $\frac{10}{110}$. This weight assignment therefore biases the gradient towards A's class and away from the green class. The same analysis holds for all other nodes without inter-clique edges. For node B, B and all its neighbours have weights of $\frac{1}{11}$. However, the green class is represented twice while all other classes are represented only once, which biases the gradient toward the green class. The combined effect of these two sources of bias is to increase the variance between models after a D-PSGD training step.
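For completeness, these values follow directly from the weight assignment above (a worked example, assuming cliques of 10 nodes, one per class): A has degree 9, its eight regular neighbours have degree 9, and the green node has degree 10 due to its inter-clique edge, so that in A's averaging step
\begin{align*}
W_{jA} &= \tfrac{1}{\max(9,9)+1} = \tfrac{11}{110} \text{ for the eight regular neighbours $j$,}\\
W_{\text{green},A} &= \tfrac{1}{\max(10,9)+1} = \tfrac{10}{110}, \qquad
W_{AA} = 1 - 8\cdot\tfrac{11}{110} - \tfrac{10}{110} = \tfrac{12}{110}.
\end{align*}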
We solve this problem by decoupling the gradient averaging from the weight averaging by sending each in separate rounds of messages. Only the gradients of neighbours within the same clique are used to compute the average gradient, which provides an equal representation to all classes in the computation of the average gradient. But the model weights of all neighbours, including those across inter-clique edges, are used for computing the distributed average of models, which ensures that all models eventually converge to the same value. The clique-unbiased version of D-PSGD is listed in Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}.
\begin{algorithm}[h]
\caption{D-Clique (Clique-Unbiased D-PSGD), Node $i$}
\label{Algorithm:Clique-Unbiased-D-PSGD}
\begin{algorithmic}[1]
\State \textbf{Require} initial model parameters $x_i^{(0)}$, learning rate $\gamma$, mixing weights $W$, number of steps $K$, loss function $F$
\For{$k = 1,\ldots, K$}
\State $s_i^{(k)} \gets \textit{sample from~} D_i$
\State $g_i^{(k)} \gets \frac{1}{|\textit{Clique}(i)|}\sum_{j \in \textit{Clique}(i)} \nabla F(x_j^{(k-1)}; s_j^{(k)})$
\State $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma g_i^{(k)}$
\State $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Evaluation}
\subsection{MNIST and Linear Model}
\begin{figure}[htbp]
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-09:20:03-CET fully-connected/all/2021-03-10-09:25:19-CET clique-ring/all/2021-03-10-18:14:35-CET fully-connected-cliques/all/2021-03-10-10:19:44-CET --add-min-max --yaxis test-accuracy --labels '1-node IID bsz=12800' '100-nodes non-IID fully-connected bsz=128' '100-nodes non-IID D-Cliques (Ring) bsz=128' '100-nodes non-IID D-Cliques (Fully-Connected) bsz=128' --legend 'lower right' --ymin 85 --ymax 92.5 --save-figure ../../figures/d-cliques-mnist-vs-1-node-test-accuracy.png
\includegraphics[width=0.7\textwidth]{figures/d-cliques-mnist-vs-1-node-test-accuracy}
\caption{\label{fig:d-cliques-mnist-linear-w-clique-averaging-w-initial-averaging} MNIST: D-Cliques Convergence Speed (100 nodes, Constant Updates per Epoch)}
\end{figure}
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-09:20:03-CET ../scaling/1000/mnist/fully-connected-cliques/all/2021-03-14-17:56:26-CET ../scaling/1000/mnist/smallworld-logn-cliques/all/2021-03-23-21:45:39-CET ../scaling/1000/mnist/fractal-cliques/all/2021-03-14-17:41:59-CET ../scaling/1000/mnist/clique-ring/all/2021-03-13-18:22:36-CET --add-min-max --yaxis test-accuracy --legend 'lower right' --ymin 84 --ymax 92.5 --labels '1 node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' 'd-cliques (fractal)' 'd-cliques (ring)' --save-figure ../../figures/d-cliques-mnist-1000-nodes-comparison.png
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{figures/d-cliques-mnist-1000-nodes-comparison}
\caption{\label{fig:d-cliques-mnist-1000-nodes-comparison} MNIST: D-Clique Convergence Speed (1000 nodes, Constant Updates per Epoch)}
\end{figure}
\begin{figure}[htbp]
\centering
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py clique-ring/all/2021-03-10-18:14:35-CET no-clique-avg/clique-ring/all/2021-03-12-10:40:37-CET no-init/clique-ring/all/2021-03-12-10:40:11-CET no-init-no-clique-avg/clique-ring/all/2021-03-12-10:41:03-CET --add-min-max --yaxis test-accuracy --labels ' 'with uniform init., with clique avg.' 'with uniform init., without clique avg.' 'without uniform init., with clique avg.' 'without uniform init., without clique avg.' --legend 'lower right' --ymin 85 --ymax 92.5 --save-figure ../../figures/d-cliques-mnist-init-clique-avg-effect-ring-test-accuracy.png
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-init-clique-avg-effect-ring-test-accuracy}
\caption{\label{fig:d-cliques-mnist-init-clique-avg-effect-ring-test-accuracy} Ring}
\end{subfigure}
% To regenerate the figure, from directory results/mnist
%python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-10:19:44-CET no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:26-CET no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET --add-min-max --yaxis test-accuracy --labels 'with uniform init., with clique avg.' 'with uniform init., without clique avg.' 'without uniform init., with clique avg.' 'without uniform init., without clique avg.' --legend 'lower right' --ymin 85 --ymax 92.5 --save-figure ../../figures/d-cliques-mnist-init-clique-avg-effect-fcc-test-accuracy.png
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-init-clique-avg-effect-fcc-test-accuracy}
\caption{\label{fig:d-cliques-mnist-init-clique-avg-effect-fcc-test-accuracy} Fully-Connected}
\end{subfigure}
\caption{\label{fig:d-cliques-mnist-init-clique-avg-effect} MNIST: Effects of Clique Averaging and Uniform Initialization on Convergence Speed. (100 nodes, non-IID, D-Cliques, bsz=128)}
\end{figure}
% % To regenerate the figure, from directory results/scaling
%% python ../../../learn-topology/tools/plot_convergence.py 10/mnist/fully-connected-cliques/all/2021-03-10-14:40:35-CET ../mnist/fully-connected-cliques/all/2021-03-10-10:19:44-CET 1000/mnist/fully-connected-cliques/all/2021-03-10-16:44:35-CET --labels '10 nodes bsz=128' '100 nodes bsz=128' '1000 nodes bsz=128 (45)' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-fully-connected-cst-bsz.png --ymin 80 --add-min-max
% \begin{subfigure}[b]{0.48\textwidth}
% \centering
% \includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-fully-connected-cst-bsz}
% \caption{FCC: Constant Batch-Size}
% \end{subfigure}
% To regenerate the figure, from directory results/scaling
% python ../../../learn-topology/tools/plot_convergence.py 10/mnist/fully-connected-cliques/all/2021-03-12-09:13:27-CET ../mnist/fully-connected-cliques/all/2021-03-10-10:19:44-CET 1000/mnist/fully-connected-cliques/all/2021-03-14-17:56:26-CET --labels '10 nodes bsz=1280' '100 nodes bsz=128' '1000 nodes bsz=13' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-fully-connected-cst-updates.png --ymin 80 --add-min-max
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-fully-connected-cst-updates}
\caption{Fully-Connected (Cliques), $O(\frac{n^2}{c^2} + nc)$ edges}
\end{subfigure}
% To regenerate the figure, from directory results/scaling
% python ../../../learn-topology/tools/plot_convergence.py 10/mnist/clique-ring/all/2021-03-13-18:22:01-CET ../mnist/fully-connected-cliques/all/2021-03-10-10:19:44-CET 1000/mnist/fractal-cliques/all/2021-03-14-17:41:59-CET --labels '10 nodes bsz=1280' '100 nodes bsz=128' '1000 nodes bsz=13' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-fractal-cliques-cst-updates.png --ymin 80 --add-min-max
\begin{subfigure}[b]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-fractal-cliques-cst-updates}
\caption{Fractal, $O(nc)$ edges}
\end{subfigure}
% To regenerate the figure, from directory results/scaling
% python ../../../learn-topology/tools/plot_convergence.py 10/mnist/clique-ring/all/2021-03-13-18:22:01-CET ../mnist/clique-ring/all/2021-03-10-18:14:35-CET 1000/mnist/clique-ring/all/2021-03-13-18:22:36-CET --labels '10 nodes bsz=1280' '100 nodes bsz=128' '1000 nodes bsz=13' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-clique-ring-cst-updates.png --ymin 80 --add-min-max
\begin{subfigure}[b]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-clique-ring-cst-updates}
\caption{Ring, $O(n)$ edges}
\end{subfigure}
\caption{\label{fig:d-cliques-mnist-scaling-fully-connected} MNIST: D-Clique Scaling Behaviour, where $n$ is the number of nodes, and $c$ the size of a clique (Constant Updates per Epoch).}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[htb]{0.48\textwidth}
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-10:19:44-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET random-10/all/2021-03-17-20:28:12-CET random-10-diverse/all/2021-03-17-20:28:35-CET random-10-diverse-unbiased-grad/all/2021-03-17-20:29:04-CET --labels 'd-clique (fcc)' 'd-clique (fcc) no clique avg. no uniform init.' '10 random edges' '10 random edges (all classes represented)' '10 random edges (all classes repr.) with unbiased grad.' --add-min-max --legend 'lower right' --ymin 88 --ymax 92.5 --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies.png
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies}
\caption{\label{fig:d-cliques-mnist-linear-comparison-to-non-clustered-topologies} Linear Model}
\end{subfigure}
\hfill
\begin{subfigure}[htb]{0.48\textwidth}
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-10:19:44-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET random-10/all/2021-03-17-20:28:12-CET random-10-diverse/all/2021-03-17-20:28:35-CET random-10-diverse-unbiased-grad/all/2021-03-17-20:29:04-CET --labels 'd-clique (fcc)' 'd-clique (fcc) no clique avg. no uniform init.' '10 random edges' '10 random edges (all classes represented)' '10 random edges (all classes repr.) with unbiased grad.' --add-min-max --legend 'upper right' --ymax 0.7 --yaxis scattering --save-figure ../../figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies-scattering.png
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies-scattering}
\caption{\label{fig:d-cliques-mnist-linear-comparison-to-non-clustered-topologies-scattering} Linear Model (Scattering)}
\end{subfigure}
\\
\begin{subfigure}[htb]{0.48\textwidth}
% To regenerate the figure, from directory results/mnist/gn-lenet
% python ../../../../learn-topology/tools/plot_convergence.py no-init/all/2021-03-22-21:39:54-CET no-init-no-clique-avg/all/2021-03-22-21:40:16-CET random-10/all/2021-03-22-21:41:06-CET random-10-diverse/all/2021-03-22-21:41:46-CET random-10-diverse-unbiased-grad/all/2021-03-22-21:42:04-CET --legend 'lower right' --add-min-max --labels 'd-clique (fcc) clique avg.' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes repr.)' '10 random edges (all classes repr.) with unbiased grad.' --ymin 80 --yaxis test-accuracy --save-figure ../../../figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies.png
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies}
\caption{\label{fig:d-cliques-mnist-lenet-comparison-to-non-clustered-topologies} LeNet Model}
\end{subfigure}
\hfill
\begin{subfigure}[htb]{0.48\textwidth}
% To regenerate the figure, from directory results/mnist/gn-lenet
% python ../../../../learn-topology/tools/plot_convergence.py no-init/all/2021-03-22-21:39:54-CET no-init-no-clique-avg/all/2021-03-22-21:40:16-CET random-10/all/2021-03-22-21:41:06-CET random-10-diverse/all/2021-03-22-21:41:46-CET random-10-diverse-unbiased-grad/all/2021-03-22-21:42:04-CET --legend 'upper right' --add-min-max --labels 'd-clique (fcc) clique avg.' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes repr.)' '10 random edges (all classes repr.) with unbiased grad.' --ymax 0.7 --yaxis scattering --save-figure ../../../figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies-scattering.png
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies-scattering}
\caption{\label{fig:d-cliques-mnist-lenet-comparison-to-non-clustered-topologies-scattering} LeNet Model (Scattering)}
\end{subfigure}
\caption{\label{fig:d-cliques-mnist-comparison-to-non-clustered-topologies} MNIST: Comparison to non-Clustered Topologies}
\end{figure}
\begin{itemize}
\item Clustering does not seem to make a difference in MNIST, even when using a higher-capacity model (LeNet) instead of a linear model. (Fig.\ref{fig:d-cliques-mnist-comparison-to-non-clustered-topologies})
\end{itemize}
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
%\includegraphics[width=0.7\textwidth]{figures/d-cliques-mnist-vs-1-node-test-accuracy}
\caption{\label{fig:mnist-clique-clustering-ring} Ring}
\end{subfigure}
% To regenerate the figure, from directory results/mnist
% python ../../../learn-topology/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET rm-1-edge/all/2021-03-18-17:28:27-CET rm-5-edges/all/2021-03-18-17:29:10-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:28:47-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:29:36-CET --add-min-max --ymin 85 --ymax 92.5 --legend 'lower right' --yaxis test-accuracy --labels 'fcc with clique grad.' 'fcc -1 edge/clique, no clique avg.' 'fcc -5 edges/clique, no clique avg.' 'fcc -1 edge/clique, clique avg.' 'fcc -5 edges/clique, clique avg.' --save-figure ../../figures/d-cliques-mnist-clique-clustering-fcc.png
\begin{subfigure}[b]{0.65\textwidth}
\includegraphics[width=\textwidth]{figures/d-cliques-mnist-clique-clustering-fcc}
\caption{\label{fig:d-cliques-mnist-clique-clustering-fcc} Fully-Connected D-Cliques}
\end{subfigure}
\caption{\label{fig:d-cliques-mnist-clique-clustering} MNIST: Effect of Relaxed Intra-Clique Connectivity.}
\end{figure}
Momentum (CITE), which increases the magnitude of the components of the gradient that are shared between several consecutive steps, is critical for making convolutional networks converge quickly. However, it relies on mini-batches being IID; otherwise, it greatly increases the variance between nodes and is actually detrimental to convergence speed.
Momentum can easily be used with D-Cliques, simply by calculating it from the clique-unbiased average gradient $g_i^{(k)}$ of Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}:
\begin{equation}
v_i^{(k)} \leftarrow m v_i^{(k-1)} + g_i^{(k)}
\end{equation}
It then suffices to modify the original gradient step to use momentum:
\begin{equation}
x_i^{(k-\frac{1}{2})} \leftarrow x_i^{(k-1)} - \gamma v_i^{(k)}
\end{equation}
In addition, it is important that all nodes are initialized with the same model parameters. Otherwise, the random initialization of models introduces another source of variance that persists over many steps. In combination with D-Cliques (Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}), this provides the convergence results of Figure~\ref{fig:d-cliques-cifar10-convolutional}. To assess how far this is from an ``optimal'' solution, in which the delay introduced by multiple hops between nodes is completely removed, we also show the convergence speed of a single node that computes its average gradient from all the samples obtained by all nodes in a single round. The results show that, apart from the variance introduced by the multiple hops between nodes, which slows down the distributed averaging of models, the average convergence speed is close to this optimum, as if the distributed average were computed exactly at every step.
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET clique-ring/all/2021-03-10-11:58:43-CET fully-connected-cliques/all/2021-03-10-13:58:57-CET --add-min-max --yaxis training-loss --labels '1-node IID bsz=2000' '100-nodes non-IID D-Cliques (Ring) bsz=20' '100-nodes non-IID D-Cliques (Fully-Connected) bsz=20' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-vs-1-node-training-loss.png
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-vs-1-node-training-loss}
\caption{\label{fig:d-cliques-cifar10-training-loss} Training Loss}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET clique-ring/all/2021-03-10-11:58:43-CET fully-connected-cliques/all/2021-03-10-13:58:57-CET --add-min-max --yaxis test-accuracy --labels '1-node IID bsz=2000' '100-nodes non-IID D-Cliques (Ring) bsz=20' '100-nodes non-IID D-Cliques (Fully-Connected) bsz=20' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-vs-1-node-test-accuracy.png
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-vs-1-node-test-accuracy}
\caption{\label{fig:d-cliques-cifar10-test-accuracy} Test Accuracy}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-convolutional} D-Cliques Convergence Speed with Convolutional Network on CIFAR10 (100 nodes, Constant Updates per Epoch).}
\end{figure}
\begin{figure}[htbp]
\centering
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET ../scaling/1000/cifar10/fully-connected-cliques/all/2021-03-14-17:41:20-CET ../scaling/1000/cifar10/smallworld-logn-cliques/all/2021-03-23-22:13:57-CET ../scaling/1000/cifar10/fractal-cliques/all/2021-03-14-17:42:46-CET ../scaling/1000/cifar10/clique-ring/all/2021-03-14-09:55:24-CET --add-min-max --yaxis training-loss --labels '1-node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' 'd-cliques (fractal)' 'd-cliques (ring)' --legend 'upper right' --ymax 3 --save-figure ../../figures/d-cliques-cifar10-1000-vs-1-node-training-loss.png
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-1000-vs-1-node-training-loss}
\caption{\label{fig:d-cliques-cifar10-1000-vs-1-node-training-loss} Training Loss}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET ../scaling/1000/cifar10/fully-connected-cliques/all/2021-03-14-17:41:20-CET ../scaling/1000/cifar10/smallworld-logn-cliques/all/2021-03-23-22:13:57-CET ../scaling/1000/cifar10/fractal-cliques/all/2021-03-14-17:42:46-CET ../scaling/1000/cifar10/clique-ring/all/2021-03-14-09:55:24-CET --add-min-max --yaxis test-accuracy --labels '1-node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' 'd-cliques (fractal)' 'd-cliques (ring)' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy.png
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy}
\caption{\label{fig:d-cliques-cifar10-1000-vs-1-node-test-accuracy} Test Accuracy}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-convolutional-1000} D-Cliques Convergence Speed with Convolutional Network on CIFAR10 (1000 nodes, Constant Updates per Epoch).}
\end{figure}
\begin{figure}[htbp]
\centering
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py clique-ring/all/2021-03-10-11:58:43-CET no-init/clique-ring/all/2021-03-13-18:28:30-CET no-clique-avg/clique-ring/all/2021-03-13-18:27:09-CET no-init-no-clique-avg/clique-ring/all/2021-03-13-18:29:58-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg., with uniform init.' 'with clique avg., without uniform init.' 'without clique avg., with uniform init.' 'without clique avg., without uniform init.' --legend 'lower right' --ymax 75 --save-figure ../../figures/d-cliques-cifar10-init-clique-avg-effect-ring-test-accuracy.png
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-init-clique-avg-effect-ring-test-accuracy}
\caption{\label{fig:d-cliques-cifar10-init-clique-avg-effect-ring-test-accuracy} Ring}
\end{subfigure}
% To regenerate the figure, from directory results/cifar10
%python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-13:58:57-CET no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET no-clique-avg/fully-connected-cliques/all/2021-03-13-18:31:36-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg., with uniform init.' 'with clique avg., without uniform init.' 'without clique avg., with uniform init.' 'without clique avg., without uniform init.' --legend 'lower right' --ymax 75 --save-figure ../../figures/d-cliques-cifar10-init-clique-avg-effect-fcc-test-accuracy.png
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-init-clique-avg-effect-fcc-test-accuracy}
\caption{\label{fig:d-cliques-cifar10-init-clique-avg-effect-fcc-test-accuracy} Fully-Connected}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-init-clique-avg-effect} CIFAR10: Effects of Clique Averaging and Uniform Initialization on Convergence Speed. (100 nodes, non-IID, D-Cliques, bsz=20)}
\end{figure}
\begin{figure}[htbp]
\centering
% To regenerate the figure, from directory results/scaling
% python ../../../learn-topology/tools/plot_convergence.py ../cifar10/1-node-iid/all/2021-03-10-13:52:58-CET 10/cifar10/fully-connected-cliques/all/2021-03-13-19:06:02-CET ../cifar10/fully-connected-cliques/all/2021-03-10-13:58:57-CET 1000/cifar10/fully-connected-cliques/all/2021-03-14-17:41:20-CET --labels '1 node IID bsz=2000' '10 nodes non-IID bsz=200' '100 nodes non-IID bsz=20' '1000 nodes non-IID bsz=2' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-scaling-fully-connected-cst-updates.png --add-min-max
\begin{subfigure}[b]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-scaling-fully-connected-cst-updates}
\caption{Fully-Connected (Cliques), $O(\frac{n^2}{c^2} + nc)$ edges}
\end{subfigure}
% To regenerate the figure, from directory results/scaling
% python ../../../learn-topology/tools/plot_convergence.py ../cifar10/1-node-iid/all/2021-03-10-13:52:58-CET 10/cifar10/fully-connected-cliques/all/2021-03-13-19:06:02-CET ../cifar10/fully-connected-cliques/all/2021-03-10-13:58:57-CET 1000/cifar10/fractal-cliques/all/2021-03-14-17:42:46-CET --labels '1 node IID bsz=2000' '10 nodes non-IID bsz=200' '100 nodes non-IID bsz=20' '1000 nodes non-IID bsz=2' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-scaling-fractal-cliques-cst-updates.png --add-min-max
\begin{subfigure}[b]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-scaling-fractal-cliques-cst-updates}
\caption{Fractal, $O(nc)$ edges}
\end{subfigure}
% To regenerate the figure, from directory results/scaling
% python ../../../learn-topology/tools/plot_convergence.py ../cifar10/1-node-iid/all/2021-03-10-13:52:58-CET 10/cifar10/fully-connected-cliques/all/2021-03-13-19:06:02-CET ../cifar10/clique-ring/all/2021-03-10-11:58:43-CET 1000/cifar10/clique-ring/all/2021-03-14-09:55:24-CET --labels '1 node IID bsz=2000' '10 nodes non-IID bsz=200' '100 nodes non-IID bsz=20' '1000 nodes non-IID bsz=2' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-scaling-clique-ring-cst-updates.png --add-min-max
\begin{subfigure}[b]{0.7\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-scaling-clique-ring-cst-updates}
\caption{Ring, $O(n)$ edges}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-scaling-fully-connected} CIFAR10: D-Clique Scaling Behaviour, where $n$ is the number of nodes, and $c$ the size of a clique (Constant Updates per Epoch).}
\end{figure}
\begin{figure}[htbp]
\centering
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-13:58:57-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET random-10/all/2021-03-17-20:30:03-CET random-10-diverse/all/2021-03-17-20:30:41-CET random-10-diverse-unbiased-gradient/all/2021-03-17-20:31:14-CET random-10-diverse-unbiased-gradient-uniform-init/all/2021-03-17-20:31:41-CET --labels 'd-clique (fcc) clique avg., uniform init.' 'd-clique (fcc) no clique avg. no uniform init.' '10 random edges' '10 random edges (all classes repr.)' '10 random (all classes repr.) with unbiased grad.' '10 random (all classes repr.) with unbiased grad., uniform init.' --add-min-max --legend 'upper left' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies.png --ymax 100
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies}
\caption{LeNet Model: Convergence Speed}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-13:58:57-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET random-10/all/2021-03-17-20:30:03-CET random-10-diverse/all/2021-03-17-20:30:41-CET random-10-diverse-unbiased-gradient/all/2021-03-17-20:31:14-CET random-10-diverse-unbiased-gradient-uniform-init/all/2021-03-17-20:31:41-CET --labels 'd-clique (fcc) clique avg., uniform init.' 'd-clique (fcc) no clique avg. no uniform init.' '10 random edges' '10 random edges (all classes repr.)' '10 random (all classes repr.) with unbiased grad.' '10 random (all classes repr.) with unbiased grad., uniform init.' --add-min-max --legend 'upper right' --yaxis scattering --save-figure ../../figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies-scattering.png --ymax 0.7
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies-scattering}
\caption{\label{fig:d-cliques-cifar10-linear-comparison-to-non-clustered-topologies-scattering} LeNet Model: Scattering}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-linear-comparison-to-non-clustered-topologies} CIFAR10: Comparison to non-Clustered Topologies}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
%\includegraphics[width=0.7\textwidth]{figures/d-cliques-cifar10-vs-1-node-test-accuracy}
\caption{\label{fig:cifar10-clique-clustering-ring} Ring}
\end{subfigure}
% To regenerate the figure, from directory results/cifar10
% python ../../../learn-topology/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET rm-1-edge/all/2021-03-18-17:29:58-CET rm-5-edges/all/2021-03-18-17:30:38-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:30:17-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:31:04-CET --add-min-max --ymax 80 --legend 'upper left' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -1 edge/clique, no clique grad.' 'fcc -5 edges/clique, no clique grad.' 'fcc -1 edge/clique, clique grad.' 'fcc -5 edges/clique, clique grad.' --save-figure ../../figures/d-cliques-cifar10-clique-clustering-fcc.png
\begin{subfigure}[b]{0.65\textwidth}
\includegraphics[width=\textwidth]{figures/d-cliques-cifar10-clique-clustering-fcc}
\caption{\label{fig:d-cliques-cifar10-clique-clustering-fcc} Fully-Connected D-Cliques}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-clique-clustering} CIFAR10: Effect of Relaxed Intra-Clique Connectivity.}
\end{figure}
\begin{itemize}
\item Except for the random 10 topology, convergence speed seems to be correlated with scattering in CIFAR-10 with the LeNet model (Fig.~\ref{fig:d-cliques-cifar10-linear-comparison-to-non-clustered-topologies}). There is also more difference between topologies, both in convergence speed and scattering, than for MNIST (Fig.~\ref{fig:d-cliques-mnist-comparison-to-non-clustered-topologies}). Scattering is computed similarly to Consensus Control for Decentralized Deep Learning~\cite{kong2021consensus}.
\end{itemize}
\section{Supplementary Experiments}
\begin{itemize}
\item Test a topology with $n \log n$ edges: \url{https://dl.acm.org/doi/10.1145/335305.335325}
\item Remove the mentions of uniform initialization
\end{itemize}
\section{Related Work}
\aurelien{not sure yet if it is better to have this section here or earlier,
we'll see}
\aurelien{TODO: where to place TornadoAggregate and related refs?}
\paragraph{Dealing with non-IID data in server-based FL.}
Dealing with non-IID data in server-based FL has
recently attracted a lot of interest. While non-IID data is not an issue if
clients send their parameters to the server after each gradient update,
problems arise when one seeks to reduce
the number of communication rounds by allowing each participant to perform
multiple local updates, as in the popular FedAvg algorithm
\cite{mcmahan2016communication}. This led to the design of extensions that are
specifically designed to mitigate the impact of non-IID data when performing
multiple local updates, using adaptive sampling \cite{quagmire}, update
corrections \cite{scaffold} or regularization in the local objective
\cite{fedprox}. Another direction is to embrace the non-IID scenario by
learning personalized models for each client
\cite{smith2017federated,perso_fl_mean,maml,moreau}.
\paragraph{Dealing with non-IID data in fully decentralized FL.}
Non-IID data is known to negatively impact the convergence speed
of fully decentralized FL algorithms in practice \cite{jelasity}. This
motivated the design of algorithms with modified updates based on variance
reduction \cite{tang18a}, momentum correction \cite{momentum_noniid},
cross-gradient
aggregation \cite{cross_gradient}, or multiple averaging steps
between updates (see \cite{consensus_distance} and references therein). These
algorithms
typically require additional communication and/or computation.\footnote{We
also observed that \cite{tang18a} is subject to numerical
instabilities when run on topologies other than rings and grids. When
the rows and columns of $W$ do not exactly
sum to $1$ (due to finite precision), these small differences get amplified by
the proposed updates and make the algorithm diverge.}\aurelien{emphasize that
they only do small scale experiments}
% non-IID known to be a problem for fully decentralized FL. cf Jelasity paper
% D2 and other recent papers on modifying updates: Quasi-Global Momentum,
% Cross-Gradient Aggregation
% papers using multiple averaging steps
% also our personalized papers
% D2 \cite{tang18a}: numerically unstable when $W_{ij}$ rows and columns do not exactly
% sum to $1$, as the small differences are amplified in a positive feedback loop. More work is therefore required on the algorithm to make it usable with a wider variety of topologies. In comparison, D-cliques do not modify the SGD algorithm and instead simply removes some neighbor contributions that would otherwise bias the direction of the gradient. D-Cliques with D-PSGD are therefore as tolerant to ill-conditioned $W_{ij}$ matrices as regular D-PSGD in an IID setting.
In contrast, D-cliques focuses on the design of a sparse topology which is
able to compensate for the effect of non-IID data. We do not modify the simple
and efficient D-SGD
algorithm \cite{lian2017d-psgd} beyond removing some neighbor
contributions
that would otherwise bias the direction of the gradient.
\aurelien{add personalized models - or merge all that in specific paragraph}
% An originality of our approach is to focus on the effect of topology
% level without significantly changing the original simple and efficient D-SGD
% algorithm \cite{lian2017d-psgd}. Other work to mitigate the effect of non-IID
% data on decentralized algorithms are based on performing modified updates (eg
% with variance reduction) or multiple averaging steps.
\paragraph{Impact of topology in fully decentralized FL.} It is well
known
that the choice of network topology can affect the
convergence of fully decentralized algorithms: this is typically accounted
for in the theoretical convergence rate by a dependence on the spectral gap of
the network, see for instance
\cite{Duchi2012a,Colin2016a,lian2017d-psgd,Nedic18}.
However, for IID data, practice contradicts these classic
results: fully decentralized algorithms converge essentially as fast
on sparse topologies like rings or grids as they do on a fully connected
graph \cite{lian2017d-psgd,Lian2018}. Recent work
\cite{neglia2020,consensus_distance} sheds light on this phenomenon with refined convergence analyses based on differences between gradients or parameters across nodes, which are typically
smaller in the IID case. However, these results do not give any clear insight
regarding the role of the topology in the non-IID case. We note that some work
has gone into designing efficient topologies to optimize the use of
network resources (see e.g., \cite{marfoq}), but this is done independently
of how data is distributed across nodes. In summary, the role
of topology in the non-IID data scenario is not well understood and we are not
aware of prior work focusing on this question. Our work shows that an
appropriate choice of data-dependent topology can effectively compensate for
non-IID data.
\section{Conclusion}
%\section{Future Work}
%\begin{itemize}
% \item Non-uniform Class Representation
% \item End-to-End Wall-Clock Training Time, including Clique Formation
% \item Comparison to Shuffling Data in a Data Center
% \item Behaviour in the Presence of Churn
% \item Relaxing Clique Connectivity: Randomly choose a subset of clique neighbours to compute average gradient.
%\end{itemize}
\section{Credits}
%
% ---- Bibliography ----
%
% BibTeX users should specify bibliography style 'splncs04'.
% References will then be sorted and formatted in the correct style.
%
\bibliographystyle{splncs04}
\bibliography{main}
\end{document}