%%%%%%%% mlsys 2022 EXAMPLE LATEX SUBMISSION FILE %%%%%%%%%%%%%%%%%
\documentclass{article}
% Recommended, but optional, packages for figures and better typesetting:
%\usepackage{microtype}
\usepackage{graphicx}
\usepackage{booktabs} % for professional tables
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{xcolor}
\usepackage{soul}
%\usepackage[noend]{algpseudocode}
\usepackage{dsfont}
\usepackage{caption}
\usepackage{subcaption}
% Packages required by the algorithm environments and todo notes used below:
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{todonotes}
% hyperref makes hyperlinks in the resulting PDF.
% If your build breaks (sometimes temporarily if a hyperlink spans a page)
% please comment out the following usepackage line and replace
% \usepackage{mlsys2022} with \usepackage[nohyperref]{mlsys2022} above.
\usepackage{hyperref}
% Attempt to make hyperref and algorithmic work together better:
\newcommand{\theHalgorithm}{\arabic{algorithm}}
% Use the following line for the initial blind version submitted for review:
%\usepackage{mlsys2022}
% If accepted, instead use the following line for the camera-ready submission:
\usepackage[accepted]{mlsys2022}
% The \mlsystitle you define below is probably too long as a header.
% Therefore, a short form for the running title is supplied here:
%\mlsystitlerunning{D-Cliques}
\begin{document}
\twocolumn[
\mlsystitle{D-Cliques: Compensating Non-IIDness in Decentralized Federated Learning
with Topology}
% It is OKAY to include author information, even for blind
% submissions: the style file will automatically remove it for you
% unless you've provided the [accepted] option to the mlsys2022
% package.
% List of affiliations: The first argument should be a (short)
% identifier you will use later to specify author affiliations
% Academic affiliations should list Department, University, City, Region, Country
% Industry affiliations should list Company, City, Region, Country
% You can specify symbols, otherwise they are numbered in order.
% Ideally, you should not use this facility. Affiliations will be numbered
% in order of appearance and this is the preferred way.
%\mlsyssetsymbol{equal}{*}
\begin{mlsysauthorlist}
\mlsysauthor{Aur\'elien Bellet}{inria-lille}
\mlsysauthor{Anne-Marie Kermarrec}{epfl}
\mlsysauthor{Erick Lavoie}{epfl}
\end{mlsysauthorlist}
\mlsysaffiliation{epfl}{EPFL, Lausanne, Switzerland}
\mlsysaffiliation{inria-lille}{Inria, Lille, France}
\mlsyscorrespondingauthor{Erick Lavoie}{erick.lavoie@epfl.ch}
% You may provide any keywords that you
% find helpful for describing your paper; these are used to populate
% the "keywords" metadata in the PDF but will not be shown in the document
\mlsyskeywords{Decentralized Learning, Federated Learning, Topology,
Non-IID Data, Stochastic Gradient Descent}
\vskip 0.3in
\begin{abstract}
%This document provides a basic paper template and submission guidelines.
%Abstracts must be a single paragraph, ideally between 4--6 sentences long.
%Gross violations will trigger corrections at the camera-ready phase.
The convergence speed of machine learning models trained with Federated
Learning is significantly affected by non-independent and identically
distributed (non-IID) data partitions, even more so in a fully decentralized
setting without a central server. In this paper, we show that the impact of
\textit{local class bias}, an important type of data non-IIDness, can be
significantly reduced by carefully designing
the underlying communication topology. We present D-Cliques, a novel topology
that reduces gradient bias by grouping nodes in interconnected cliques such
that the local joint distribution in a clique is representative of the global
class distribution. We also show how to adapt the updates of decentralized SGD
to obtain unbiased gradients and implement an effective momentum with
D-Cliques. Our empirical evaluation on MNIST and CIFAR10 demonstrates that our
approach provides a convergence speed similar to that of a fully-connected
topology with a significant reduction in the number of edges and messages. In
a 1000-node topology, D-Cliques requires 98\% fewer edges and 96\% fewer total
messages, with further possible gains using a small-world topology across
cliques.
\end{abstract}
]
% this must go after the closing bracket ] following \twocolumn[ ...
% This command actually creates the footnote in the first column
% listing the affiliations and the copyright notice.
% The command takes one argument, which is text to display at the start of the footnote.
% The \mlsysEqualContribution command is standard text for equal contribution.
% Remove it (just {}) if you do not need this facility.
%\printAffiliationsAndNotice{} % leave blank if no need to mention equal contribution
\printAffiliationsAndNotice{\mlsysEqualContribution} % otherwise use the standard text.
\section{Introduction}
Machine learning is currently shifting from a \emph{centralized}
paradigm, in which models are trained on data located on a single machine or
in a data center, to \emph{decentralized} ones.
Indeed, the latter paradigm closely matches the natural data distribution
in the numerous use cases where data is collected and processed by several
independent parties (hospitals, companies, personal devices, etc.).
Federated Learning (FL) allows a set
of participants to collaboratively train machine learning models
on their joint
data while keeping it where it has been produced. Not only does this avoid
the costs of moving data, but it also mitigates privacy and confidentiality concerns~\cite{kairouz2019advances}.
Yet, working with natural data distributions introduces new challenges for
learning systems, as
local datasets
reflect the usage and production patterns specific to each participant: they are
\emph{not} independent and identically distributed
(non-IID). More specifically, the relative frequency of different classes of examples may significantly vary
across local datasets \cite{kairouz2019advances,quagmire}.
Therefore, one of the key challenges in FL is to design algorithms that
can efficiently deal with such non-IID data distributions
\cite{kairouz2019advances,fedprox,scaffold,quagmire}.
Federated learning algorithms can be classified into two categories depending
on the underlying network topology they run on. In server-based FL, the
network is organized according to a star topology: a central server orchestrates the training process by
iteratively aggregating model updates received from the participants
(\emph{clients}) and sending back the aggregated model \cite{mcmahan2016communication}. In contrast,
fully decentralized FL algorithms operate over an arbitrary network topology
where participants communicate only with their direct neighbors
in the network. A classic example of such algorithms is Decentralized
SGD (D-SGD) \cite{lian2017d-psgd}, in which participants alternate between
local SGD updates and model averaging with neighboring nodes.
In this paper, we focus on fully decentralized algorithms as they can
generally scale better to the large number of participants seen in ``cross-device''
applications \cite{kairouz2019advances}. Indeed, while a central
server may quickly become a bottleneck as the number of participants increases,
the topology used in fully decentralized algorithms can remain sparse enough
that each participant needs only to communicate with a small number of other
participants, i.e., nodes have a small (constant or logarithmic) degree
\cite{lian2017d-psgd}. For IID data, recent work has shown both empirically
\cite{lian2017d-psgd,Lian2018} and theoretically \cite{neglia2020} that sparse
topologies like rings or grids do not significantly affect the convergence
speed compared to using denser topologies.
\begin{figure*}[t]
\centering
% From directory results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py ring/iid/all/2021-03-30-16:07:06-CEST ring/non-iid/all/2021-03-30-16:07:03-CEST --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/ring-IID-vs-non-IID.png --font-size 20 --linestyles 'solid' 'dashed'
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/ring-IID-vs-non-IID}
\caption{\label{fig:ring-IID-vs-non-IID} Ring}
\end{subfigure}
\quad
% From directory results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py grid/iid/all/2021-03-30-16:07:01-CEST grid/non-iid/all/2021-03-30-16:06:59-CEST --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/grid-IID-vs-non-IID.png --font-size 20 --linestyles 'solid' 'dashed'
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/grid-IID-vs-non-IID}
\caption{\label{fig:grid-IID-vs-non-IID} Grid}
\end{subfigure}
\quad
% From directory results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected/iid/all/2021-03-30-16:07:20-CEST fully-connected/all/2021-03-10-09:25:19-CET --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/fully-connected-IID-vs-non-IID.png --font-size 20 --linestyles 'solid' 'dashed'
\begin{subfigure}[b]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/fully-connected-IID-vs-non-IID}
\caption{\label{fig:fully-connected-IID-vs-non-IID} Fully-connected}
\end{subfigure}
\caption{IID vs non-IID convergence speed of decentralized SGD for
logistic regression on
MNIST for different topologies. Bold lines show the average test
accuracy across nodes
while thin lines show the minimum
and maximum accuracy of individual nodes. While the effect of topology
is negligible for IID data, it is very significant in the
non-IID case. When fully-connected, both cases converge similarly. See
Section~\ref{section:experimental-settings} for details on
the experimental setup.}
\label{fig:iid-vs-non-iid-problem}
\end{figure*}
In contrast to the IID case, however, our experiments demonstrate that
\emph{the impact of topology is extremely significant for non-IID data}. This
phenomenon is illustrated in Figure~\ref{fig:iid-vs-non-iid-problem}: we
observe that a ring or a grid topology clearly jeopardizes the convergence
speed when the relative class frequencies of local distributions differ from
those of the global distribution, i.e. they exhibit \textit{local class bias}.
We stress the fact that, unlike in centralized FL
\cite{kairouz2019advances,scaffold,quagmire}, this happens even when nodes
perform a single local update before averaging the model with their neighbors.
In this paper, we address the following question:
\textit{Can we design sparse topologies with a convergence speed similar to
that of a fully connected network, for a large number of participants with
local class bias?}
Specifically, we make the following contributions:
(1) We propose D-Cliques, a sparse topology in which nodes are organized in
interconnected cliques, i.e. locally fully-connected sets of nodes, such that
the joint data distribution of each clique is close to that of the global
(IID) distribution; (2) We propose Greedy Swap, an algorithm that efficiently
constructs such cliques in the presence of the class heterogeneity previously
studied in the context of Federated Learning~\cite{mcmahan2016communication};
(3) We propose Clique Averaging, a modified version of
the standard D-SGD algorithm which decouples gradient averaging, used for
optimizing local models, from distributed averaging, used to ensure all models
converge, thereby reducing the bias introduced by inter-clique connections;
(4) We show how Clique Averaging can be used to implement unbiased momentum
that would otherwise be detrimental in the non-IID setting; (5) We
demonstrate
through an extensive experimental study that our approach removes the effect
of the local class bias on the MNIST~\cite{mnistWebsite} and
CIFAR10~\cite{krizhevsky2009learning} datasets, for training a linear model and a deep
convolutional network; (6) Finally, we demonstrate the scalability of our
approach by considering up to 1000-node networks, in contrast to most
previous work on fully decentralized learning that considers only a few tens
of nodes
\cite{tang18a,neglia2020,momentum_noniid,cross_gradient,consensus_distance}.
For instance, our results show that using D-Cliques in a 1000-node network
requires 98\% fewer edges ($18.9$ vs $999$ edges per participant on average),
thereby yielding a 96\% reduction in the total number of required messages
(37.8 messages per round per node on average instead of 999), to obtain a
convergence speed similar to that of a fully-connected topology. Furthermore,
an additional 22\% improvement
% (14.5 edges per node on average instead of 18.9)
is possible when using a small-world inter-clique topology, with further
potential gains at larger scales because of its quasilinear scaling
($O(n \log(n))$) in $n$, the number of nodes.
The rest of this paper is organized as follows \dots \todo{EL: Complete once structure stabilizes}
%We first present the problem
%statement and our methodology (Section~\ref{section:problem}). The D-Cliques
%design is presented in Section~\ref{section:d-cliques}) along with an
%empirical illustration of its benefits. In
%Section~\ref{section:clique-averaging-momentum}, we
%show how to further reduce bias with Clique Averaging and how to use it to
%implement momentum. We present the results of our extensive experimental
%study in Section~\ref{section:non-clustered}. We review some related work in
% Section~\ref{section:related-work}, and conclude with promising directions
% for future work in Section~\ref{section:conclusion}.
\section{Problem Statement}
\label{section:problem}
We consider a set $N = \{1, \dots, n \}$ of $n$ nodes seeking to
collaboratively solve a classification task with $L$ classes. Each node has access to a local dataset that
follows its own local distribution $D_i$. The goal is to find a global model
$x$ that performs well on the union of the local distributions by minimizing
the average training loss:
\begin{equation}
\min_{x} \frac{1}{n}\sum_{i=1}^{n} \mathds{E}_
{s_i \sim D_i} [F_i(x;s_i)],
\label{eq:dist-optimization-problem}
\end{equation}
where $s_i$ is a data example drawn from $D_i$ and $F_i$ is the loss function
on node $i$. Therefore, $\mathds{E}_{s_i \sim D_i} F_i(x;s_i)$ denotes the
expected loss of model $x$ on a random example $s_i$ drawn from $D_i$.
To collaboratively solve Problem \eqref{eq:dist-optimization-problem}, each
node can exchange messages with its neighbors in an undirected network graph
$G(N,E)$ where $\{i,j\}\in E$ denotes an edge (communication channel)
between nodes $i$ and $j$.
\subsection{Training Algorithm}
In this work, we use the popular Decentralized Stochastic
Gradient Descent algorithm, aka D-SGD~\cite{lian2017d-psgd}. As
shown in Algorithm~\ref{Algorithm:D-PSGD},
a single iteration of D-SGD at node $i$ consists of sampling a mini-batch
from its local distribution
$D_i$, updating its local model $x_i$ by taking a stochastic gradient descent
(SGD) step according to the mini-batch, and performing a weighted average of
its local model with those of its
neighbors.
This weighted average is defined by a
mixing matrix $W$, in which $W_{ij}$ corresponds to the weight of
the outgoing connection from node $i$ to $j$ and $W_{ij} = 0$ for $
\{i,j\}\notin
E$. To ensure that the local models converge on average to a stationary
point
of Problem
\eqref{eq:dist-optimization-problem}, $W$
must be doubly
stochastic ($\sum_{j \in N} W_{ij} = 1$ and $\sum_{j \in N} W_{ji} = 1$) and
symmetric, i.e. $W_{ij} = W_{ji}$~\cite{lian2017d-psgd}.
\begin{algorithm}[t]
\caption{D-SGD, Node $i$}
\label{Algorithm:D-PSGD}
\begin{algorithmic}[1]
\STATE \textbf{Require:} initial model parameters $x_i^{(0)}$,
learning rate $\gamma$, mixing weights $W$, mini-batch size $m$,
number of steps $K$
\FOR{$k = 1,\ldots, K$}
\STATE $s_i^{(k)} \gets \text{mini-batch sample of size $m$ drawn
from~} D_i$
\STATE $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma \nabla F(x_i^{(k-1)}; s_i^{(k)})$
\STATE $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
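% For concreteness, a minimal Python sketch of one synchronous D-SGD round
% (Algorithm~\ref{Algorithm:D-PSGD}) over all nodes; `sample_batch` and
% `grad` are hypothetical stand-ins for a real training stack, and W is the
% doubly stochastic mixing matrix (e.g., a nested list or numpy array):
%
%   def dsgd_round(models, W, sample_batch, grad, gamma):
%       """Each node takes a local SGD step on its own mini-batch, then
%       averages its model with its neighbors' using the weights in W."""
%       n = len(models)
%       half = [models[i] - gamma * grad(models[i], sample_batch(i))
%               for i in range(n)]
%       return [sum(W[j][i] * half[j] for j in range(n)) for i in range(n)]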
\section{D-Cliques: Creating Locally Representative Cliques}
\label{section:d-cliques}
In this section, we present the design of D-Cliques. To give an intuition of our approach, let us consider the neighborhood of a single node in a grid similar to that of Figure~\ref{fig:grid-IID-vs-non-IID}, represented in Figure~\ref{fig:grid-iid-vs-non-iid-neighbourhood}.
The colors of a node represent the different classes present in its local
dataset. In the IID setting (Figure~\ref{fig:grid-iid-neighbourhood}), each
node has examples of all classes in equal proportions. In the non-IID setting
(Figure~\ref{fig:grid-non-iid-neighbourhood}), each node has examples of only
a
single class and nodes are distributed randomly in the grid.
A single training step, from the point of view of the center node, is equivalent to sampling a mini-batch five times larger from the union of the local distributions of all illustrated nodes.
In the IID case, since gradients are computed from examples of all classes,
the resulting averaged gradient points in a direction that tends to reduce
the loss across all classes. In contrast, in the non-IID case, only a subset
of classes are
represented in the immediate neighborhood of the node, thus the gradients will
be biased towards these classes.
Importantly, as the distributed averaging algorithm takes several steps to
converge, this variance persists across iterations as the locally computed
gradients are far from the global average.\footnote{It is possible, but
very costly, to mitigate this by performing a sufficiently large number of
averaging steps between each gradient step.} This can significantly slow down
convergence speed to the point of making decentralized optimization
impractical.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/grid-iid-neighbourhood}
\caption{\label{fig:grid-iid-neighbourhood} IID}
\end{subfigure}
\begin{subfigure}[b]{0.18\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/grid-non-iid-neighbourhood}
\caption{\label{fig:grid-non-iid-neighbourhood} Non-IID}
\end{subfigure}
\caption{Neighborhood in an IID and non-IID grid.}
\label{fig:grid-iid-vs-non-iid-neighbourhood}
\end{figure}
In D-Cliques, we address the issues of non-IIDness by carefully designing a
network topology composed of \textit{cliques} and \textit{inter-clique
connections}.
First, D-Cliques recovers a balanced representation of classes, close to
that of the IID case, by constructing a topology in which each node $i \in N$
is part of a \textit{clique} $C$ whose clique distribution
$D_C = \bigcup_{i \in C} D_i$ is close to the global distribution
$D = \bigcup_{i \in N} D_i$. We measure the closeness of $D_C$ to $D$ using
its \textit{skew}, i.e. the sum, over all classes in $L$, of the absolute
differences between the probability that a sample $(x,y)$ belongs to a given
class under $D_C$ and under $D$:
\begin{equation}
\label{eq:skew}
\begin{split}
\textit{skew}(C) = \sum_{l \in L} \big|\, & p(y = l \mid (x,y) \in D_C) \\
 & - p(y = l \mid (x,y) \in D) \,\big|
\end{split}
\end{equation}
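% A minimal Python sketch of Eq.~(\ref{eq:skew}), assuming class counts are
% available as length-$|L|$ arrays (`clique_counts` and `global_counts` are
% hypothetical names):
%
%   def skew(clique_counts, global_counts):
%       """L1 distance between the clique and global class distributions."""
%       p_c = [x / sum(clique_counts) for x in clique_counts]
%       p_g = [x / sum(global_counts) for x in global_counts]
%       return sum(abs(a - b) for a, b in zip(p_c, p_g))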
Second, to ensure a global consensus and convergence,
\textit{inter-clique connections}
are introduced by connecting a small number of node pairs that are
part of different cliques. In the following, we introduce up to one
inter-clique connection per node such that each clique has exactly one edge
with every other clique; see Figure~\ref{fig:d-cliques-figure} for the
corresponding D-Cliques network in the case of $n=100$ nodes and $c=10$
classes. We will explore sparser inter-clique topologies in
Section~\ref{section:interclique-topologies}.
The mixing matrix $W$ required by D-SGD is obtained from standard
Metropolis-Hasting weights~\cite{xiao2004fast} computed from the above
topology, namely:
\begin{equation}
  W_{ij} = \begin{cases}
    \frac{1}{\max(\text{degree}(i), \text{degree}(j)) + 1} & \text{if}~i \neq
    j \text{ and } \{i,j\}\in E,\\
   1 - \sum_{k \neq i} W_{ik} & \text{if}~i = j, \\
   0 & \text{otherwise}.
  \end{cases}
   \label{eq:metro}
\end{equation}
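% A minimal Python sketch of Eq.~(\ref{eq:metro}), assuming the topology is
% given as an adjacency structure `neighbors` (a hypothetical dict mapping
% each node to the set of its neighbors):
%
%   def metropolis_weights(neighbors):
%       W = {i: {} for i in neighbors}
%       for i in neighbors:
%           for j in neighbors[i]:
%               W[i][j] = 1 / (max(len(neighbors[i]), len(neighbors[j])) + 1)
%           W[i][i] = 1 - sum(W[i].values())  # self-weight completes the row
%       return W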
We construct D-Cliques by initializing cliques at random, using at most $M$
nodes per clique to limit the intra-clique communication costs, and by then
swapping nodes between pairs of cliques chosen at random such that the swap
decreases the skew of that pair while keeping the size of the cliques
constant (see Algorithm~\ref{Algorithm:D-Clique-Construction}).
Only swaps that decrease the skew are performed, hence the algorithm can be
seen as a randomized greedy algorithm. We note that it only requires
the knowledge of the local class distribution at each node. For the sake of
simplicity, we assume that D-Cliques are constructed from the global
knowledge of these distributions, which can easily be obtained by
decentralized averaging in a pre-processing step.
\begin{algorithm}[h]
\caption{D-Cliques Construction: Greedy Swap}
\label{Algorithm:D-Clique-Construction}
\begin{algorithmic}[1]
\STATE \textbf{Require:} Max clique size $M$, Max steps $K$,
\STATE Set of all nodes $N = \{ 1, 2, \dots, n \}$,
\STATE $\textit{skew}(S)$: skew of subset $S \subseteq N$ compared to the global distribution (Eq.~\ref{eq:skew}),
\STATE $\textit{intra}(DC)$: edges within cliques $C \in DC$,
\STATE $\textit{inter}(DC)$: edges between $C_1,C_2 \in DC$ (Sec.~\ref{section:interclique-topologies}),
\STATE $\textit{weights}(E)$: set weights to edges in $E$ (Eq.~\ref{eq:metro}).
\STATE ~~
\STATE $DC \leftarrow []$ \COMMENT{Empty list}
\WHILE {$N \neq \emptyset$}
\STATE $C \leftarrow$ sample $M$ nodes from $N$ at random
\STATE $N \leftarrow N \setminus C$; $DC.append(C)$
\ENDWHILE
\FOR{$k \in \{1, \dots, K\}$}
\STATE $C_1,C_2 \leftarrow$ sample 2 from $DC$ at random
\STATE $\textit{swaps} \leftarrow []$
\FOR{$n_1 \in C_1, n_2 \in C_2$}
\STATE $s \leftarrow \textit{skew}(C_1) + \textit{skew}(C_2)$
\STATE $s' \leftarrow \textit{skew}(C_1-n_1+n_2) + \textit{skew}(C_2 -n_2+n_1)$
\IF {$s' < s$}
\STATE \textit{swaps}.append($(n_1, n_2)$)
\ENDIF
\ENDFOR
\IF {\#\textit{swaps} $> 0$}
\STATE $(n_1,n_2) \leftarrow$ sample 1 from $\textit{swaps}$ at random
\STATE $C_1 \leftarrow C_1 - n_1 + n_2; C_2 \leftarrow C_2 - n_2 + n_1$
\ENDIF
\ENDFOR
\RETURN $(weights(\textit{intra}(DC) \cup \textit{inter}(DC)), DC)$
\end{algorithmic}
\end{algorithm}
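% A minimal Python sketch of Greedy Swap
% (Algorithm~\ref{Algorithm:D-Clique-Construction}); `skew` is assumed to
% compute Eq.~(\ref{eq:skew}) from the nodes' local label distributions:
%
%   import random
%
%   def greedy_swap(nodes, M, K, skew):
%       random.shuffle(nodes)
%       cliques = [nodes[i:i + M] for i in range(0, len(nodes), M)]
%       for _ in range(K):
%           c1, c2 = random.sample(cliques, 2)
%           swaps = [(a, b) for a in c1 for b in c2
%                    if skew([x for x in c1 if x != a] + [b])
%                       + skew([x for x in c2 if x != b] + [a])
%                       < skew(c1) + skew(c2)]
%           if swaps:
%               a, b = random.choice(swaps)
%               c1[c1.index(a)], c2[c2.index(b)] = b, a
%       return cliques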
The key idea of D-Cliques is that because the clique-level distribution $D_C$
is representative of the global distribution $D$,
the local models of nodes across cliques remain rather close. Therefore, a
sparse inter-clique topology can be used, significantly reducing the total
number of edges without slowing down the convergence. Furthermore, the degree
of each node in the network remains low and even, making the D-Cliques
topology very well-suited to decentralized federated learning.
\section{Optimizing with Clique Averaging and Momentum}
\label{section:clique-averaging-momentum}
In this section, we present Clique Averaging. This feature, when added to
D-SGD, removes the bias caused by the inter-clique edges of D-Cliques. We
also show how it can be used to successfully implement momentum for non-IID
data.
\subsection{Clique Averaging: Debiasing Gradients from Inter-Clique Edges}
\label{section:clique-averaging}
While limiting the number of inter-clique connections reduces the
number of messages traveling on the network, it also introduces its own
bias.
Figure~\ref{fig:connected-cliques-bias} illustrates the problem on the
simple case of two cliques connected by one inter-clique edge (here,
between the green node of the left clique and the pink node of the right
clique). Let us focus on node A. With weights computed as in \eqref{eq:metro},
node A's self-weight is $\frac{12}
{110}$, the weight between A and the green node connected to B is
$\frac{10}{110}$, and
all other neighbors of A have a weight of $\frac{11}{110}$. Therefore, the
gradient at A is biased towards its own class (pink) and against the green
class. A similar bias holds for all other nodes
without inter-clique edges with respect to their respective classes. For node
B, all its edge weights (including its self-weight) are equal to $\frac{1}
{11}$. However, the green class is represented twice (once as a clique
neighbor and once from the inter-clique edge), while all other classes are
represented only once. This biases the gradient toward the green class. The
combined effect of these two sources of bias is to increase the variance
of the local models across nodes.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{../figures/connected-cliques-bias}
\caption{\label{fig:connected-cliques-bias} Illustrating the bias induced by
inter-clique connections (see main text).}
\end{figure}
We address this problem by adding \emph{Clique Averaging} to D-SGD
(Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}), which essentially
decouples gradient averaging from model averaging. The idea is to use only the
gradients of
neighbors within the same clique to compute the average gradient,
providing an equal representation to all classes. In contrast, all neighbors'
models, including those across inter-clique edges, participate in the model
averaging step as in the original version.
\begin{algorithm}[t]
\caption{D-SGD with Clique Averaging, Node $i$}
\label{Algorithm:Clique-Unbiased-D-PSGD}
\begin{algorithmic}[1]
\STATE \textbf{Require} initial model parameters $x_i^{(0)}$, learning
rate $\gamma$, mixing weights $W$, mini-batch size $m$, number of
steps $K$
\FOR{$k = 1,\ldots, K$}
\STATE $s_i^{(k)} \gets \text{mini-batch sample of size $m$ drawn
from~} D_i$
\STATE $g_i^{(k)} \gets \frac{1}{|\textit{Clique}(i)|}\sum_{j \in \textit{Clique}(i)} \nabla F(x_j^{(k-1)}; s_j^{(k)})$
\STATE $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma g_i^{(k)}$
\STATE $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
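% A minimal Python sketch of the modified round
% (Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}): gradients are averaged
% within cliques only, while models are still averaged over all neighbors;
% `grads[j]` stands for node j's mini-batch gradient, exchanged with clique
% members in a real system, and `clique_of[i]` contains i itself:
%
%   def clique_averaged_round(models, W, clique_of, grads, gamma):
%       n = len(models)
%       half = []
%       for i in range(n):
%           clique = clique_of[i]
%           g = sum(grads[j] for j in clique) / len(clique)  # unbiased gradient
%           half.append(models[i] - gamma * g)
%       # model averaging unchanged, including inter-clique edges
%       return [sum(W[j][i] * half[j] for j in range(n)) for i in range(n)]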
\subsection{Implementing Momentum with Clique Averaging}
\label{section:momentum}
Efficiently training high capacity models usually requires additional
optimization techniques. In particular, momentum~\cite{pmlr-v28-sutskever13}
increases the magnitude of the components of the gradient that are shared
between several consecutive steps, and is critical for deep convolutional networks like
LeNet~\cite{lecun1998gradient,quagmire} to converge quickly. However, a direct
application of momentum in a non-IID setting can actually be very detrimental.
As illustrated in Figure~\ref{fig:d-cliques-cifar10-momentum-non-iid-effect}
for the case of LeNet on CIFAR10 with 100 nodes, D-Cliques with momentum
even fails to converge. Not using momentum actually gives faster
convergence, but there is a significant gap compared to the case of a single
IID node with momentum.
Clique Averaging (Section~\ref{section:clique-averaging})
allows us to compute an unbiased momentum from the
unbiased average gradient $g_i^{(k)}$ of Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}:
\begin{equation}
v_i^{(k)} \leftarrow m v_i^{(k-1)} + g_i^{(k)}
\end{equation}
It then suffices to modify the original gradient step to use momentum:
\begin{equation}
x_i^{(k-\frac{1}{2})} \leftarrow x_i^{(k-1)} - \gamma v_i^{(k)}
\end{equation}
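% In the clique-averaging sketch above, this amounts to one extra line per
% node, assuming a momentum coefficient `beta` (written $m$ in the equations)
% and per-node state `v[i]` initialized to zero:
%
%   v[i] = beta * v[i] + g                    # unbiased momentum
%   half.append(models[i] - gamma * v[i])     # replaces the plain SGD step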
\section{Comparative Evaluation and Extensions}
\label{section:non-clustered}
In this section, we first compare D-Cliques to alternative topologies to
confirm the relevance of our main design choices. Then,
we evaluate some extensions of D-Cliques to further reduce the number of
inter-clique connections so as to gracefully scale with the number of
nodes.
\subsection{Methodology}
\subsubsection{Non-IID assumptions.}
\label{section:non-iid-assumptions}
As demonstrated in Figure~\ref{fig:iid-vs-non-iid-problem}, lifting the
assumption of IID data significantly challenges the learning algorithm. In
this paper, we focus on an \textit{extreme case of local class bias}: we
consider that each node only has examples from a single class.
To isolate the effect of local class bias from other potentially compounding
factors, we make the following simplifying assumptions: (1) All classes are
equally represented in the global dataset; (2) All classes are represented on
the same number of nodes; (3) All nodes have the same number of examples.
We believe that these assumptions are reasonable in the context of our study
because: (1)
Global class
imbalance equally
affects the optimization process on a single node and is therefore not
specific to the decentralized setting; (2) Our results do not exploit specific
positions in the topology; (3) Imbalanced dataset sizes across nodes can be
addressed for instance by appropriately weighting the individual loss
functions. Our results can be extended to support additional compounding factors in future work.
\subsubsection{Experimental setup.}
\label{section:experimental-settings}
Our main goal is to provide a fair comparison of the convergence speed across
different topologies and algorithmic variations, in order to
show that our approach
can remove much of the effect of local class bias.
We experiment with two datasets: MNIST~\cite{mnistWebsite} and
CIFAR10~\cite{krizhevsky2009learning}, which both have $c=10$ classes.
For MNIST, we use 45k and 10k examples from the original 60k
training set for training and validation respectively. The remaining 5k
training examples were randomly removed to ensure all 10 classes are balanced
while ensuring that the dataset is evenly divisible across 100 and 1000 nodes.
We use all 10k examples of
the test set to measure prediction accuracy. For CIFAR10, classes are evenly
balanced: we use 45k/50k images of the original training set for training,
5k/50k for validation, and all 10k examples of the test set for measuring
prediction accuracy.
We
use a logistic regression classifier for MNIST, which
provides up to 92.5\% accuracy in the centralized setting.
For CIFAR10, we use a Group-Normalized variant of LeNet~\cite{quagmire}, a
deep convolutional network which achieves an accuracy of $72.3\%$ in the
centralized setting.
These models are thus reasonably accurate (which is sufficient to
study the effect of the topology) while being sufficiently fast to train in a
fully decentralized setting and simple enough to configure and analyze.
Regarding hyper-parameters, we jointly optimize the learning rate and
mini-batch size on the
validation set for 100 nodes, obtaining respectively $0.1$ and $128$ for
MNIST and $0.002$ and $20$ for CIFAR10.
For CIFAR10, we additionally use a momentum of $0.9$.
We evaluate 100- and 1000-node networks by creating multiple models in memory and simulating the exchange of messages between nodes.
To ignore the impact of distributed execution strategies and system
optimization techniques, we report the test accuracy of all nodes (min, max,
average) as a function of the number of times each example of the dataset has
been sampled by a node, i.e. an \textit{epoch}. This is equivalent to the classic case of a single node sampling the full distribution.
To further make results comparable across different numbers of nodes, we
scale the batch size inversely with the number of nodes,
e.g. on MNIST, 128 with 100 nodes vs. 13 with 1000 nodes. This
ensures the same number of model updates and averaging per epoch, which is
important to have a fair comparison.\footnote{Updating and averaging models
after every example can eliminate the impact of local class bias. However, the
resulting communication overhead is impractical.}
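% A one-line Python sketch of this scaling rule, using the MNIST reference
% values above (`scaled_batch_size` is a hypothetical helper):
%
%   def scaled_batch_size(ref_bs=128, ref_n=100, n=1000):
%       # keep n * batch_size constant so updates per epoch match
%       return max(1, round(ref_bs * ref_n / n))   # 128 @ 100 -> 13 @ 1000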
Finally, we compare our results against an ideal baseline: either a
fully-connected network topology with the same number of nodes or a single IID
node. In both cases, the topology has no effect on
the optimization. For a certain choice of number of nodes and
mini-batch size, both approaches are equivalent.
\subsection{Main Results}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/fully-connected-cliques}
\caption{\label{fig:d-cliques-figure} D-Cliques (fully-connected
cliques)}
\end{subfigure}
\hfill
% To regenerate figure, from results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected/all/2021-03-10-09:25:19-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET --add-min-max --yaxis test-accuracy --ymin 80 --ymax 92.5 --labels '100 nodes non-IID fully-connected' '100 nodes non-IID d-cliques' --save-figure ../../figures/d-cliques-mnist-vs-fully-connected.png --legend 'lower right' --font-size 16 --linestyles 'solid' 'dashed'
\begin{subfigure}[b]{0.26\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-mnist-vs-fully-connected.png}
\caption{\label{fig:d-cliques-example-convergence-speed} Convergence Speed
on MNIST}
\end{subfigure}
\caption{\label{fig:d-cliques-example} D-Cliques topology and convergence
speed on MNIST.}
\end{figure}
Figure~\ref{fig:d-cliques-example-convergence-speed} illustrates the
performance of D-Cliques on MNIST with $n=100$ nodes. Observe that the
convergence speed is
very close
to that of a fully-connected topology, and significantly better than with
a ring or a grid (see Figure~\ref{fig:iid-vs-non-iid-problem}). With
100 nodes, it offers a reduction of $\approx90\%$ in the number of edges
compared to a fully-connected topology. Nonetheless, there is still
significant variance in the accuracy across nodes, which is due to the bias
introduced by inter-clique edges. We address this issue in the next section.
\subsection{Importance of Clique Averaging and Momentum}
% To regenerate figure, from results/mnist:
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected/all/2021-03-10-09:25:19-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET --add-min-max --yaxis test-accuracy --labels '100 nodes non-IID fully-connected' '100 nodes non-IID d-cliques w/o clique avg.' '100 nodes d-cliques non-IID w/ clique avg.' --legend 'lower right' --ymin 89 --ymax 92.5 --font-size 13 --save-figure ../../figures/d-clique-mnist-clique-avg.png --linestyles 'solid' 'dashed' 'dotted'
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{../figures/d-clique-mnist-clique-avg}
\caption{\label{fig:d-clique-mnist-clique-avg} Effect of Clique Averaging on MNIST. Y-axis starts at 89.}
\end{figure}
As illustrated in Figure~\ref{fig:d-clique-mnist-clique-avg}, Clique Averaging
significantly reduces the variance of models across nodes and accelerates
convergence to reach the same level as the one obtained with a
fully-connected topology. Note that Clique Averaging induces a small
additional cost, as gradients
and models need to be sent in two separate rounds of messages. Nonetheless, compared to fully connecting all nodes, the total number of messages is reduced by $\approx 80\%$.
\begin{figure}[t]
\centering
% To regenerate figure, from results/cifar10
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET no-init-no-clique-avg-no-momentum/fully-connected-cliques/all/2021-03-26-13:47:35-CET/ --legend 'upper right' --add-min-max --labels '1-node IID w/ momentum' '100 nodes non-IID d-cliques w/ momentum' '100 nodes non-IID d-cliques w/o momentum' --font-size 14 --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-momentum-non-iid-effect.png --ymax 100 --linestyles 'solid' 'dashed' 'dotted'
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-cifar10-momentum-non-iid-effect}
\caption{\label{fig:d-cliques-cifar10-momentum-non-iid-effect} Without Clique Averaging }
\end{subfigure}
\hfill
% To regenerate figure, from results/cifar10
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET --legend 'upper right' --add-min-max --labels '1-node IID w/ momentum' '100 nodes non-IID d-clique w/ momentum' --font-size 14 --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-momentum-non-iid-clique-avg-effect.png --ymax 100 --linestyles 'solid' 'dashed' 'dotted'
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-cifar10-momentum-non-iid-clique-avg-effect}
\caption{\label{fig:d-cliques-cifar10-momentum-non-iid-clique-avg-effect} With Clique Averaging}
\end{subfigure}
\caption{\label{fig:cifar10-momentum} Non-IID Effect of Momentum on CIFAR10 with LeNet}
\end{figure}
As shown in
Figure~\ref{fig:d-cliques-cifar10-momentum-non-iid-clique-avg-effect},
the use of Clique Averaging restores the benefits of momentum and closes the gap
with the centralized setting.
\subsection{Comparing D-Cliques to Other Sparse Topologies}
We demonstrate the advantages of D-Cliques over alternative sparse topologies
that have a similar number of edges. First, we consider topologies in which
the neighbors of each node are selected at random (hence without any clique
structure).
Specifically, for $n=100$ nodes, we
construct a random topology such that each node has exactly 10 edges, which is
similar to the average 9.9 edges of our D-Cliques topology
(Figure~\ref{fig:d-cliques-figure}). To better understand the role of
the clique structure beyond merely ensuring class representativity among
neighbors,
we also compare to a random topology similar to the one described above except
that edges are
chosen such that each node has neighbors of all possible classes. Finally, we
also implement an analog of Clique Averaging for these random topologies,
where all nodes de-bias their gradient based on the class distribution of
their neighbors. In the latter case, since nodes do not form a clique, each
node obtains a different average gradient.
The results for MNIST and CIFAR10 are shown in
Figure~\ref{fig:d-cliques-comparison-to-non-clustered-topologies}. For MNIST,
a purely random topology has higher variance and lower convergence speed than
D-Cliques (with or without Clique Averaging), while a random topology with
class representativity performs similarly as D-Cliques without Clique
Averaging. However, and perhaps surprisingly, a random topology with unbiased
gradients performs slightly worse than without them. In any case, D-Cliques with
Clique Averaging outperforms all random topologies, showing that the clique
structure has a small but noticeable effect on the average accuracy and
significantly reduces the variance across nodes in this setup.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.35\textwidth}
% To regenerate the figure, from directory results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-10:19:44-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET random-10/all/2021-07-23-11:59:56-CEST random-10-diverse/all/2021-03-17-20:28:35-CET --labels 'd-clique (fcc)' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes represented)' --add-min-max --legend 'lower right' --ymin 80 --ymax 92.5 --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted' 'dashdot'
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies}
\caption{MNIST with Linear Model}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET random-10/all/2021-07-23-14:33:48-CEST random-10-diverse/all/2021-03-17-20:30:41-CET random-10-diverse-unbiased-gradient/all/2021-03-17-20:31:14-CET --labels 'd-clique (fcc) clique avg.' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes repr.)' '10 random (all classes repr.) with unbiased grad.' --add-min-max --legend 'upper left' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies.png --ymax 119 --font-size 13 --linestyles 'solid' 'dashed' 'dotted' 'dashdot' 'solid' --markers '' '' '' '' 'o'
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies}
\caption{CIFAR10 with LeNet}
\end{subfigure}
\caption{\label{fig:d-cliques-comparison-to-non-clustered-topologies} Comparison to Non-Clustered Topologies}
\end{figure}
On the harder CIFAR10 dataset with a deep convolutional network, the
differences are much more dramatic:
D-Cliques with Clique Averaging and momentum turns out to be critical for fast
convergence.
Crucially, all random topologies fail to converge to a good solution. This
confirms that our clique structure is important to reduce variance
across nodes and improve the convergence. The difference with the previous
experiment seems to be due to both the use of a higher capacity model and to
the intrinsic characteristics of the datasets.
While the previous experiments suggest that our clique structure is
instrumental in obtaining good performance, one may wonder whether
intra-clique full connectivity is actually necessary.
Figure~\ref{fig:d-cliques-intra-connectivity} shows the convergence speed of
a D-Cliques topology where cliques have been sparsified by randomly
removing 1 or 5 undirected edges per clique (out of 45). Strikingly, both for MNIST and
CIFAR10, removing just a single edge from the cliques has a
significant effect on the
convergence speed. On CIFAR10, it even entirely negates the
benefits of D-Cliques.
Overall, these results show that achieving fast convergence on non-IID
data with sparse topologies requires a very careful design, as we have
proposed with D-Cliques.
\begin{figure*}[t]
\centering
\begin{subfigure}[htbp]{0.4\textwidth}
\centering
% To regenerate the figure, from directory results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET rm-1-edge/all/2021-03-18-17:28:27-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:28:47-CET --add-min-max --ymin 85 --ymax 92.5 --legend 'lower right' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -1 edge/clique, no clique grad.' 'fcc -1 edge/clique, clique grad.' --save-figure ../../figures/d-cliques-mnist-clique-clustering-fcc-minus-1-edge.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted'
\includegraphics[width=\textwidth]{../figures/d-cliques-mnist-clique-clustering-fcc-minus-1-edge}
\caption{\label{fig:d-cliques-mnist-clique-clustering-minus-1-edge} MNIST (-1 edge/clique)}
\end{subfigure}
\hfill
\begin{subfigure}[htbp]{0.4\textwidth}
\centering
% To regenerate the figure, from directory results/cifar10
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET rm-1-edge/all/2021-03-18-17:29:58-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:30:17-CET --add-min-max --ymax 80 --legend 'upper left' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -1 edge/clique, no clique grad.' 'fcc -1 edge/clique, clique grad.' --save-figure ../../figures/d-cliques-cifar10-clique-clustering-fcc-minus-1-edge.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted'
\includegraphics[width=\textwidth]{../figures/d-cliques-cifar10-clique-clustering-fcc-minus-1-edge}
\caption{\label{fig:d-cliques-cifar10-clique-clustering-minus-1-edge} CIFAR10 (-1 edge/clique)}
\end{subfigure}
%\begin{subfigure}[htbp]{0.35\textwidth}
% \centering
%% To regenerate the figure, from directory results/mnist
%% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET rm-5-edges/all/2021-03-18-17:29:10-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:29:36-CET --add-min-max --ymin 85 --ymax 92.5 --legend 'lower right' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -5 edges/clique, no clique grad.' 'fcc -5 edges/clique, clique grad.' --save-figure ../../figures/d-cliques-mnist-clique-clustering-fcc-minus-5-edges.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted'
% \includegraphics[width=\textwidth]{../figures/d-cliques-mnist-clique-clustering-fcc-minus-5-edges}
%\caption{\label{fig:d-cliques-mnist-clique-clustering-minus-5-edges} MNIST (-5 edges/clique)}
%\end{subfigure}
%\hfill
%\begin{subfigure}[htbp]{0.35\textwidth}
% \centering
%% To regenerate the figure, from directory results/cifar10
%% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET rm-5-edges/all/2021-03-18-17:30:38-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:31:04-CET --add-min-max --ymax 80 --legend 'upper left' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -5 edges/clique, no clique grad.' 'fcc -5 edges/clique, clique grad.' --save-figure ../../figures/d-cliques-cifar10-clique-clustering-fcc-minus-5-edges.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted'
% \includegraphics[width=\textwidth]{../figures/d-cliques-cifar10-clique-clustering-fcc-minus-5-edges}
%\caption{\label{fig:d-cliques-cifar10-clique-clustering-minus-5-edges} CIFAR10 (-5 edges/clique)}
%\end{subfigure}
\caption{\label{fig:d-cliques-intra-connectivity} Importance of Intra-Clique Full-Connectivity}
\end{figure*}
\subsection{Scaling up D-Cliques with Sparser Inter-Clique Topologies}
\label{section:interclique-topologies}
So far, we have used a fully-connected inter-clique topology for D-Cliques,
which has the advantage of bounding the
\textit{path length}\footnote{The \textit{path length} is the number of edges on the shortest path between two nodes.} to $3$ between any pair of nodes. This choice requires $
\frac{n}{c}(\frac{n}{c} - 1)$ inter-clique edges, which scales quadratically
in the number of nodes $n$ for a given clique size $c$\footnote{We consider \textit{directed} edges in the analysis: the number of undirected edges is half and does not affect asymptotic behavior.}. This can become significant at larger scales when $n$ is
large compared to $c$.
In this last series of experiments, we evaluate the effect of choosing sparser
inter-clique topologies on the convergence speed for a larger network of 1000
nodes. We compare the scalability and convergence speed of several
D-Cliques variants, which all use $O(nc)$ edges
to create cliques as a starting point.
We first measure the convergence speed of inter-clique topologies whose number
of edges scales linearly with the number of nodes. Among those, the
\textit{ring} has (almost) the smallest possible number of edges: it uses
$\frac{2n}{c}$ inter-clique edges, but its average path length between nodes
also scales linearly.
We also consider another topology, which we call \textit{fractal}, that provides a
logarithmic
bound on the average path length. In this hierarchical scheme,
cliques are assembled in larger groups of $c$ cliques that are connected internally with one edge per
pair of cliques, but with only one edge between pairs of larger groups. The
topology is built recursively such that $c$ groups will themselves form a
larger group at the next level up. This results in at most $c$ edges per node
if edges are evenly distributed: i.e., each group within the same level adds
at most $c-1$ edges to other groups, leaving one node per group with $c-1$
edges that can receive an additional edge to connect with other groups at the next level.
Since nodes have at most $c$ edges, $n$ nodes have at most $nc$ edges, therefore
the number of edges in this fractal scheme indeed scales linearly in the number of nodes.
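% A possible clique-level Python reading of this fractal construction,
% returning pairs of cliques to connect (a real implementation would also
% spread these edges evenly over the nodes of each clique):
%
%   def fractal_pairs(cliques, c):
%       groups = [cliques[i:i + c] for i in range(0, len(cliques), c)]
%       pairs = [(g[i], g[j]) for g in groups
%                for i in range(len(g)) for j in range(i + 1, len(g))]
%       if len(groups) > 1:
%           # one edge between pairs of groups: recurse on representatives
%           pairs += fractal_pairs([g[0] for g in groups], c)
%       return pairs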
Second, we look at another scheme
in which the number of edges scales in a near, but not quite, linear fashion.
We propose to connect cliques according to a
small-world-like topology~\cite{watts2000small} applied on top of a
ring~\cite{stoica2003chord}. In this scheme, cliques are first arranged in a
ring. Then each clique adds symmetric edges, both clockwise and
counter-clockwise on the ring, with the $m$ closest cliques in sets of
cliques that are exponentially bigger the further they are on the ring (see
Algorithm~\ref{Algorithm:Smallworld} in the appendix for
details on the construction). This ensures a good connectivity with other
cliques that are close on the ring, while still keeping the average
path length small. This scheme uses $2m\frac{n}{c}\log(\frac{n}{c})$ inter-clique edges and
therefore grows in the order of $O(n\log(n))$ with the number of nodes.
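% One possible Python reading of this construction at the clique level
% (Algorithm~\ref{Algorithm:Smallworld} in the appendix gives the exact
% version; this sketch is only an interpretation of it):
%
%   def smallworld_pairs(num_cliques, m):
%       pairs = set()
%       for i in range(num_cliques):
%           k = 0
%           while 2 ** k <= num_cliques // 2:
%               # m closest cliques in the interval [2^k, 2^(k+1)),
%               # both clockwise and counter-clockwise on the ring
%               for off in range(2 ** k, min(2 ** k + m, 2 ** (k + 1))):
%                   for j in ((i + off) % num_cliques, (i - off) % num_cliques):
%                       if i != j:
%                           pairs.add((min(i, j), max(i, j)))
%               k += 1
%       return sorted(pairs)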
Figure~\ref{fig:d-cliques-cifar10-convolutional} shows the convergence
speed of all the above schemes on MNIST and CIFAR10, compared to the ideal
baseline
of a
single IID node performing the same number of updates per epoch (representing
the fastest convergence speed achievable if topology had no impact). Among the linear schemes, the ring
topology converges but is much slower than our fractal scheme. Among the super-linear schemes, the small-world
topology has a convergence speed that is almost the same as with a
fully-connected inter-clique topology but with 22\% less edges
(14.5 edges on average instead of 18.9).
While the small-world inter-clique topology shows promising scaling behaviour, the
fully-connected topology still offers
significant benefits with 1000 nodes, as it represents a 98\% reduction in the
number of edges compared to fully connecting individual nodes (18.9 edges on
average instead of 999) and a 96\% reduction in the number of messages (37.8
messages per round per node on average instead of 999). We refer to
Appendix~\ref{app:scaling} for additional results comparing the convergence
speed across different number of nodes. Overall, these results
show that D-Cliques can nicely scale with the number of nodes.
\begin{figure*}[t]
\centering
% To regenerate the figure, from directory results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-09:20:03-CET ../scaling/1000/mnist/fractal-cliques/all/2021-03-14-17:41:59-CET ../scaling/1000/mnist/clique-ring/all/2021-03-13-18:22:36-CET --add-min-max --yaxis test-accuracy --legend 'lower right' --ymin 84 --ymax 92.5 --labels '1 node IID' 'd-cliques (fractal)' 'd-cliques (ring)' --save-figure ../../figures/d-cliques-mnist-1000-nodes-comparison-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted'
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-mnist-1000-nodes-comparison-linear}
\caption{\label{fig:d-cliques-mnist-1000-nodes-comparison-linear} MNIST with Linear Model: Linear Inter-clique Topologies.}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET ../scaling/1000/cifar10/fractal-cliques/all/2021-03-14-17:42:46-CET ../scaling/1000/cifar10/clique-ring/all/2021-03-14-09:55:24-CET --add-min-max --yaxis test-accuracy --labels '1-node IID' 'd-cliques (fractal)' 'd-cliques (ring)' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted'
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-linear}
\caption{\label{fig:d-cliques-cifar10-1000-vs-1-node-test-accuracy-linear} CIFAR10 with LeNet Model: Linear Inter-clique Topologies.}
\end{subfigure}
% To regenerate the figure, from directory results/mnist
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-09:20:03-CET ../scaling/1000/mnist/fully-connected-cliques/all/2021-03-14-17:56:26-CET ../scaling/1000/mnist/smallworld-logn-cliques/all/2021-03-23-21:45:39-CET --add-min-max --yaxis test-accuracy --legend 'lower right' --ymin 84 --ymax 92.5 --labels '1 node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' --save-figure ../../figures/d-cliques-mnist-1000-nodes-comparison-super-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted'
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-mnist-1000-nodes-comparison-super-linear}
\caption{\label{fig:d-cliques-mnist-1000-nodes-comparison-super-linear} MNIST with Linear Model: Superlinear Inter-clique Topologies.}
\end{subfigure}
\hfill
% To regenerate the figure, from directory results/cifar10
% python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET ../scaling/1000/cifar10/fully-connected-cliques/all/2021-03-14-17:41:20-CET ../scaling/1000/cifar10/smallworld-logn-cliques/all/2021-03-23-22:13:57-CET --add-min-max --yaxis test-accuracy --labels '1-node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-super-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted'
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-super-linear}
\caption{\label{fig:d-cliques-cifar10-1000-vs-1-node-test-accuracy-super-linear} CIFAR10 with LeNet Model: Superlinear Inter-clique Topologies.}
\end{subfigure}
\caption{\label{fig:d-cliques-cifar10-convolutional} D-Cliques Convergence Speed with 1000 nodes, non-IID, Constant Updates per Epoch, with Different Inter-Clique Topologies.}
\end{figure*}
\subsection{Cost of Constructing Cliques}
\label{section:cost-cliques}
\dots \todo{EL: Add plots showing convergence speed in terms of skew vs iteration number, as well as absolute computation time}
\section{Related Work}
\label{section:related-work}
In this section, we review some related work on dealing with non-IID data in
federated learning, and on the role of topology in fully decentralized
algorithms.
\paragraph{Dealing with non-IID data in server-based FL.}
Non-IID data is not much of an issue in server-based FL if
clients send their parameters to the server after each gradient update.
Problems arise when one seeks to reduce
the number of communication rounds by allowing each participant to perform
multiple local updates, as in the popular FedAvg algorithm
\cite{mcmahan2016communication}. Indeed, non-IID data can prevent
such algorithms from
converging to a good solution \cite{quagmire,scaffold}. This led to the design
of algorithms that are specifically designed to mitigate the impact
of non-IID data while performing
multiple local updates, using adaptive client sampling \cite{quagmire}, update
corrections \cite{scaffold} or regularization in the local objective
\cite{fedprox}. Another direction is to embrace the non-IID scenario by
learning personalized models for each client
\cite{smith2017federated,perso_fl_mean,maml,moreau}.
We note that recent work explores rings of server-based topologies
\cite{tornado}, but the focus is not on dealing with non-IID data but
on making server-based FL more scalable to a large number of clients.
\paragraph{Dealing with non-IID data in fully decentralized FL.}
Non-IID data is known to negatively impact the convergence speed
of fully decentralized FL algorithms in practice \cite{jelasity}. Aside from approaches that aim to learn personalized models \cite{Vanhaesebrouck2017a,Zantedeschi2020a}, this
motivated the design of algorithms with modified updates based on variance
reduction \cite{tang18a}, momentum correction \cite{momentum_noniid},
cross-gradient
aggregation \cite{cross_gradient}, or multiple averaging steps
between updates (see \cite{consensus_distance} and references therein). These
algorithms
typically require significantly more communication and/or computation, and
have only been evaluated on small-scale networks with a few tens of
nodes.\footnote{We
also observed that \cite{tang18a} is subject to numerical
instabilities when run on topologies other than rings. When
the rows and columns of $W$ do not exactly
sum to $1$ (due to finite precision), these small differences get amplified by
the proposed updates and make the algorithm diverge.}
In contrast, D-Cliques focuses on the design of a sparse topology which is
able to compensate for the effect of non-IID data and scales to large
networks. We do not modify the simple
and efficient D-SGD
algorithm \cite{lian2017d-psgd} beyond removing some neighbor
contributions
that otherwise bias the gradient direction.
\paragraph{Impact of topology in fully decentralized FL.} It is well
known
that the choice of network topology can affect the
convergence of fully decentralized algorithms. In theoretical convergence
rates, this is typically accounted
for by a dependence on the spectral gap of
the network, see for instance
\cite{Duchi2012a,Colin2016a,lian2017d-psgd,Nedic18}.
However, for IID data, practice contradicts these classic
results as fully decentralized algorithms have been observed to converge
essentially as fast
on sparse topologies like rings or grids as they do on a fully connected