\documentclass[runningheads]{llncs}
%
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{xcolor}
\usepackage{soul}
\usepackage{hyperref}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usepackage{dsfont}
\usepackage{caption}
\usepackage{subcaption}
\input{macros}
% Used for displaying a sample figure. If possible, figure files should
% be included in EPS format.
%
% If you use the hyperref package, please uncomment the following line
% to display URLs in blue roman font according to Springer's eBook style:
% \renewcommand\UrlFont{\color{blue}\rmfamily}
\begin{document}
\title{D-Cliques: Compensating Non-IIDness in Decentralized Federated Learning with Topology}
%
\titlerunning{D-Cliques}
% If the paper title is too long for the running head, you can set
% an abbreviated paper title here
%
\author{Aur\'elien Bellet\inst{1} \and Anne-Marie Kermarrec\inst{2} \and Erick Lavoie\inst{2}\thanks{Authors in alphabetical order of last names.}}
%%
\authorrunning{A. Bellet, A-M. Kermarrec, E. Lavoie}
%% First names are abbreviated in the running head.
%% If there are more than two authors, 'et al.' is used.
%%
\institute{Inria, Lille, France\\ \email{aurelien.bellet@inria.fr} \and EPFL, Lausanne, Switzerland \\ \email{\{anne-marie.kermarrec,erick.lavoie\}@epfl.ch}\\ }
%
\maketitle              % typeset the header of the contribution
%
\begin{abstract}
The convergence speed of machine learning models trained with Federated Learning is significantly affected by non-independent and identically distributed (non-IID) data partitions, even more so in a fully decentralized setting without a central server. In this paper, we show that the impact of \textit{local class bias}, an important type of data non-IIDness, can be significantly reduced by carefully designing the underlying communication topology. We present D-Cliques, a novel topology that reduces gradient bias by grouping nodes in interconnected cliques such that the local joint distribution in a clique is representative of the global class distribution. We also show how to adapt the updates of decentralized SGD to obtain unbiased gradients and implement an effective momentum with D-Cliques. Our empirical evaluation on MNIST and CIFAR10 demonstrates that our approach provides a convergence speed similar to that of a fully-connected topology with a significant reduction in the number of edges and messages. In a 1000-node topology, D-Cliques requires 98\% fewer edges and 96\% fewer total messages, with further possible gains using a small-world topology across cliques.
\keywords{Decentralized Learning \and Federated Learning \and Topology \and Non-IID Data \and Stochastic Gradient Descent}
\end{abstract}
%
%
%
\section{Introduction}
Machine learning is currently shifting from a \emph{centralized} paradigm, in which models are trained on data located on a single machine or in a data center, to \emph{decentralized} ones. Indeed, the latter paradigm closely matches the natural data distribution in the numerous use-cases where data is collected and processed by several independent parties (hospitals, companies, personal devices...). Federated Learning (FL) allows a set of participants to collaboratively train machine learning models on their joint data while keeping it where it has been produced. Not only does this avoid the costs of moving data, but it also mitigates privacy and confidentiality concerns~\cite{kairouz2019advances}.
Yet, working with natural data distributions introduces new challenges for learning systems, as local datasets reflect the usage and production patterns specific to each participant: they are \emph{not} independent and identically distributed (non-IID). More specifically, the relative frequency of different classes of examples may significantly vary across local datasets \cite{kairouz2019advances,quagmire}. Therefore, one of the key challenges in FL is to design algorithms that can efficiently deal with such non-IID data distributions \cite{kairouz2019advances,fedprox,scaffold,quagmire}.

Federated learning algorithms can be classified into two categories depending on the underlying network topology they run on. In server-based FL, the network is organized according to a star topology: a central server orchestrates the training process by iteratively aggregating model updates received from the participants (\emph{clients}) and sending back the aggregated model \cite{mcmahan2016communication}. In contrast, fully decentralized FL algorithms operate over an arbitrary network topology where participants communicate only with their direct neighbors in the network. A classic example of such algorithms is Decentralized SGD (D-SGD) \cite{lian2017d-psgd}, in which participants alternate between local SGD updates and model averaging with neighboring nodes.

In this paper, we focus on fully decentralized algorithms as they can generally scale better to the large number of participants seen in ``cross-device'' applications \cite{kairouz2019advances}. Indeed, while a central server may quickly become a bottleneck as the number of participants increases, the topology used in fully decentralized algorithms can remain sparse enough that each participant only needs to communicate with a small number of other participants, i.e. nodes have small (constant or logarithmic) degree \cite{lian2017d-psgd}. For IID data, recent work has shown both empirically \cite{lian2017d-psgd,Lian2018} and theoretically \cite{neglia2020} that sparse topologies like rings or grids do not significantly affect the convergence speed compared to using denser topologies. In contrast to the IID case, however, our experiments demonstrate that \emph{the impact of topology is extremely significant for non-IID data}. This phenomenon is illustrated in Figure~\ref{fig:iid-vs-non-iid-problem}: we observe that a ring or a grid topology clearly jeopardizes the convergence speed when the relative class frequencies of local distributions differ from those of the global distribution, i.e. when nodes exhibit \textit{local class bias}. We stress the fact that, unlike in centralized FL \cite{kairouz2019advances,scaffold,quagmire}, this happens even when nodes perform a single local update before averaging the model with their neighbors.
In this paper, we address the following question: \textit{Can we design sparse topologies whose convergence speed is similar to that of a fully connected network, for a large number of participants exhibiting local class bias?}

\begin{figure}[t]
     \centering
     % From directory results/mnist
     % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py ring/iid/all/2021-03-30-16:07:06-CEST ring/non-iid/all/2021-03-30-16:07:03-CEST --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/ring-IID-vs-non-IID.png --font-size 20 --linestyles 'solid' 'dashed'
     \begin{subfigure}[b]{0.31\textwidth}
         \centering
         \includegraphics[width=\textwidth]{figures/ring-IID-vs-non-IID}
         \caption{\label{fig:ring-IID-vs-non-IID} Ring}
     \end{subfigure}
     \quad
     % From directory results/mnist
     % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py grid/iid/all/2021-03-30-16:07:01-CEST grid/non-iid/all/2021-03-30-16:06:59-CEST --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/grid-IID-vs-non-IID.png --font-size 20 --linestyles 'solid' 'dashed'
     \begin{subfigure}[b]{0.31\textwidth}
         \centering
         \includegraphics[width=\textwidth]{figures/grid-IID-vs-non-IID}
         \caption{\label{fig:grid-IID-vs-non-IID} Grid}
     \end{subfigure}
     \quad
     % From directory results/mnist
     % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected/iid/all/2021-03-30-16:07:20-CEST fully-connected/all/2021-03-10-09:25:19-CET --add-min-max --legend 'lower right' --yaxis test-accuracy --labels '100 nodes IID' '100 nodes non-IID' --save-figure ../../figures/fully-connected-IID-vs-non-IID.png --font-size 20 --linestyles 'solid' 'dashed'
     \begin{subfigure}[b]{0.31\textwidth}
         \centering
         \includegraphics[width=\textwidth]{figures/fully-connected-IID-vs-non-IID}
         \caption{\label{fig:fully-connected-IID-vs-non-IID} Fully-connected}
     \end{subfigure}
\caption{IID vs non-IID convergence speed of decentralized SGD for logistic regression on MNIST for different topologies. Bold lines show the average test accuracy across nodes while thin lines show the minimum and maximum accuracy of individual nodes. While the effect of topology is negligible for IID data, it is very significant in the non-IID case. When fully-connected, both cases converge similarly. See Section~\ref{section:experimental-settings} for details on the experimental setup.}
\label{fig:iid-vs-non-iid-problem}
\end{figure}

Specifically, we make the following contributions: (1) We propose D-Cliques, a sparse topology in which nodes are organized in interconnected cliques, i.e.
locally fully-connected sets of nodes, such that the joint data distribution of each clique is representative of the global (IID) distribution; (2) We propose Clique Averaging, a modified version of the standard D-SGD algorithm which decouples gradient averaging, used for optimizing local models, from distributed averaging, used to ensure all models converge, therefore reducing the bias introduced by inter-clique connections; (3) We show how Clique Averaging can be used to implement unbiased momentum that would otherwise be detrimental in the non-IID setting; (4) We demonstrate through an extensive experimental study that our approach removes the effect of local class bias on the MNIST~\cite{mnistWebsite} and CIFAR10~\cite{krizhevsky2009learning} datasets, for training a linear model and a deep convolutional network; (5) Finally, we demonstrate the scalability of our approach by considering up to 1000-node networks, in contrast to most previous work on fully decentralized learning that considers only a few tens of nodes \cite{tang18a,neglia2020,momentum_noniid,cross_gradient,consensus_distance}. For instance, our results show that using D-Cliques in a 1000-node network requires 98\% fewer edges ($18.9$ vs $999$ edges per participant on average), thereby yielding a 96\% reduction in the total number of required messages (37.8 messages per round per node on average instead of 999), while obtaining a convergence speed similar to that of a fully-connected topology. Furthermore, an additional 22\% improvement
% (14.5 edges per node on average instead of 18.9)
is possible when using a small-world inter-clique topology, with further potential gains at larger scales because of its quasilinear scaling ($O(n \log(n))$) in $n$, the number of nodes.

The rest of this paper is organized as follows. We first present the problem statement and our methodology (Section~\ref{section:problem}). The D-Cliques design is presented in Section~\ref{section:d-cliques} along with an empirical illustration of its benefits. In Section~\ref{section:clique-averaging-momentum}, we show how to further reduce bias with Clique Averaging and how to use it to implement momentum. We present the results of our extensive experimental study in Section~\ref{section:non-clustered}. We review some related work in Section~\ref{section:related-work}, and conclude with promising directions for future work in Section~\ref{section:conclusion}.

\section{Problem Statement}
\label{section:problem}

We consider a set $N = \{1, \dots, n \}$ of $n$ nodes seeking to collaboratively solve a classification task with $c$ classes. Each node has access to a local dataset that follows its own local distribution $D_i$. The goal is to find a global model $x$ that performs well on the union of the local distributions by minimizing the average training loss:
\begin{equation}
\min_{x} \frac{1}{n}\sum_{i=1}^{n} \mathds{E}_{s_i \sim D_i} [F_i(x;s_i)],
\label{eq:dist-optimization-problem}
\end{equation}
where $s_i$ is a data example drawn from $D_i$ and $F_i$ is the loss function on node $i$. Therefore, $\mathds{E}_{s_i \sim D_i} F_i(x;s_i)$ denotes the expected loss of model $x$ on a random example $s_i$ drawn from $D_i$. To collaboratively solve Problem \eqref{eq:dist-optimization-problem}, each node can exchange messages with its neighbors in an undirected network graph $G(N,E)$ where $\{i,j\}\in E$ denotes an edge (communication channel) between nodes $i$ and $j$.
\subsection{Training Algorithm} In this work, we use the popular Decentralized Stochastic Gradient Descent algorithm, aka D-SGD~\cite{lian2017d-psgd}. As shown in Algorithm~\ref{Algorithm:D-PSGD}, a single iteration of D-SGD at node $i$ consists of sampling a mini-batch from its local distribution $D_i$, updating its local model $x_i$ by taking a stochastic gradient descent (SGD) step according to the mini-batch, and performing a weighted average of its local model with those of its neighbors. This weighted average is defined by a mixing matrix $W$, in which $W_{ij}$ corresponds to the weight of the outgoing connection from node $i$ to $j$ and $W_{ij} = 0$ for $ \{i,j\}\notin E$. To ensure that the local models converge on average to a stationary point of Problem \eqref{eq:dist-optimization-problem}, $W$ must be doubly stochastic ($\sum_{j \in N} W_{ij} = 1$ and $\sum_{j \in N} W_{ji} = 1$) and symmetric, i.e. $W_{ij} = W_{ji}$~\cite{lian2017d-psgd}. \begin{algorithm}[t] \caption{D-SGD, Node $i$} \label{Algorithm:D-PSGD} \begin{algorithmic}[1] \State \textbf{Require:} initial model parameters $x_i^{(0)}$, learning rate $\gamma$, mixing weights $W$, mini-batch size $m$, number of steps $K$ \For{$k = 1,\ldots, K$} \State $s_i^{(k)} \gets \text{mini-batch sample of size $m$ drawn from~} D_i$ \State $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma \nabla F(x_i^{(k-1)}; s_i^{(k)})$ \State $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$ \EndFor \end{algorithmic} \end{algorithm} \subsection{Methodology} \subsubsection{Non-IID assumptions.} \label{section:non-iid-assumptions} As demonstrated in Figure~\ref{fig:iid-vs-non-iid-problem}, lifting the assumption of IID data significantly challenges the learning algorithm. In this paper, we focus on an \textit{extreme case of local class bias}: we consider that each node only has examples from a single class. To isolate the effect of local class bias from other potentially compounding factors, we make the following simplifying assumptions: (1) All classes are equally represented in the global dataset; (2) All classes are represented on the same number of nodes; (3) All nodes have the same number of examples. We believe that these assumptions are reasonable in the context of our study because: (1) Global class imbalance equally affects the optimization process on a single node and is therefore not specific to the decentralized setting; (2) Our results do not exploit specific positions in the topology; (3) Imbalanced dataset sizes across nodes can be addressed for instance by appropriately weighting the individual loss functions. Our results can be extended to support additional compounding factors in future work. \subsubsection{Experimental setup.} \label{section:experimental-settings} Our main goal is to provide a fair comparison of the convergence speed across different topologies and algorithmic variations, in order to show that our approach can remove much of the effect of local class bias. We experiment with two datasets: MNIST~\cite{mnistWebsite} and CIFAR10~\cite{krizhevsky2009learning}, which both have $c=10$ classes. For MNIST, we use 45k and 10k examples from the original 60k training set for training and validation respectively. The remaining 5k training examples were randomly removed to ensure all 10 classes are balanced while ensuring that the dataset is evenly divisible across 100 and 1000 nodes. We use all 10k examples of the test set to measure prediction accuracy. 
For CIFAR10, classes are evenly balanced: we use 45k/50k images of the original training set for training, 5k/50k for validation, and all 10k examples of the test set for measuring prediction accuracy. We use a logistic regression classifier for MNIST, which provides up to 92.5\% accuracy in the centralized setting. For CIFAR10, we use a Group-Normalized variant of LeNet~\cite{quagmire}, a deep convolutional network which achieves an accuracy of $72.3\%$ in the centralized setting. These models are thus reasonably accurate (which is sufficient to study the effect of the topology) while being sufficiently fast to train in a fully decentralized setting and simple enough to configure and analyze. Regarding hyper-parameters, we jointly optimize the learning rate and mini-batch size on the validation set for 100 nodes, obtaining respectively $0.1$ and $128$ for MNIST and $0.002$ and $20$ for CIFAR10. For CIFAR10, we additionally use a momentum of $0.9$.

We evaluate 100- and 1000-node networks by creating multiple models in memory and simulating the exchange of messages between nodes. To abstract away the impact of distributed execution strategies and system optimization techniques, we report the test accuracy of all nodes (min, max, average) as a function of the number of times each example of the dataset has been sampled by a node, i.e. an \textit{epoch}. This is equivalent to the classic case of a single node sampling the full distribution. To further make results comparable across different numbers of nodes, we scale the mini-batch size inversely with the number of nodes, e.g. on MNIST, 128 with 100 nodes vs. 13 with 1000 nodes. This ensures the same number of model updates and averaging steps per epoch, which is important for a fair comparison.\footnote{Updating and averaging models after every example can eliminate the impact of local class bias. However, the resulting communication overhead is impractical.} Finally, we compare our results against an ideal baseline: either a fully-connected network topology with the same number of nodes or a single IID node. In both cases, the topology has no effect on the optimization. For a given number of nodes and mini-batch size, both baselines are equivalent.

\section{D-Cliques: Creating Locally Representative Cliques}
\label{section:d-cliques}

In this section, we present the design of D-Cliques. To give an intuition of our approach, let us consider the neighborhood of a single node in a grid similar to that of Figure~\ref{fig:grid-IID-vs-non-IID}, represented in Figure~\ref{fig:grid-iid-vs-non-iid-neighbourhood}. The colors of a node represent the different classes present in its local dataset. In the IID setting (Figure~\ref{fig:grid-iid-neighbourhood}), each node has examples of all classes in equal proportions. In the non-IID setting (Figure~\ref{fig:grid-non-iid-neighbourhood}), each node has examples of only a single class and nodes are distributed randomly in the grid. A single training step, from the point of view of the center node, is equivalent to sampling a mini-batch five times larger from the union of the local distributions of all illustrated nodes. In the IID case, since gradients are computed from examples of all classes, the resulting averaged gradient points in a direction that tends to reduce the loss across all classes. In contrast, in the non-IID case, only a subset of classes is represented in the immediate neighborhood of the node, thus the gradients will be biased towards these classes.
Importantly, as the distributed averaging algorithm takes several steps to converge, this variance persists across iterations as the locally computed gradients are far from the global average.\footnote{It is possible, but very costly, to mitigate this by performing a sufficiently large number of averaging steps between each gradient step.} This can significantly slow down convergence, to the point of making decentralized optimization impractical.

\begin{figure}[t]
     \centering
     \begin{subfigure}[b]{0.25\textwidth}
         \centering
         \includegraphics[width=\textwidth]{figures/grid-iid-neighbourhood}
         \caption{\label{fig:grid-iid-neighbourhood} IID}
     \end{subfigure}
     \begin{subfigure}[b]{0.25\textwidth}
         \centering
         \includegraphics[width=\textwidth]{figures/grid-non-iid-neighbourhood}
         \caption{\label{fig:grid-non-iid-neighbourhood} Non-IID}
     \end{subfigure}
\caption{Neighborhood in an IID and non-IID grid.}
\label{fig:grid-iid-vs-non-iid-neighbourhood}
\end{figure}

In D-Cliques, we address the issue of non-IIDness by carefully designing a network topology composed of \textit{cliques} and \textit{inter-clique connections}:
\begin{itemize}
\item D-Cliques recovers a balanced representation of classes, similar to that of the IID case, by constructing a topology such that each node is part of a \textit{clique} with neighbors representing all classes.
\item To ensure a global consensus and convergence, \textit{inter-clique connections} are introduced by connecting a small number of node pairs that are part of different cliques.
\end{itemize}
In the following, we introduce up to one inter-clique connection per node such that each clique has exactly one edge with every other clique; see Figure~\ref{fig:d-cliques-figure} for the corresponding D-Cliques network in the case of $n=100$ nodes and $c=10$ classes. We will explore sparser inter-clique topologies in Section~\ref{section:interclique-topologies}. The mixing matrix $W$ required by D-SGD is obtained from standard Metropolis-Hastings weights~\cite{xiao2004fast} computed from the above topology, namely:
\begin{equation}
  W_{ij} = \begin{cases}
    \frac{1}{\max(\text{degree}(i), \text{degree}(j)) + 1} & \text{if}~i \neq j \text{ and } \{i,j\}\in E,\\
    1 - \sum_{j \neq i} W_{ij} & \text{if}~i = j, \\
    0 & \text{otherwise}.
  \end{cases}
  \label{eq:metro}
\end{equation}
We refer to Algorithm~\ref{Algorithm:D-Clique-Construction} in the appendix for a formal account of D-Cliques construction. We note that it only requires knowledge of the local class distribution at each node. For the sake of simplicity, we assume that D-Cliques is constructed from the global knowledge of these distributions, which can easily be obtained by decentralized averaging in a pre-processing step.

The key idea of D-Cliques is that because the clique-level distribution $D_{\textit{clique}} = \sum_{i \in \textit{clique}} D_i$ is representative of the global distribution, the local models of nodes across cliques remain rather close. Therefore, a sparse inter-clique topology can be used, significantly reducing the total number of edges without slowing down the convergence. Furthermore, the degree of each node in the network remains low and balanced across nodes, making the D-Cliques topology very well-suited to decentralized federated learning.
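To make the weight computation concrete, the following sketch (illustrative only; the function name \texttt{metropolis\_weights} and the edge-list input format are our own assumptions, not part of the paper's implementation) builds $W$ from an undirected edge set according to Eq.~\eqref{eq:metro}:
\begin{verbatim}
import numpy as np

def metropolis_weights(edges, n):
    # Mixing matrix W of Eq. (2) for undirected edges over nodes 0..n-1.
    degree = [0] * n
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0 / (max(degree[i], degree[j]) + 1)
    for i in range(n):
        W[i, i] = 1.0 - W[i].sum()  # self-weight; rows and columns sum to 1
    return W
\end{verbatim}
The resulting matrix is symmetric and doubly stochastic by construction, as required for the convergence of D-SGD.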
\begin{figure}[t]
     \centering
     \begin{subfigure}[b]{0.45\textwidth}
         \centering
         \includegraphics[width=\textwidth]{figures/fully-connected-cliques}
         \caption{\label{fig:d-cliques-figure} D-Cliques (fully-connected cliques)}
     \end{subfigure}
     \hfill
     % To regenerate figure, from results/mnist
     % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected/all/2021-03-10-09:25:19-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET --add-min-max --yaxis test-accuracy --ymin 80 --ymax 92.5 --labels '100 nodes non-IID fully-connected' '100 nodes non-IID d-cliques' --save-figure ../../figures/d-cliques-mnist-vs-fully-connected.png --legend 'lower right' --font-size 16 --linestyles 'solid' 'dashed'
     \begin{subfigure}[b]{0.54\textwidth}
         \centering
         \includegraphics[width=\textwidth]{figures/d-cliques-mnist-vs-fully-connected.png}
         \caption{\label{fig:d-cliques-example-convergence-speed} Convergence Speed on MNIST}
     \end{subfigure}
\caption{\label{fig:d-cliques-example} D-Cliques topology and convergence speed on MNIST.}
\end{figure}

Figure~\ref{fig:d-cliques-example-convergence-speed} illustrates the performance of D-Cliques on MNIST with $n=100$ nodes. Observe that the convergence speed is very close to that of a fully-connected topology, and significantly better than with a ring or a grid (see Figure~\ref{fig:iid-vs-non-iid-problem}). With 100 nodes, it offers a reduction of $\approx90\%$ in the number of edges compared to a fully-connected topology. Nonetheless, there is still significant variance in the accuracy across nodes, which is due to the bias introduced by inter-clique edges. We address this issue in the next section.

\section{Optimizing with Clique Averaging and Momentum}
\label{section:clique-averaging-momentum}

In this section, we present Clique Averaging. This feature, when added to D-SGD, removes the bias caused by the inter-clique edges of D-Cliques. We also show how it can be used to successfully implement momentum for non-IID data.

\subsection{Clique Averaging: Debiasing Gradients from Inter-Clique Edges}
\label{section:clique-averaging}

While limiting the number of inter-clique connections reduces the number of messages traveling on the network, it also introduces its own bias. Figure~\ref{fig:connected-cliques-bias} illustrates the problem in the simple case of two cliques connected by one inter-clique edge (here, between the green node of the left clique and the pink node of the right clique). Let us focus on node A. With weights computed as in \eqref{eq:metro}, node A's self-weight is $\frac{12}{110}$, the weight between A and the green node connected to B is $\frac{10}{110}$, and all other neighbors of A have a weight of $\frac{11}{110}$. Therefore, the gradient at A is biased towards its own class (pink) and against the green class. A similar bias holds for all other nodes without inter-clique edges with respect to their respective classes. For node B, all its edge weights (including its self-weight) are equal to $\frac{1}{11}$. However, the green class is represented twice (once as a clique neighbor and once from the inter-clique edge), while all other classes are represented only once. This biases the gradient toward the green class. The combined effect of these two sources of bias is to increase the variance of the local models across nodes.
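For concreteness, these weights follow directly from Eq.~\eqref{eq:metro}: with cliques of $10$ nodes, node A has degree $9$, while node B and the green node of the left clique each have degree $10$ due to their inter-clique edge. Hence
\begin{equation*}
W_{A,\textit{green}} = \frac{1}{\max(9,10)+1} = \frac{10}{110}, \qquad W_{A,j} = \frac{1}{\max(9,9)+1} = \frac{11}{110}
\end{equation*}
for each of the $8$ other neighbors $j$ of A, and therefore $W_{A,A} = 1 - \frac{8 \cdot 11 + 10}{110} = \frac{12}{110}$. A similar computation for node B, all of whose $10$ neighbors have degree $9$ or $10$, gives $\frac{1}{11}$ for every weight, including its self-weight.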
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{figures/connected-cliques-bias} \caption{\label{fig:connected-cliques-bias} Illustrating the bias induced by inter-clique connections (see main text).} \end{figure} We address this problem by adding \emph{Clique Averaging} to D-SGD (Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}), which essentially decouples gradient averaging from model averaging. The idea is to use only the gradients of neighbors within the same clique to compute the average gradient, providing an equal representation to all classes. In contrast, all neighbors' models, including those across inter-clique edges, participate in the model averaging step as in the original version. \begin{algorithm}[t] \caption{D-SGD with Clique Averaging, Node $i$} \label{Algorithm:Clique-Unbiased-D-PSGD} \begin{algorithmic}[1] \State \textbf{Require} initial model parameters $x_i^{(0)}$, learning rate $\gamma$, mixing weights $W$, mini-batch size $m$, number of steps $K$ \For{$k = 1,\ldots, K$} \State $s_i^{(k)} \gets \text{mini-batch sample of size $m$ drawn from~} D_i$ \State $g_i^{(k)} \gets \frac{1}{|\textit{Clique}(i)|}\sum_{j \in \textit{Clique(i)}} \nabla F(x_j^{(k-1)}; s_j^{(k)})$ \State $x_i^{(k-\frac{1}{2})} \gets x_i^{(k-1)} - \gamma g_i^{(k)}$ \State $x_i^{(k)} \gets \sum_{j \in N} W_{ji}^{(k)} x_j^{(k-\frac{1}{2})}$ \EndFor \end{algorithmic} \end{algorithm} % To regenerate figure, from results/mnist: % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected/all/2021-03-10-09:25:19-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET --add-min-max --yaxis test-accuracy --labels '100 nodes non-IID fully-connected' '100 nodes non-IID d-cliques w/o clique avg.' '100 nodes d-cliques non-IID w/ clique avg.' --legend 'lower right' --ymin 89 --ymax 92.5 --font-size 13 --save-figure ../../figures/d-clique-mnist-clique-avg.png --linestyles 'solid' 'dashed' 'dotted' \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{figures/d-clique-mnist-clique-avg} \caption{\label{fig:d-clique-mnist-clique-avg} Effect of Clique Averaging on MNIST. Y-axis starts at 89.} \end{figure} As illustrated in Figure~\ref{fig:d-clique-mnist-clique-avg}, this significantly reduces the variance of models across nodes and accelerates convergence to reach the same level as the one obtained with a fully-connected topology. Note that Clique Averaging induces a small additional cost, as gradients and models need to be sent in two separate rounds of messages. Nonetheless, compared to fully connecting all nodes, the total number of messages is reduced by $\approx 80\%$. \subsection{Implementing Momentum with Clique Averaging} \label{section:momentum} Efficiently training high capacity models usually requires additional optimization techniques. In particular, momentum~\cite{pmlr-v28-sutskever13} increases the magnitude of the components of the gradient that are shared between several consecutive steps, and is critical for deep convolutional networks like LeNet~\cite{lecun1998gradient,quagmire} to converge quickly. However, a direct application of momentum in a non-IID setting can actually be very detrimental. As illustrated in Figure~\ref{fig:d-cliques-cifar10-momentum-non-iid-effect} for the case of LeNet on CIFAR10 with 100 nodes, D-Cliques with momentum even fails to converge. 
Not using momentum actually gives a faster convergence, but there is a significant gap compared to the case of a single IID node with momentum. \begin{figure}[t] \centering % To regenerate figure, from results/cifar10 % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET no-init-no-clique-avg-no-momentum/fully-connected-cliques/all/2021-03-26-13:47:35-CET/ --legend 'upper right' --add-min-max --labels '1-node IID w/ momentum' '100 nodes non-IID d-cliques w/ momentum' '100 nodes non-IID d-cliques w/o momentum' --font-size 14 --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-momentum-non-iid-effect.png --ymax 100 --linestyles 'solid' 'dashed' 'dotted' \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-momentum-non-iid-effect} \caption{\label{fig:d-cliques-cifar10-momentum-non-iid-effect} Without Clique Averaging } \end{subfigure} \hfill % To regenerate figure, from results/cifar10 % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET --legend 'upper right' --add-min-max --labels '1-node IID w/ momentum' '100 nodes non-IID d-clique w/ momentum' --font-size 14 --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-momentum-non-iid-clique-avg-effect.png --ymax 100 --linestyles 'solid' 'dashed' 'dotted' \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-momentum-non-iid-clique-avg-effect} \caption{\label{fig:d-cliques-cifar10-momentum-non-iid-clique-avg-effect} With Clique Averaging} \end{subfigure} \caption{\label{fig:cifar10-momentum} Non-IID Effect of Momentum on CIFAR10 with LeNet} \end{figure} We show here that Clique Averaging (Section~\ref{section:clique-averaging}) allows us to compute an unbiased momentum from the unbiased average gradient $g_i^{(k)}$ of Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}: \begin{equation} v_i^{(k)} \leftarrow m v_i^{(k-1)} + g_i^{(k)} \end{equation} It then suffices to modify the original gradient step to use momentum: \begin{equation} x_i^{(k-\frac{1}{2})} \leftarrow x_i^{(k-1)} - \gamma v_i^{(k)} \end{equation} As shown in Figure~\ref{fig:d-cliques-cifar10-momentum-non-iid-clique-avg-effect}, this simple modification restores the benefits of momentum and closes the gap with the centralized setting. \section{Comparative Evaluation and Extensions} \label{section:non-clustered} In this section, we first compare D-Cliques to alternative topologies to confirm the relevance of our main design choices. Then, we evaluate some extensions of D-Cliques to further reduce the number of inter-clique connections so as to gracefully scale with the number of nodes. \subsection{Comparing D-Cliques to Other Sparse Topologies} We demonstrate the advantages of D-Cliques over alternative sparse topologies that have a similar number of edges. First, we consider topologies in which the neighbors of each node are selected at random (hence without any clique structure). Specifically, for $n=100$ nodes, we construct a random topology such that each node has exactly 10 edges, which is similar to the average 9.9 edges of our D-Cliques topology (Figure~\ref{fig:d-cliques-figure}). 
To better understand the role of the clique structure beyond merely ensuring class representativity among neighbors, we also compare to a random topology similar to the one described above except that edges are chosen such that each node has neighbors of all possible classes. Finally, we also implement an analog of Clique Averaging for these random topologies, where all nodes de-bias their gradient based on the class distribution of their neighbors. In the latter case, since nodes do not form a clique, each node obtains a different average gradient. The results for MNIST and CIFAR10 are shown in Figure~\ref{fig:d-cliques-comparison-to-non-clustered-topologies}. For MNIST, a purely random topology has higher variance and lower convergence speed than D-Cliques (with or without Clique Averaging), while a random topology with class representativity performs similarly as D-Cliques without Clique Averaging. However and perhaps surprisingly, a random topology with unbiased gradient performs slightly worse than without it. In any case, D-Cliques with Clique Averaging outperforms all random topologies, showing that the clique structure has a small but noticeable effect on the average accuracy and significantly reduces the variance across nodes in this setup. \begin{figure}[t] \centering \begin{subfigure}[b]{0.48\textwidth} % To regenerate the figure, from directory results/mnist % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-10:19:44-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET random-10/all/2021-07-23-11:59:56-CEST random-10-diverse/all/2021-03-17-20:28:35-CET --labels 'd-clique (fcc)' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes represented)' --add-min-max --legend 'lower right' --ymin 80 --ymax 92.5 --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted' 'dashdot' \centering \includegraphics[width=\textwidth]{figures/d-cliques-mnist-linear-comparison-to-non-clustered-topologies} \caption{MNIST with Linear Model} \end{subfigure} \hfill % To regenerate the figure, from directory results/cifar10 % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET random-10/all/2021-07-23-14:33:48-CEST random-10-diverse/all/2021-03-17-20:30:41-CET random-10-diverse-unbiased-gradient/all/2021-03-17-20:31:14-CET --labels 'd-clique (fcc) clique avg.' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes repr.)' '10 random (all classes repr.) with unbiased grad.' 
--add-min-max --legend 'upper left' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies.png --ymax 119 --font-size 13 --linestyles 'solid' 'dashed' 'dotted' 'dashdot' 'solid' --markers '' '' '' '' 'o' \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies} \caption{CIFAR10 with LeNet} \end{subfigure} \caption{\label{fig:d-cliques-comparison-to-non-clustered-topologies} Comparison to Non-Clustered Topologies} \end{figure} On the harder CIFAR10 dataset with a deep convolutional network, the differences are much more dramatic: D-Cliques with Clique Averaging and momentum turns out to be critical for fast convergence. Crucially, all random topologies fail to converge to a good solution. This confirms that our clique structure is important to reduce variance across nodes and improve the convergence. The difference with the previous experiment seems to be due to both the use of a higher capacity model and to the intrinsic characteristics of the datasets. While the previous experiments suggest that our clique structure is instrumental in obtaining good performance, one may wonder whether intra-clique full connectivity is actually necessary. Figure~\ref{fig:d-cliques-intra-connectivity} shows the convergence speed of a D-Cliques topology where cliques have been sparsified by randomly removing 1 or 5 undirected edges per clique (out of 45). Strikingly, both for MNIST and CIFAR10, removing just a single edge from the cliques has a significant effect on the convergence speed. On CIFAR10, it even entirely negates the benefits of D-Cliques. Overall, these results show that achieving fast convergence on non-IID data with sparse topologies requires a very careful design, as we have proposed with D-Cliques. \begin{figure}[t] \centering \begin{subfigure}[htbp]{0.48\textwidth} \centering % To regenerate the figure, from directory results/mnist % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET rm-1-edge/all/2021-03-18-17:28:27-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:28:47-CET --add-min-max --ymin 85 --ymax 92.5 --legend 'lower right' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -1 edge/clique, no clique grad.' 'fcc -1 edge/clique, clique grad.' --save-figure ../../figures/d-cliques-mnist-clique-clustering-fcc-minus-1-edge.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted' \includegraphics[width=\textwidth]{figures/d-cliques-mnist-clique-clustering-fcc-minus-1-edge} \caption{\label{fig:d-cliques-mnist-clique-clustering-minus-1-edge} MNIST (-1 edge/clique)} \end{subfigure} \hfill \begin{subfigure}[htbp]{0.48\textwidth} \centering % To regenerate the figure, from directory results/cifar10 % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET rm-1-edge/all/2021-03-18-17:29:58-CET rm-1-edge-unbiased-grad/all/2021-03-18-17:30:17-CET --add-min-max --ymax 80 --legend 'upper left' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -1 edge/clique, no clique grad.' 'fcc -1 edge/clique, clique grad.' 
--save-figure ../../figures/d-cliques-cifar10-clique-clustering-fcc-minus-1-edge.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted' \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-clique-clustering-fcc-minus-1-edge} \caption{\label{fig:d-cliques-cifar10-clique-clustering-minus-1-edge} CIFAR10 (-1 edge/clique)} \end{subfigure} \begin{subfigure}[htbp]{0.48\textwidth} \centering % To regenerate the figure, from directory results/mnist % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET rm-5-edges/all/2021-03-18-17:29:10-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:29:36-CET --add-min-max --ymin 85 --ymax 92.5 --legend 'lower right' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -5 edges/clique, no clique grad.' 'fcc -5 edges/clique, clique grad.' --save-figure ../../figures/d-cliques-mnist-clique-clustering-fcc-minus-5-edges.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted' \includegraphics[width=\textwidth]{figures/d-cliques-mnist-clique-clustering-fcc-minus-5-edges} \caption{\label{fig:d-cliques-mnist-clique-clustering-minus-5-edges} MNIST (-5 edges/clique)} \end{subfigure} \hfill \begin{subfigure}[htbp]{0.48\textwidth} \centering % To regenerate the figure, from directory results/cifar10 % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET rm-5-edges/all/2021-03-18-17:30:38-CET rm-5-edges-unbiased-grad/all/2021-03-18-17:31:04-CET --add-min-max --ymax 80 --legend 'upper left' --yaxis test-accuracy --labels 'fcc, clique grad.' 'fcc -5 edges/clique, no clique grad.' 'fcc -5 edges/clique, clique grad.' --save-figure ../../figures/d-cliques-cifar10-clique-clustering-fcc-minus-5-edges.png --font-size 13 --linestyle 'solid' 'dashed' 'dotted' \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-clique-clustering-fcc-minus-5-edges} \caption{\label{fig:d-cliques-cifar10-clique-clustering-minus-5-edges} CIFAR10 (-5 edges/clique)} \end{subfigure} \caption{\label{fig:d-cliques-intra-connectivity} Importance of Intra-Clique Full-Connectivity} \end{figure} \subsection{Scaling up D-Cliques with Sparser Inter-Clique Topologies} \label{section:interclique-topologies} So far, we have used a fully-connected inter-clique topology for D-Cliques, which has the advantage of bounding the \textit{path length}\footnote{The \textit{path length} is the number of edges on the path with the shortest number of edges between two nodes.} to $3$ between any pair of nodes. This choice requires $ \frac{n}{c}(\frac{n}{c} - 1)$ inter-clique edges, which scales quadratically in the number of nodes $n$ for a given clique size $c$\footnote{We consider \textit{directed} edges in the analysis: the number of undirected edges is half and does not affect asymptotic behavior.}. This can become significant at larger scales when $n$ is large compared to $c$. In this last series of experiments, we evaluate the effect of choosing sparser inter-clique topologies on the convergence speed for a larger network of 1000 nodes. We compare the scalability and convergence speed of several D-Cliques variants, which all use $O(nc)$ edges to create cliques as a starting point. We first measure the convergence speed of inter-cliques topologies whose number of edges scales linearly with the number of nodes. 
Among those, the \textit{ring} has (almost) the fewest possible edges: it uses $\frac{2n}{c}$ inter-clique edges, but its average path length between nodes also scales linearly. We also consider another topology, which we call \textit{fractal}, that provides a logarithmic bound on the average path length. In this hierarchical scheme, cliques are assembled in larger groups of $c$ cliques that are connected internally with one edge per pair of cliques, but with only one edge between pairs of larger groups. The topology is built recursively such that $c$ groups will themselves form a larger group at the next level up. This results in at most $c$ edges per node if edges are evenly distributed: i.e., each group within the same level adds at most $c-1$ edges to other groups, leaving one node per group with $c-1$ edges that can receive an additional edge to connect with other groups at the next level. Since nodes have at most $c$ edges, $n$ nodes have at most $nc$ edges, therefore the number of edges in this fractal scheme indeed scales linearly in the number of nodes.

Second, we look at another scheme in which the number of edges scales in a near, but not quite, linear fashion. We propose to connect cliques according to a small-world-like topology~\cite{watts2000small} applied on top of a ring~\cite{stoica2003chord}. In this scheme, cliques are first arranged in a ring. Then each clique adds symmetric edges, both clockwise and counter-clockwise on the ring, to the $m$ closest cliques in sets of cliques that grow exponentially with their distance on the ring (see Algorithm~\ref{Algorithm:Smallworld} in the appendix for details on the construction). This ensures a good connectivity with other cliques that are close on the ring, while still keeping the average path length small. This scheme uses $2m\frac{n}{c}\log(\frac{n}{c})$ inter-clique edges and therefore grows as $O(n\log(n))$ in the number of nodes.

Figure~\ref{fig:d-cliques-cifar10-convolutional} shows the convergence speed of all the above schemes on MNIST and CIFAR10, compared to the ideal baseline of a single IID node performing the same number of updates per epoch (representing the fastest convergence speed achievable if topology had no impact). Among the linear schemes, the ring topology converges but is much slower than our fractal scheme. Among the super-linear schemes, the small-world topology has a convergence speed that is almost the same as with a fully-connected inter-clique topology but with 22\% fewer edges (14.5 edges on average instead of 18.9). While the small-world inter-clique topology shows promising scaling behavior, the fully-connected topology still offers significant benefits with 1000 nodes, as it represents a 98\% reduction in the number of edges compared to fully connecting individual nodes (18.9 edges on average instead of 999) and a 96\% reduction in the number of messages (37.8 messages per round per node on average instead of 999). We refer to Appendix~\ref{app:scaling} for additional results comparing the convergence speed across different numbers of nodes. Overall, these results show that D-Cliques can nicely scale with the number of nodes.
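As a quick sanity check of the above numbers, the short sketch below (illustrative only; the function name is an assumption) computes the average node degree of the fully-connected inter-clique variant, i.e. fully-connected cliques of size $c$ plus one undirected edge per pair of cliques:
\begin{verbatim}
def dcliques_avg_degree(n, c):
    # Average degree with cliques of size c and one edge per clique pair.
    n_cliques = n // c
    intra = n_cliques * c * (c - 1) // 2      # edges inside cliques
    inter = n_cliques * (n_cliques - 1) // 2  # fully-connected inter-clique
    return 2 * (intra + inter) / n

print(dcliques_avg_degree(1000, 10))  # 18.9, vs. 999 when all nodes are connected
\end{verbatim}
With Clique Averaging, each edge carries both a gradient and a model message per round, which yields the $2 \times 18.9 = 37.8$ messages per round per node reported above.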
\begin{figure}[t] \centering % To regenerate the figure, from directory results/mnist % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-09:20:03-CET ../scaling/1000/mnist/fractal-cliques/all/2021-03-14-17:41:59-CET ../scaling/1000/mnist/clique-ring/all/2021-03-13-18:22:36-CET --add-min-max --yaxis test-accuracy --legend 'lower right' --ymin 84 --ymax 92.5 --labels '1 node IID' 'd-cliques (fractal)' 'd-cliques (ring)' --save-figure ../../figures/d-cliques-mnist-1000-nodes-comparison-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted' \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-mnist-1000-nodes-comparison-linear} \caption{\label{fig:d-cliques-mnist-1000-nodes-comparison-linear} MNIST with Linear Model: Linear Inter-clique Topologies.} \end{subfigure} \hfill % To regenerate the figure, from directory results/cifar10 % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET ../scaling/1000/cifar10/fractal-cliques/all/2021-03-14-17:42:46-CET ../scaling/1000/cifar10/clique-ring/all/2021-03-14-09:55:24-CET --add-min-max --yaxis test-accuracy --labels '1-node IID' 'd-cliques (fractal)' 'd-cliques (ring)' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted' \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-linear} \caption{\label{fig:d-cliques-cifar10-1000-vs-1-node-test-accuracy-linear} CIFAR10 with LeNet Model: Linear Inter-clique Topologies.} \end{subfigure} % To regenerate the figure, from directory results/mnist % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-09:20:03-CET ../scaling/1000/mnist/fully-connected-cliques/all/2021-03-14-17:56:26-CET ../scaling/1000/mnist/smallworld-logn-cliques/all/2021-03-23-21:45:39-CET --add-min-max --yaxis test-accuracy --legend 'lower right' --ymin 84 --ymax 92.5 --labels '1 node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' --save-figure ../../figures/d-cliques-mnist-1000-nodes-comparison-super-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted' \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-mnist-1000-nodes-comparison-super-linear} \caption{\label{fig:d-cliques-mnist-1000-nodes-comparison-super-linear} MNIST with Linear Model: Superlinear Inter-clique Topologies.} \end{subfigure} \hfill % To regenerate the figure, from directory results/cifar10 % python ../../../../Software/non-iid-topology-simulator/tools/plot_convergence.py 1-node-iid/all/2021-03-10-13:52:58-CET ../scaling/1000/cifar10/fully-connected-cliques/all/2021-03-14-17:41:20-CET ../scaling/1000/cifar10/smallworld-logn-cliques/all/2021-03-23-22:13:57-CET --add-min-max --yaxis test-accuracy --labels '1-node IID' 'd-cliques (fully-connected cliques)' 'd-cliques (smallworld)' --legend 'lower right' --save-figure ../../figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-super-linear.png --font-size 13 --linestyles 'solid' 'dashed' 'dotted' \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-1000-vs-1-node-test-accuracy-super-linear} \caption{\label{fig:d-cliques-cifar10-1000-vs-1-node-test-accuracy-super-linear} 
CIFAR10 with LeNet Model: Superlinear Inter-clique Topologies.}
     \end{subfigure}
\caption{\label{fig:d-cliques-cifar10-convolutional} D-Cliques Convergence Speed with 1000 nodes, non-IID, Constant Updates per Epoch, with Different Inter-Clique Topologies.}
\end{figure}

\section{Related Work}
\label{section:related-work}

In this section, we review some related work on dealing with non-IID data in federated learning, and on the role of topology in fully decentralized algorithms.

\paragraph{Dealing with non-IID data in server-based FL.} Non-IID data is not much of an issue in server-based FL if clients send their parameters to the server after each gradient update. Problems arise when one seeks to reduce the number of communication rounds by allowing each participant to perform multiple local updates, as in the popular FedAvg algorithm \cite{mcmahan2016communication}. Indeed, non-IID data can prevent such algorithms from converging to a good solution \cite{quagmire,scaffold}. This led to the design of algorithms specifically aimed at mitigating the impact of non-IID data while performing multiple local updates, using adaptive client sampling \cite{quagmire}, update corrections \cite{scaffold} or regularization in the local objective \cite{fedprox}. Another direction is to embrace the non-IID scenario by learning personalized models for each client \cite{smith2017federated,perso_fl_mean,maml,moreau}. We note that recent work explores rings of server-based topologies \cite{tornado}, but the focus is not on dealing with non-IID data but on making server-based FL more scalable to a large number of clients.

\paragraph{Dealing with non-IID data in fully decentralized FL.} Non-IID data is known to negatively impact the convergence speed of fully decentralized FL algorithms in practice \cite{jelasity}. Aside from approaches that aim to learn personalized models \cite{Vanhaesebrouck2017a,Zantedeschi2020a}, this motivated the design of algorithms with modified updates based on variance reduction \cite{tang18a}, momentum correction \cite{momentum_noniid}, cross-gradient aggregation \cite{cross_gradient}, or multiple averaging steps between updates (see \cite{consensus_distance} and references therein). These algorithms typically require significantly more communication and/or computation, and have only been evaluated on small-scale networks with a few tens of nodes.\footnote{We also observed that \cite{tang18a} is subject to numerical instabilities when run on topologies other than rings. When the rows and columns of $W$ do not exactly sum to $1$ (due to finite precision), these small differences get amplified by the proposed updates and make the algorithm diverge.} In contrast, D-Cliques focuses on the design of a sparse topology which is able to compensate for the effect of non-IID data and scales to large networks. We do not modify the simple and efficient D-SGD algorithm \cite{lian2017d-psgd} beyond removing some neighbor contributions that otherwise bias the gradient direction.

\paragraph{Impact of topology in fully decentralized FL.} It is well known that the choice of network topology can affect the convergence of fully decentralized algorithms. In theoretical convergence rates, this is typically accounted for by a dependence on the spectral gap of the network, see for instance \cite{Duchi2012a,Colin2016a,lian2017d-psgd,Nedic18}.
However, for IID data, practice contradicts these classic results as fully decentralized algorithms have been observed to converge essentially as fast on sparse topologies like rings or grids as they do on a fully connected network \cite{lian2017d-psgd,Lian2018}. Recent work \cite{neglia2020,consensus_distance} sheds light on this phenomenon with refined convergence analyses based on differences between gradients or parameters across nodes, which are typically smaller in the IID case. However, these results do not give any clear insight regarding the role of the topology in the non-IID case. We note that some work has gone into designing efficient topologies to optimize the use of network resources (see e.g., \cite{marfoq}), but the topology is chosen independently of how data is distributed across nodes. In summary, the role of topology in the non-IID data scenario is not well understood and we are not aware of prior work focusing on this question. Our work is the first to show that an appropriate choice of data-dependent topology can effectively compensate for non-IID data. \section{Conclusion} \label{section:conclusion} We proposed D-Cliques, a sparse topology that recovers the convergence speed of a fully-connected network in the presence of local class bias. D-Cliques is based on assembling subsets of nodes into cliques such that the clique-level class distribution is representative of the global distribution, thereby locally recovering IIDness. Cliques are joined in a sparse inter-clique topology so that they quickly converge to the same model. We proposed Clique Averaging to remove the non-IID bias in gradient computation by averaging gradients only with other nodes within the clique. Clique Averaging can in turn be used to implement unbiased momentum to recover the convergence speed usually only possible with IID mini-batches. Through our experiments, we showed that the clique structure of D-Cliques is critical in obtaining these results and that a small-world inter-clique topology with only $O(n \log (n))$ edges achieves the best compromise between convergence speed and scalability with the number of nodes. D-Cliques thus appears to be very promising to reduce bandwidth usage on FL servers and to implement fully decentralized alternatives in a wider range of applications where global coordination is impossible or costly. For instance, the presence and relative frequency of classes in each node could be computed using PushSum~\cite{kempe2003gossip}, and the topology could be constructed in a decentralized and adaptive way with PeerSampling~\cite{jelasity2007gossip}. This will be investigated in future work. We also believe that our ideas can be useful to deal with more general types of data non-IIDness beyond the important case of local class bias that we studied in this paper. An important example is covariate shift or feature distribution skew \cite{kairouz2019advances}, for which local density estimates could be used as basis to construct cliques that approximately recover the global distribution. \section{Acknowledgments} \label{section:acknowledgement} This research was partially supported by French grants ANR-16-CE23-0016 (Project PAMELA) and ANR-20-CE23-0015 (Project PRIDE), and by the European Union's Horizon 2020 Research and Innovation Program under Grant Agreement No. 825081 COMPRISE. % % ---- Bibliography ---- % % BibTeX users should specify bibliography style 'splncs04'. % References will then be sorted and formatted in the correct style. 
%
\bibliographystyle{splncs04}
\bibliography{main}

\newpage

\appendix

\section{Detailed Algorithms}

We present a more detailed and precise explanation of the two main algorithms of the paper, for D-Cliques construction (Algorithm~\ref{Algorithm:D-Clique-Construction}) and to establish a small-world inter-clique topology (Algorithm~\ref{Algorithm:Smallworld}).

\subsection{D-Cliques Construction}

Algorithm~\ref{Algorithm:D-Clique-Construction} shows the overall approach for constructing a D-Cliques topology in the non-IID case.\footnote{An IID version of D-Cliques, in which each node has an equal number of examples of all classes, can be implemented by picking $\#L$ nodes per clique at random.} It expects the following inputs: $L$, the set of all classes present in the global distribution $D = \bigcup_{i \in N} D_i$; $N$, the set of all nodes; a function $\textit{classes}(S)$, which given a subset $S$ of nodes in $N$ returns the set of classes in their joint local distributions ($D_S = \bigcup_{i \in S} D_i$); a function $\textit{intraconnect}(DC)$, which given $DC$, a set of cliques (set of sets of nodes), creates a set of edges ($\{\{i,j\}, \dots \}$) connecting all nodes within each clique to one another; a function $\textit{interconnect}(DC)$, which given a set of cliques, creates a set of edges ($\{\{i,j\}, \dots \}$) connecting nodes belonging to different cliques; and a function $\textit{weights}(E)$, which given a set of edges, returns the weight matrix $W$. Algorithm~\ref{Algorithm:D-Clique-Construction} returns both $W$, for use in D-SGD (Algorithms~\ref{Algorithm:D-PSGD} and~\ref{Algorithm:Clique-Unbiased-D-PSGD}), and $DC$, for use with Clique Averaging (Algorithm~\ref{Algorithm:Clique-Unbiased-D-PSGD}).

\begin{algorithm}[h]
   \caption{D-Cliques Construction}
   \label{Algorithm:D-Clique-Construction}
   \begin{algorithmic}[1]
        \State \textbf{Require:} set of classes globally present $L$,
        \State~~ set of all nodes $N = \{ 1, 2, \dots, n \}$,
        \State~~ fn $\textit{classes}(S)$ that returns the classes present in a subset of nodes $S$,
        \State~~ fn $\textit{intraconnect}(DC)$ that returns edges intraconnecting cliques of $DC$,
        \State~~ fn $\textit{interconnect}(DC)$ that returns edges interconnecting cliques of $DC$ (Sec.~\ref{section:interclique-topologies})
        \State~~ fn $\textit{weights}(E)$ that assigns weights to edges in $E$
        \State $R \leftarrow \{ n~\text{for}~n \in N \}$ \Comment{Remaining nodes}
        \State $DC \leftarrow \emptyset$ \Comment{D-Cliques}
        \State $\textit{C} \leftarrow \emptyset$ \Comment{Current Clique}
        \While{$R \neq \emptyset$}
            \State $n \leftarrow \text{pick}~1~\text{from}~\{ m \in R | \textit{classes}(\{m\}) \not\subseteq \textit{classes}(\textit{C}) \}$
            \State $R \leftarrow R \setminus \{ n \}$
            \State $C \leftarrow C \cup \{ n \}$
            \If{$\textit{classes}(C) = L$}
                \State $DC \leftarrow DC \cup \{ C \}$
                \State $C \leftarrow \emptyset$
            \EndIf
        \EndWhile
        \State \Return $(\textit{weights}(\textit{intraconnect}(DC) \cup \textit{interconnect}(DC)), DC)$
   \end{algorithmic}
\end{algorithm}

The implementation builds a single clique by adding nodes with different classes until all classes of the global distribution are represented. Cliques are built sequentially until all nodes are part of a clique. Because all classes are represented on an equal number of nodes, all cliques will have nodes of all classes. Furthermore, since nodes have examples of a single class, we are guaranteed that a valid assignment is possible in a greedy manner.
After cliques are created, edges are added and weights are assigned to edges, using the corresponding input functions.

\subsection{Small-world Inter-clique Topology}

Algorithm~\ref{Algorithm:Smallworld} instantiates the function \textit{interconnect} with a small-world inter-clique topology as described in Section~\ref{section:interclique-topologies}. It first arranges cliques on a ring, which contributes a linear number of inter-clique edges. It then adds, for each clique, a logarithmic number of ``finger'' edges: the other cliques are grouped into sets whose size grows exponentially with their distance on the ring, and a constant number of edges is added towards each set. ``Finger'' edges are added symmetrically on both sides of the ring, towards the cliques of each set that are closest to the originating clique.

\begin{algorithm}[h]
  \caption{$\textit{smallworld}(DC)$: adds $O(\# N \log(\# N))$ edges}
  \label{Algorithm:Smallworld}
  \begin{algorithmic}[1]
    \State \textbf{Require:} set of cliques $DC$ (set of sets of nodes)
    \State ~~size of neighborhood $ns$ (default 2)
    \State ~~function $\textit{least\_edges}(S, E)$ that returns one of the nodes in $S$ with the least number of edges in $E$
    \State $E \leftarrow \emptyset$ \Comment{Set of Edges}
    \State $L \leftarrow [ C~\text{for}~C \in DC ]$ \Comment{Arrange cliques in a list}
    \For{$i \in \{1,\dots,\#DC\}$} \Comment{For every clique}
      \State \Comment{For sets of cliques exponentially further away from $i$}
      \For{$\textit{offset} \in \{ 2^x~\text{for}~x~\in \{ 0, \dots, \lceil \log_2(\#DC) \rceil \} \}$}
        \State \Comment{Pick the $ns$ closest}
        \For{$k \in \{0,\dots,ns-1\}$}
          \State \Comment{Add inter-clique connections in both directions}
          \State $n \leftarrow \textit{least\_edges}(L_i, E)$
          \State $m \leftarrow \textit{least\_edges}(L_{(i+\textit{offset}+k) \% \#DC}, E)$ \Comment{clockwise in ring}
          \State $E \leftarrow E \cup \{ \{n,m\} \}$
          \State $n \leftarrow \textit{least\_edges}(L_i, E)$
          \State $m \leftarrow \textit{least\_edges}(L_{(i-\textit{offset}-k)\% \#DC} , E)$ \Comment{counter-clockwise in ring}
          \State $E \leftarrow E \cup \{ \{n,m\} \}$
        \EndFor
      \EndFor
    \EndFor
    \State \Return $E$
  \end{algorithmic}
\end{algorithm}

Algorithm~\ref{Algorithm:Smallworld} expects a set of cliques $DC$, previously computed by Algorithm~\ref{Algorithm:D-Clique-Construction}; a size of neighborhood $ns$, which is the number of finger edges to add per set of cliques; and a function \textit{least\_edges}, which given a set of nodes $S$ and an existing set of edges $E = \{\{i,j\}, \dots \}$, returns one of the nodes in $S$ with the least number of edges in $E$. It returns a new set of edges $\{\{i,j\}, \dots \}$ with all edges added by the small-world topology.

The implementation first arranges the cliques of $DC$ in a list, which represents the ring. Traversing the list with increasing indices is equivalent to traversing the ring in the clockwise direction, and conversely. Then, for every clique $i$ on the ring, the other cliques are implicitly partitioned into mutually exclusive sets whose size, and whose offset from clique $i$, double at every step. For each of these sets, $ns$ edges are added, in both the clockwise and counter-clockwise directions, always connecting to a node with the least number of edges in each clique. The ring edges themselves are implicitly added by the sets at offset $1$ in both directions.
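As an illustration, the following Python sketch mirrors Algorithm~\ref{Algorithm:Smallworld}. It is not the exact code used in our experiments; in particular, the simple degree count inside \texttt{least\_edges} is a hypothetical stand-in for the corresponding input function.

\begin{verbatim}
import math

def least_edges(clique, edges):
    """Return a node of the clique with the fewest incident edges."""
    return min(sorted(clique),
               key=lambda n: sum(1 for e in edges if n in e))

def smallworld(cliques, ns=2):
    """Small-world inter-clique edges (sketch of Algorithm 2).

    cliques: list of sets of nodes, implicitly arranged on a ring.
    ns:      number of "finger" edges per set of cliques.
    """
    edges = set()
    exponents = range(math.ceil(math.log2(len(cliques))) + 1)
    for i in range(len(cliques)):
        # Sets of cliques exponentially further away on the ring.
        for offset in (2 ** x for x in exponents):
            for k in range(ns):
                # One edge clockwise, one counter-clockwise.
                for j in ((i + offset + k) % len(cliques),
                          (i - offset - k) % len(cliques)):
                    n = least_edges(cliques[i], edges)
                    m = least_edges(cliques[j], edges)
                    edges.add(frozenset((n, m)))
    return edges
\end{verbatim}

With the default $ns = 2$ and $100$ cliques, each clique initiates at most $2 \times 2 \times (\lceil \log_2 100 \rceil + 1) = 32$ finger edges, some of which coincide with the ring edges at offset $1$.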
% \section{Additional Experiments} % \subsection{Effect of Clique Averaging and Uniform Initialization} % Section~\ref{section:clique-averaging} explained how Clique Averaging reduces bias and showed that Clique Averaging was significantly beneficial on MNIST with fully-connected D-Cliques. In this section, we provide additional results for the ring topology, as well as for CIFAR10. In addition, during our early exploration, we noticed that ensuring \textit{uniform initialization}, i.e. ensuring that all nodes start with the same model, increased convergence speed when connecting two cliques with 1-2 interclique edges. We therefore also verify whether this effect is still significant with 10 cliques (100 nodes), on a ring and with full connections between cliques, as well as on MNIST and CIFAR10. We also verified what interaction this had with Clique Averaging. % Figure~\ref{fig:d-cliques-mnist-init-clique-avg-effect} shows all the results for MNIST. Comparing Figure~\ref{fig:d-cliques-mnist-init-clique-avg-effect-ring-test-accuracy} to~\ref{fig:d-cliques-mnist-no-init-clique-avg-effect-ring-test-accuracy}, and Figure~\ref{fig:d-cliques-mnist-init-clique-avg-effect-fcc-test-accuracy} to~\ref{fig:d-cliques-mnist-no-init-clique-avg-effect-fcc-test-accuracy} together, we see that Uniform Initialization has imperceptible effects. However, for all four sub-figures, using Clique Averaging has a slightly better average convergence speed, and significantly lower variance between nodes, than not using it. Moreover, the improvement is larger with Fully-Connected D-Cliques. % \begin{figure}[htbp] % \centering % % To regenerate the figure, from directory results/mnist % % python ../../../learn-topology/tools/plot_convergence.py clique-ring/all/2021-03-10-18:14:35-CET no-clique-avg/clique-ring/all/2021-03-12-10:40:37-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg.' 'without clique avg.' --legend 'lower right' --ymin 85 --ymax 92.5 --save-figure ../../figures/d-cliques-mnist-init-clique-avg-effect-ring-test-accuracy.png % \begin{subfigure}[b]{0.48\textwidth} % \centering % \includegraphics[width=\textwidth]{figures/d-cliques-mnist-init-clique-avg-effect-ring-test-accuracy} % \caption{\label{fig:d-cliques-mnist-init-clique-avg-effect-ring-test-accuracy} D-Cliques (Ring), with Uniform Initialization} % \end{subfigure} % \quad % % To regenerate the figure, from directory results/mnist % % python ../../../learn-topology/tools/plot_convergence.py no-init/clique-ring/all/2021-03-12-10:40:11-CET no-init-no-clique-avg/clique-ring/all/2021-03-12-10:41:03-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg.' 'without clique avg.' --legend 'lower right' --ymin 85 --ymax 92.5 --save-figure ../../figures/d-cliques-mnist-no-init-clique-avg-effect-ring-test-accuracy.png % \begin{subfigure}[b]{0.48\textwidth} % \centering % \includegraphics[width=\textwidth]{figures/d-cliques-mnist-no-init-clique-avg-effect-ring-test-accuracy} % \caption{\label{fig:d-cliques-mnist-no-init-clique-avg-effect-ring-test-accuracy} D-Cliques (Ring), without Uniform Initialization} % \end{subfigure} % % To regenerate the figure, from directory results/mnist % %python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-10:19:44-CET no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:26-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg.' 'without clique avg.' 
--legend 'lower right' --ymin 85 --ymax 92.5 --save-figure ../../figures/d-cliques-mnist-init-clique-avg-effect-fcc-test-accuracy.png % \begin{subfigure}[b]{0.48\textwidth} % \centering % \includegraphics[width=\textwidth]{figures/d-cliques-mnist-init-clique-avg-effect-fcc-test-accuracy} % \caption{\label{fig:d-cliques-mnist-init-clique-avg-effect-fcc-test-accuracy} D-Cliques (Fully-Connected), with Uniform Initialization} % \end{subfigure} % \quad % % To regenerate the figure, from directory results/mnist % %python ../../../learn-topology/tools/plot_convergence.py no-init/fully-connected-cliques/all/2021-03-12-11:12:01-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-12-11:12:49-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg.' 'without clique avg.' --legend 'lower right' --ymin 85 --ymax 92.5 --save-figure ../../figures/d-cliques-mnist-no-init-clique-avg-effect-fcc-test-accuracy.png % \begin{subfigure}[b]{0.48\textwidth} % \centering % \includegraphics[width=\textwidth]{figures/d-cliques-mnist-no-init-clique-avg-effect-fcc-test-accuracy} % \caption{\label{fig:d-cliques-mnist-no-init-clique-avg-effect-fcc-test-accuracy} D-Cliques (Fully-Connected), without Uniform Initialization} % \end{subfigure} % \caption{\label{fig:d-cliques-mnist-init-clique-avg-effect} MNIST: Effects of Clique Averaging and Uniform Initialization on Convergence Speed. (100 nodes, non-IID, D-Cliques, bsz=128)} % \end{figure} % Figure~\ref{fig:d-cliques-cifar10-init-clique-avg-effect} shows all the results for CIFAR10. One the one hand, with D-Cliques arranged in a ring, uniform initialization has a small but positive effect on convergence speed, whether Clique Averaging is used or not. With fully-connected D-Cliques, the effect is significantly smaller and almost negligible, both with and without Clique Averaging. On the other hand, Clique Averaging is always beneficial, by a significantly larger margin for both interclique topologies and with and without uniform initialization. Moreover, the effect is stronger than for MNIST. % \begin{figure}[htbp] % \centering % % To regenerate the figure, from directory results/cifar10 % % python ../../../learn-topology/tools/plot_convergence.py clique-ring/all/2021-03-10-11:58:43-CET no-init/clique-ring/all/2021-03-13-18:28:30-CET no-clique-avg/clique-ring/all/2021-03-13-18:27:09-CET no-init-no-clique-avg/clique-ring/all/2021-03-13-18:29:58-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg., with uniform init.' 'with clique avg., without uniform init.' 'without clique avg., with uniform init.' 'without clique avg., without uniform init.' --legend 'upper left' --ymax 115 --save-figure ../../figures/d-cliques-cifar10-init-clique-avg-effect-ring-test-accuracy.png --font-size 15 % \begin{subfigure}[b]{0.48\textwidth} % \centering % \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-init-clique-avg-effect-ring-test-accuracy} % \caption{\label{fig:d-cliques-cifar10-init-clique-avg-effect-ring-test-accuracy} D-Cliques (Ring)} % \end{subfigure} % % To regenerate the figure, from directory results/cifar10 % %python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-13:58:57-CET no-init/fully-connected-cliques/all/2021-03-13-18:32:55-CET no-clique-avg/fully-connected-cliques/all/2021-03-13-18:31:36-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET --add-min-max --yaxis test-accuracy --labels 'with clique avg., with uniform init.' 
'with clique avg., without uniform init.' 'without clique avg., with uniform init.' 'without clique avg., without uniform init.' --legend 'upper left' --ymax 115 --save-figure ../../figures/d-cliques-cifar10-init-clique-avg-effect-fcc-test-accuracy.png --font-size 15 % \begin{subfigure}[b]{0.48\textwidth} % \centering % \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-init-clique-avg-effect-fcc-test-accuracy} % \caption{\label{fig:d-cliques-cifar10-init-clique-avg-effect-fcc-test-accuracy} D-Cliques (Fully-Connected)} % \end{subfigure} % \caption{\label{fig:d-cliques-cifar10-init-clique-avg-effect} CIFAR10: Effects of Clique Averaging and Uniform Initialization on Convergence Speed. (100 nodes, non-IID, D-Cliques, bsz=20, momentum=0.9)} % \end{figure} % We conclude that Uniform Initialization is not so important for convergence speed but that Clique Averaging is always significantly so. % \subsection{Comparison to Non-Clustered Topologies} % % \begin{figure} %\centering % \begin{subfigure}[htb]{0.48\textwidth} %% To regenerate the figure, from directory results/mnist/gn-lenet %% python ../../../../learn-topology/tools/plot_convergence.py no-init/all/2021-03-22-21:39:54-CET no-init-no-clique-avg/all/2021-03-22-21:40:16-CET random-10/all/2021-03-22-21:41:06-CET random-10-diverse/all/2021-03-22-21:41:46-CET random-10-diverse-unbiased-grad/all/2021-03-22-21:42:04-CET --legend 'lower right' --add-min-max --labels 'd-clique (fcc) clique avg.' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes repr.)' '10 random edges (all classes repr.) with unbiased grad.' --ymin 80 --yaxis test-accuracy --save-figure ../../../figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies.png % \includegraphics[width=\textwidth]{figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies} % \caption{\label{fig:d-cliques-mnist-lenet-comparison-to-non-clustered-topologies} LeNet Model} % \end{subfigure} % \hfill % \begin{subfigure}[htb]{0.48\textwidth} %% To regenerate the figure, from directory results/mnist/gn-lenet %% python ../../../../learn-topology/tools/plot_convergence.py no-init/all/2021-03-22-21:39:54-CET no-init-no-clique-avg/all/2021-03-22-21:40:16-CET random-10/all/2021-03-22-21:41:06-CET random-10-diverse/all/2021-03-22-21:41:46-CET random-10-diverse-unbiased-grad/all/2021-03-22-21:42:04-CET --legend 'upper right' --add-min-max --labels 'd-clique (fcc) clique avg.' 'd-clique (fcc) no clique avg.' '10 random edges' '10 random edges (all classes repr.)' '10 random edges (all classes repr.) with unbiased grad.' 
--ymax 0.7 --yaxis scattering --save-figure ../../../figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies-scattering.png % \includegraphics[width=\textwidth]{figures/d-cliques-mnist-lenet-comparison-to-non-clustered-topologies-scattering} % \caption{\label{fig:d-cliques-mnist-lenet-comparison-to-non-clustered-topologies-scattering} LeNet Model (Scattering)} % \end{subfigure} % % \caption{\label{fig:d-cliques-mnist-comparison-to-non-clustered-topologies} MNIST: Comparison to non-Clustered Topologies} %\end{figure} % % \begin{figure} % \centering % % To regenerate the figure, from directory results/cifar10 %% python ../../../learn-topology/tools/plot_convergence.py fully-connected-cliques/all/2021-03-10-13:58:57-CET no-init-no-clique-avg/fully-connected-cliques/all/2021-03-13-18:34:35-CET random-10/all/2021-03-17-20:30:03-CET random-10-diverse/all/2021-03-17-20:30:41-CET random-10-diverse-unbiased-gradient/all/2021-03-17-20:31:14-CET random-10-diverse-unbiased-gradient-uniform-init/all/2021-03-17-20:31:41-CET --labels 'd-clique (fcc) clique avg., uniform init.' 'd-clique (fcc) no clique avg. no uniform init.' '10 random edges' '10 random edges (all classes repr.)' '10 random (all classes repr.) with unbiased grad.' '10 random (all classes repr.) with unbiased grad., uniform init.' --add-min-max --legend 'upper right' --yaxis scattering --save-figure ../../figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies-scattering.png --ymax 0.7 % \begin{subfigure}[b]{0.48\textwidth} % \centering % \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-linear-comparison-to-non-clustered-topologies-scattering} % \caption{\label{fig:d-cliques-cifar10-linear-comparison-to-non-clustered-topologies-scattering} LeNet Model: Scattering} % \end{subfigure} % %\caption{\label{fig:d-cliques-cifar10-linear-comparison-to-non-clustered-topologies} CIFAR10: Comparison to non-Clustered Topologies} %\end{figure} % % %\begin{itemize} % \item Clustering does not seem to make a difference in MNIST, even when using a higher-capacity model (LeNet) instead of a linear model. (Fig.\ref{fig:d-cliques-mnist-comparison-to-non-clustered-topologies}) % \item Except for the random 10 topology, convergence speed seems to be correlated with scattering in CIFAR-10 with LeNet model (Fig.\ref{fig:d-cliques-cifar10-linear-comparison-to-non-clustered-topologies}). There is also more difference between topologies both in convergence speed and scattering than for MNIST (Fig.~\ref{fig:d-cliques-mnist-comparison-to-non-clustered-topologies}). Scattering computed similar to Consensus Control for Decentralized Deep Learning~\cite{consensus_distance}. %\end{itemize} % % % \clearpage \section{Additional Experiments on Scaling Behavior with Increasing Number of Nodes} \label{app:scaling} Section~\ref{section:interclique-topologies} compares the convergence speed of various inter-clique topologies at a scale of 1000 nodes. In this section, we show the effect of scaling the number of nodes, by comparing the convergence speed with 1, 10, 100, and 1000 nodes, and adjusting the batch size to maintain a constant number of updates per epoch. We present results for Ring, Fractal, Small-world, and Fully-Connected inter-clique topologies. Figure~\ref{fig:d-cliques-mnist-scaling-fully-connected} shows the results for MNIST. For all topologies, we notice a perfect scaling up to 100 nodes, i.e. the accuracy curves overlap, with low variance between nodes. 
Starting at 1000 nodes, there is a significant increase in variance between nodes and the convergence is slower, only marginally for Fully-Connected but significantly so for Fractal and Ring. Small-world has higher variance between nodes but maintains a convergence speed close to that of Fully-Connected. Figure~\ref{fig:d-cliques-cifar10-scaling-fully-connected} shows the results for CIFAR10. When increasing from 1 to 10 nodes (resulting in a single fully-connected clique), there is actually a small increase both in final accuracy and convergence speed. We believe this increase is due to the gradient being computed with exactly the same number of examples from all classes with 10 fully-connected non-IID nodes, while the gradient for a single non-IID node may have a slightly larger bias because the random sampling does not guarantee the representation of all classes perfectly in each batch. At a scale of 100 nodes, there is no difference between Fully-Connected and Fractal, as the connections are the same; however, a Ring already shows a significantly slower convergence. At 1000 nodes, the convergence significantly slows down for Fractal and Ring, while remaining close, albeit with a larger variance, for Fully-Connected. Similar to MNIST, Small-world has higher variance and slightly lower convergence speed than Fully-Connected but remains very close. We therefore conclude that Fully-Connected and Small-world have good scaling properties in terms of convergence speed, and that the linear-logarithmic number of edges of Small-world makes it the best compromise between convergence speed and connectivity, and thus the best choice for efficient large-scale decentralized learning in practice. \begin{figure}[htbp] \centering % To regenerate the figure, from directory results/scaling % python ../../../learn-topology/tools/plot_convergence.py ../mnist/1-node-iid/all/2021-03-10-09:20:03-CET 10/mnist/fully-connected-cliques/all/2021-03-12-09:13:27-CET ../mnist/fully-connected-cliques/all/2021-03-10-10:19:44-CET 1000/mnist/fully-connected-cliques/all/2021-03-14-17:56:26-CET --labels '1 node IID bsz=12800' '10 nodes bsz=1280' '100 nodes bsz=128' '1000 nodes bsz=13' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-fully-connected-cst-updates.png --ymin 80 --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-fully-connected-cst-updates} \caption{Fully-Connected} \end{subfigure} \quad % To regenerate the figure, from directory results/scaling % python ../../../learn-topology/tools/plot_convergence.py ../mnist/1-node-iid/all/2021-03-10-09:20:03-CET 10/mnist/fully-connected-cliques/all/2021-03-12-09:13:27-CET ../mnist/smallworld-logn-cliques/all/2021-03-23-21:44:56-CET 1000/mnist/smallworld-logn-cliques/all/2021-03-23-21:45:39-CET --labels '1 node IID bsz=12800' '10 nodes bsz=1280' '100 nodes bsz=128' '1000 nodes bsz=13' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-smallworld-cst-updates.png --ymin 80 --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-smallworld-cst-updates} \caption{Small-world} \end{subfigure} \quad % To regenerate the figure, from directory results/scaling % python ../../../learn-topology/tools/plot_convergence.py ../mnist/1-node-iid/all/2021-03-10-09:20:03-CET 10/mnist/clique-ring/all/2021-03-13-18:22:01-CET 
../mnist/fully-connected-cliques/all/2021-03-10-10:19:44-CET 1000/mnist/fractal-cliques/all/2021-03-14-17:41:59-CET --labels '1 node IID bsz=12800' '10 nodes bsz=1280' '100 nodes bsz=128' '1000 nodes bsz=13' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-fractal-cliques-cst-updates.png --ymin 80 --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-fractal-cliques-cst-updates} \caption{Fractal} \end{subfigure} \quad % To regenerate the figure, from directory results/scaling % python ../../../learn-topology/tools/plot_convergence.py ../mnist/1-node-iid/all/2021-03-10-09:20:03-CET 10/mnist/clique-ring/all/2021-03-13-18:22:01-CET ../mnist/clique-ring/all/2021-03-10-18:14:35-CET 1000/mnist/clique-ring/all/2021-03-13-18:22:36-CET --labels '1 node IID bsz=12800' '10 nodes bsz=1280' '100 nodes bsz=128' '1000 nodes bsz=13' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-clique-ring-cst-updates.png --ymin 80 --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-mnist-scaling-clique-ring-cst-updates} \caption{Ring} \end{subfigure} \caption{\label{fig:d-cliques-mnist-scaling-fully-connected} MNIST: D-Cliques scaling behavior (constant updates per epoch) for different inter-clique topologies.} \end{figure} \begin{figure}[htbp] \centering % To regenerate the figure, from directory results/scaling % python ../../../learn-topology/tools/plot_convergence.py ../cifar10/1-node-iid/all/2021-03-10-13:52:58-CET 10/cifar10/fully-connected-cliques/all/2021-03-13-19:06:02-CET ../cifar10/fully-connected-cliques/all/2021-03-10-13:58:57-CET 1000/cifar10/fully-connected-cliques/all/2021-03-14-17:41:20-CET --labels '1 node IID bsz=2000' '10 nodes non-IID bsz=200' '100 nodes non-IID bsz=20' '1000 nodes non-IID bsz=2' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-scaling-fully-connected-cst-updates.png --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-scaling-fully-connected-cst-updates} \caption{Fully-Connected} \end{subfigure} \quad % python ../../../learn-topology/tools/plot_convergence.py ../cifar10/1-node-iid/all/2021-03-10-13:52:58-CET 10/cifar10/fully-connected-cliques/all/2021-03-13-19:06:02-CET ../cifar10/smallworld-logn-cliques/all/2021-03-23-22:13:23-CET 1000/cifar10/smallworld-logn-cliques/all/2021-03-23-22:13:57-CET --labels '1 node IID bsz=2000' '10 nodes non-IID bsz=200' '100 nodes non-IID bsz=20' '1000 nodes non-IID bsz=2' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-scaling-smallworld-cst-updates.png --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-scaling-smallworld-cst-updates} \caption{Small-world} \end{subfigure} % To regenerate the figure, from directory results/scaling % python ../../../learn-topology/tools/plot_convergence.py ../cifar10/1-node-iid/all/2021-03-10-13:52:58-CET 10/cifar10/fully-connected-cliques/all/2021-03-13-19:06:02-CET ../cifar10/fully-connected-cliques/all/2021-03-10-13:58:57-CET 1000/cifar10/fractal-cliques/all/2021-03-14-17:42:46-CET --labels '1 node IID bsz=2000' '10 nodes non-IID bsz=200' '100 nodes non-IID bsz=20' '1000 nodes non-IID bsz=2' --legend 'lower right' --yaxis test-accuracy --save-figure 
../../figures/d-cliques-cifar10-scaling-fractal-cliques-cst-updates.png --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-scaling-fractal-cliques-cst-updates} \caption{Fractal} \end{subfigure} \quad % To regenerate the figure, from directory results/scaling % python ../../../learn-topology/tools/plot_convergence.py ../cifar10/1-node-iid/all/2021-03-10-13:52:58-CET 10/cifar10/fully-connected-cliques/all/2021-03-13-19:06:02-CET ../cifar10/clique-ring/all/2021-03-10-11:58:43-CET 1000/cifar10/clique-ring/all/2021-03-14-09:55:24-CET --labels '1 node IID bsz=2000' '10 nodes non-IID bsz=200' '100 nodes non-IID bsz=20' '1000 nodes non-IID bsz=2' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-cifar10-scaling-clique-ring-cst-updates.png --add-min-max \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/d-cliques-cifar10-scaling-clique-ring-cst-updates} \caption{Ring} \end{subfigure} \caption{\label{fig:d-cliques-cifar10-scaling-fully-connected} CIFAR10: D-Cliques scaling behavior (constant updates per epoch) for different inter-clique topologies.} \end{figure} %% % To regenerate the figure, from directory results/scaling %%% python ../../../learn-topology/tools/plot_convergence.py 10/mnist/fully-connected-cliques/all/2021-03-10-14:40:35-CET ../mnist/fully-connected-cliques/all/2021-03-10-10:19:44-CET 1000/mnist/fully-connected-cliques/all/2021-03-10-16:44:35-CET --labels '10 nodes bsz=128' '100 nodes bsz=128' '1000 nodes bsz=128 (45)' --legend 'lower right' --yaxis test-accuracy --save-figure ../../figures/d-cliques-mnist-scaling-fully-connected-cst-bsz.png --ymin 80 --add-min-max % \begin{figure}[htbp] % \centering % \includegraphics[width=0.48\textwidth]{figures/d-cliques-mnist-scaling-fully-connected-cst-bsz} % \caption{FCC: Constant Batch-Size} % \end{figure} \end{document}