\renewcommand{\FIGREP}{src/mpi/figures}
\section{Use of MPI}
\label{sec:mpi}
\begin{frame}
\frametitle{MPI}
\framesubtitle{Distributed and shared memory}
\begin{itemize}
\item Shared Memory:
\begin{itemize}
\item In a shared memory model, all data (memory) are visible to all threads (workers) in your application
\item This means threads can access the data they need easily, but it also means one has to be careful that only one thread writes to a particular address at a time
\end{itemize}
\item Distributed Memory:
\begin{itemize}
\item Here the workers see only a small part of the overall memory; if they need data visible to another worker, they have to ask for them explicitly (see the sketch on the next slide)
\item The advantage is that, if the algorithm scales, more nodes can be used to increase performance and/or problem size
\item All of the largest HPC systems use such a distributed memory model
\end{itemize}
\end{itemize}
\end{frame}
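\begin{frame}[fragile]
\frametitle{MPI}
\framesubtitle{Distributed memory in practice}
A minimal sketch (illustrative, not part of the exercises) of the distributed memory model: each rank owns its own memory, and data move only through explicit messages.
\begin{verbatim}
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0;
    if (rank == 0) {
        value = 42; /* lives only in rank 0's memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0,
                 MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* rank 1 cannot see rank 0's memory:
           it must receive the value explicitly */
        MPI_Recv(&value, 1, MPI_INT, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
\end{verbatim}
Run with at least two processes, e.g.\ \cmd{srun -n 2 ./a.out}.
\end{frame}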
\begin{frame}
\frametitle{MPI}
\framesubtitle{MPI implementations}
MPI is a standard with many implementations:
\begin{itemize}
\item MPICH2
\item MVAPICH2
\item Intel MPI
\item OpenMPI
\item Platform MPI
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{MPI}
\framesubtitle{MPI compiler wrappers}
\cmd{mpicc}, \cmd{mpif90}, etc.\ are wrappers around the standard (GCC and Intel) compilers
\begin{itemize}
\item \cmd{mpicc} - generic MPI C compiler
\item \cmd{mpiicc} - Intel MPI C compiler
\item \cmd{mpicxx} - generic MPI C++ compiler
\item \cmd{mpiifort} - Intel MPI Fortran compiler
\end{itemize}
\end{frame}
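\begin{frame}[fragile]
\frametitle{MPI}
\framesubtitle{Compiler wrappers in practice}
A sketch of typical wrapper invocations (file names are hypothetical):
\begin{verbatim}
# compile an MPI C program; the wrapper adds the
# MPI include and library flags automatically
mpicc -O2 hello.c -o hello

# Fortran equivalent
mpif90 -O2 hello.f90 -o hello

# print the underlying compiler command:
#   mpicc -show      (MPICH, Intel MPI)
#   mpicc --showme   (Open MPI)
\end{verbatim}
\end{frame}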
\begin{frame}
\frametitle{MPI}
\framesubtitle{Exercise: compilationMPI}
\begin{itemize}
\item Go to the directory \cmd{compilationMPI}
\item Compile and execute the code using \cmd{srun} and multiple MPI processes (an example invocation is sketched on the next slide)
\end{itemize}
\end{frame}
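\begin{frame}[fragile]
\frametitle{MPI}
\framesubtitle{Example: compile and run}
One possible sequence, assuming a Slurm cluster (the source file name is hypothetical):
\begin{verbatim}
cd compilationMPI

# compile (file name is illustrative)
mpicc hello.c -o hello

# launch 4 MPI processes under Slurm
srun -n 4 ./hello
\end{verbatim}
\end{frame}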
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../../SCM_slides"
%%% End: