diff --git a/exercises/exercise-3.md b/exercises/exercise-3.md
deleted file mode 100644
index 004e682927551c70f125c3e0c7e4db73a5acc84c..0000000000000000000000000000000000000000
--- a/exercises/exercise-3.md
+++ /dev/null
@@ -1,129 +0,0 @@
-# Exercise 3
-
-# Problem 1: Parallel Encoding
-
-In this exercise, your group will devise a parallel algorithm to encode sequences using the run-length encoding scheme. The encoding is very simple: each maximal run of the same letter is replaced by the letter followed by the length of the run. For instance:
-
-```
-"AAAAATTTGGGGTCCCAAC"  ⇒  "A5T3G4T1C3A2C1"
-```
-
-Your goal in this exercise is to come up with a parallel implementation of this algorithm. The function should have the following shape:
-
-```scala
-def rle(data: ParSeq[Char]): Buffer[(Char, Int)] =
-  data.aggregate(???)(???, ???)
-```
-
-The `Buffer` class is already given to you. A buffer of type `Buffer[A]` represents a sequence of elements of type `A`. It supports the following methods, all of which are efficient:
-
-```scala
-def isEmpty: Boolean  // Checks if the buffer is empty.
-def head: A           // Returns the first element of the buffer.
-def tail: Buffer[A]   // Returns the buffer minus its first element.
-def last: A           // Returns the last element of the buffer.
-def init: Buffer[A]   // Returns the buffer minus its last element.
-def ++(that: Buffer[A]): Buffer[A] // Concatenates two buffers.
-def append(elem: A): Buffer[A] // Appends a single element to the right.
-
-Buffer.empty[A]: Buffer[A] // Returns an empty buffer.
-Buffer.singleton[A](element: A): Buffer[A] // Single element buffer.
-```
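-
-For reference, here is one possible way to fill in the three holes of `aggregate`; this is a sketch, not the only solution. The sequential operator extends a partial encoding by one character, and the combine operator concatenates two partial encodings, fusing the runs that meet at the boundary when they share the same letter:
-
-```scala
-def rle(data: ParSeq[Char]): Buffer[(Char, Int)] =
-  data.aggregate(Buffer.empty[(Char, Int)])(
-    // Sequential operator: fold one character into a partial encoding.
-    (acc, c) =>
-      if (!acc.isEmpty && acc.last._1 == c)
-        acc.init.append((c, acc.last._2 + 1))
-      else
-        acc.append((c, 1)),
-    // Combine operator: merge the runs at the boundary if they match.
-    (left, right) =>
-      if (left.isEmpty || right.isEmpty) left ++ right
-      else {
-        val (lc, ln) = left.last
-        val (rc, rn) = right.head
-        if (lc == rc) left.init.append((lc, ln + rn)) ++ right.tail
-        else left ++ right
-      }
-  )
-```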
-
-# Problem 2: Parallel Two Phase Construction
-
-In this exercise, you will implement an array `Combiner` backed internally by a doubly linked list (DLL). Below is a minimal skeleton of the `DLLCombiner` class and the related `Node` class. Your goal for this exercise is to complete the implementation of the (simplified) Combiner interface of the `DLLCombiner` class.
-
-```scala
-class DLLCombiner[A] extends Combiner[A, Array[A]] {
-  var head: Node[A] = null // null for empty lists.
-  var last: Node[A] = null // null for empty lists.
-  var size: Int = 0
-
-  // Implement these three methods...
-  override def +=(elem: A): Unit = ???
-  override def combine(that: DLLCombiner[A]): DLLCombiner[A] = ???
-  override def result(): Array[A] = ???
-}
-
-class Node[A](val value: A) {
-  var next: Node[A] = null     // null for last node.
-  var previous: Node[A] = null // null for first node.
-}
-```
-
-**Question 1:** What is the computational complexity of each of your methods? Are these complexities acceptable according to the `Combiner` requirements?
-
-**Question 2:** One of the three methods you have implemented, `result`, should work in parallel according to the `Combiner` contract. Can you think of a way to implement this method efficiently using 2 parallel tasks?
-
-**Question 3:** Can you, given the current internal representation of your combiner, implement `result` so that it executes efficiently using 4 parallel tasks? If not, can you think of a way to make it possible?
-
-*Hint:* This is an open-ended question; there might be multiple solutions. In your solution, you may want to add extra information to the class `Node` and/or the class `DLLCombiner`. A sketch of the two-task version of `result` from Question 2 is given below.
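-
-One possible shape for the three methods, assuming the course's `task`/`join` fork-join primitives and glossing over the `ClassTag` that allocating an `Array[A]` requires in real Scala:
-
-```scala
-// Inside DLLCombiner[A]:
-
-override def +=(elem: A): Unit = {
-  val node = new Node(elem)
-  if (size == 0) head = node
-  else {
-    last.next = node
-    node.previous = last
-  }
-  last = node
-  size += 1
-}
-
-// O(1): combine by rewiring two pointers between the lists.
-override def combine(that: DLLCombiner[A]): DLLCombiner[A] =
-  if (that.size == 0) this
-  else if (size == 0) that
-  else {
-    last.next = that.head
-    that.head.previous = last
-    last = that.last
-    size += that.size
-    this
-  }
-
-// Two tasks: one fills the front half walking forward from `head`,
-// the other fills the back half walking backward from `last`.
-override def result(): Array[A] = {
-  val array = new Array[A](size) // glossing over the ClassTag needed here
-  val mid = size / 2
-  val front = task {
-    var node = head
-    var i = 0
-    while (i < mid) { array(i) = node.value; node = node.next; i += 1 }
-  }
-  var node = last
-  var i = size - 1
-  while (i >= mid) { array(i) = node.value; node = node.previous; i -= 1 }
-  front.join()
-  array
-}
-```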
-
-# Problem 3: Pipelines
-
-In this exercise, we look at pipelines of functions. A pipeline is simply a function that feeds its argument through each function of a sequence, one after the other. To illustrate this, consider the following pipeline of 4 functions:
-
-```scala
-val p: Int => Int = toPipeline(ParSeq(_ + 1, _ * 2, _ + 3, _ / 4))
-```
-
-The pipeline `p` is itself a function. Given a value `x`, it performs the following computation:
-
-```scala
-p(x) = (((x + 1)    Application of first function
-            * 2)    Application of second function
-            + 3)    Application of third function
-            / 4   Application of fourth function
-```
-
-In this exercise, we will investigate the possibility of constructing and applying such pipelines in parallel.
-
-**Question 1:** Implement the following `toPipeline` function, which turns a parallel sequence of functions into a pipeline. You may use any of the parallel combinators available on `ParSeq`, such as the parallel `fold` or the parallel `reduce` methods.
-
-```scala
-def toPipeline[A](fs: ParSeq[A => A]): A => A = ???
-```
-
-*Hint:* Functions have a method called `andThen`, which implements function composition: it takes another function as argument and returns a function that first applies the original function, and then applies the argument function to the result. You may find it useful in your implementation of `toPipeline`. One possible sketch is given below.
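-
-For reference, one possible implementation, a sketch rather than the only answer: the identity function is a neutral element for `andThen`, and function composition is associative, so a parallel `fold` applies directly.
-
-```scala
-def toPipeline[A](fs: ParSeq[A => A]): A => A =
-  // Identity is neutral for andThen, and andThen is associative,
-  // so fold may combine the functions in any tree shape.
-  fs.fold((x: A) => x)(_ andThen _)
-```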
-
-**Question 2:** Given that your `toPipeline` function works in parallel, would the pipelines it returns also work in parallel? Would you expect pipelines returned by a sequential implementation of `toPipeline` to execute any slower? If so, why?
-
-Discuss these questions with your group and try to get a good understanding of what is happening.
-
-**Question 3:** Instead of arbitrary functions, we will now consider functions that are constant everywhere except on a finite domain. We represent such functions in the following way:
-
-```scala
-import scala.collection.immutable
-
-class FiniteFun[A](mappings: immutable.Map[A, A], default: A) {
-  def apply(x: A): A =
-    mappings.get(x) match {
-      case Some(y) => y
-      case None    => default
-    }
-
-  def andThen(that: FiniteFun[A]): FiniteFun[A] = ???
-}
-```
-
-Implement the `andThen` method. Can pipelines of such finite functions be efficiently constructed in parallel using the appropriately modified `toPipeline` method? Can the resulting pipelines be efficiently executed? One possible implementation is sketched below.
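-
-A possible sketch: composing `this` with `that` only needs to rewrite the finite part of `this` and push the default through `that`, so the composition remains a `FiniteFun` and costs time proportional to the size of `mappings`.
-
-```scala
-def andThen(that: FiniteFun[A]): FiniteFun[A] =
-  new FiniteFun(
-    // Push every finite mapping of `this` through `that`.
-    mappings.map { case (x, y) => x -> that(y) },
-    // Inputs outside the finite domain hit `default`, then `that`.
-    that(default)
-  )
-```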
-
-**Question 4:** Compare the *work* and *depth* of the following two functions, assuming infinite parallelism. For which kind of input would the parallel version be asymptotically faster? A rough work/depth account is sketched after the code.
-
-```scala
-def applyAllSeq[A](x: A, fs: Seq[FiniteFun[A]]): A = {
-  // Applying each function sequentially.
-  var y = x
-  for (f <- fs)
-    y = f(y)
-  y
-}
-
-def applyAllPar[A](x: A, fs: ParSeq[FiniteFun[A]]): A =
-  if (fs.isEmpty) x
-  else {
-    // Computing the composition in parallel.
-    val p = fs.reduce(_ andThen _)
-    // Applying the pipeline.
-    p(x)
-  }
-```
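-
-For calibration, here is a rough account under the assumption (ours, not the exercise's) that every finite domain contains at most `m` entries, so that applying a `FiniteFun` costs `O(1)` and composing two of them with `andThen` costs `O(m)`:
-
-```
-applyAllSeq:  work Θ(n)      depth Θ(n)
-applyAllPar:  work Θ(n · m)  depth Θ(m · log n)
-```
-
-With infinite parallelism, the parallel version is therefore asymptotically faster precisely when `m · log n` grows more slowly than `n`, i.e. for long pipelines of functions with small finite domains.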
diff --git a/exercises/solutions-1.md b/exercises/solutions-1.md
new file mode 100644
index 0000000000000000000000000000000000000000..69ee5c54823c42fdccbf74b0ccc7029273de3c0f
--- /dev/null
+++ b/exercises/solutions-1.md
@@ -0,0 +1,98 @@
+# Exercise 1: Introduction to Concurrency
+
+## Question 1
+
+Yes
+
+## Question 2
+
+#### Property 1 holds
+
+Assuming that the execution of two concurrent threads only interleaves instructions and that reads and writes are executed atomically, it can be shown that property 1 always holds. Unfortunately, such strong guarantees are not offered by the Java Memory Model. If you are interested, have a look at the note below on the Java Memory Model.
+
+#### Violation of property 2
+
+Consider two threads that concurrently execute `transfer(from, to, amount)` with the exact same parameters. Assume that the account `from` has sufficient funds for at least one transfer.
+
+Thread 1 executes until it has computed the value `balanceFrom - amount`, and then stops. Thread 2 then executes its call to `transfer(from, to, amount)` in its entirety. Finally, thread 1 resumes and completes its own call to `transfer`.
+
+At the end of this execution, the total amount of money held by the bank has changed: it has in fact increased by `amount`.
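+
+To make the interleaving concrete: the `transfer` code itself is not reproduced in this solution sheet, so the following is a hypothetical reconstruction of the usual unsynchronized version, annotated with the schedule described above.
+
+```scala
+// Hypothetical stand-in for the Account class of the exercise statement:
+class Account(var balance: Int)
+
+def transfer(from: Account, to: Account, amount: Int): Unit = {
+  val balanceFrom = from.balance
+  if (balanceFrom >= amount) {
+    // T1 computes `balanceFrom - amount`, then is preempted right before
+    // the write below; T2 now runs the whole method to completion.
+    from.balance = balanceFrom - amount // T1 resumes: stores the same value
+                                        // T2 stored, so `from` is debited
+                                        // only once overall...
+    to.balance = to.balance + amount    // ...while `to` is credited twice:
+                                        // the bank's total grows by `amount`.
+  }
+}
+```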
+
+#### Note on the Java Memory Model
+
+Under the Java Memory Model, both properties can potentially be violated. Indeed, the model only ensures that the execution of each thread appears sequential to the thread itself, not to other concurrently running threads. Seemingly atomic instructions can be arbitrarily decomposed by the underlying virtual machine, and sequences of instructions can be reordered at will by the VM, as long as the execution of a single thread appears as if it were executed sequentially.
+
+## Question 3
+
+#### Variant 1
+
+In this variant, property 2 can be violated. It is not vulnerable to deadlocks.
+
+#### Variant 2
+
+In this variant, neither of the two properties can be violated. However, it is susceptible to deadlocks.
+
+#### Variant 3
+
+In this last variant, neither of the two properties can be violated and no deadlock can occur. It is however still not entirely satisfactory, since no two threads can ever execute transfers in parallel, even when the accounts involved are completely disjoint. Can you think of a better solution? One classic direction is sketched below.
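+
+A classic answer, offered here as a suggestion rather than the official solution: give each account a unique `id` (a field we introduce for illustration) and always acquire the two monitors in `id` order. All threads then lock in a single global order, so no cycle of waits can form, while transfers touching disjoint accounts still proceed in parallel.
+
+```scala
+class Account(val id: Long, var balance: Int)
+
+def transfer(from: Account, to: Account, amount: Int): Unit = {
+  // Lock in a global order (by id) so that no cycle of waits can form.
+  val (first, second) = if (from.id < to.id) (from, to) else (to, from)
+  first.synchronized {
+    second.synchronized {
+      if (from.balance >= amount) {
+        from.balance -= amount
+        to.balance += amount
+      }
+    }
+  }
+}
+```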
+
+# Problem 2: Parallel minMax
+
+## Question 1
+
+```scala
+import scala.math.{min, max}
+
+// `parallel(a, b)` is assumed to be the course's fork-join construct that
+// evaluates both arguments in parallel and returns their results as a pair.
+// Note: this solution assumes a non-empty array.
+def minMax(a: Array[Int]): (Int, Int) = {
+  val threshold = 10
+
+  def minMaxPar(from: Int, until: Int): (Int, Int) = {
+
+    if (until - from <= threshold) {
+      var i = from
+      var min = a(from)
+      var max = a(from)
+
+      while (i < until) {
+        val x = a(i)
+        if(x < min) min = x
+        if(x > max) max = x
+        i = i + 1
+      }
+
+      (min, max)
+    } else {
+      val mid = from + ((until - from) / 2)
+      val ((xMin, xMax),
+           (yMin, yMax)) = parallel(minMaxPar(from, mid),
+                                    minMaxPar(mid, until))
+      (min(xMin, yMin), max(xMax, yMax))
+    }
+  }
+
+  minMaxPar(0, a.size)
+}
+```
+
+## Question 2
+
+```scala
+import scala.math.{min, max}
+
+def minMax(data: ParSeq[Int]): (Int, Int) = data.map({
+  (x: Int) => (x, x)
+}).reduce({
+  case ((mn1, mx1), (mn2, mx2)) => (min(mn1, mn2), max(mx1, mx2))
+})
+```
+
+Or:
+
+```scala
+def minMax(data: ParSeq[Int]): (Int, Int) =
+  (data.reduce(min), data.reduce(max))
+```
+
+## Question 3
+
+The function `f` must be associative. That is, for any `x`, `y`, `z`, it should be the case that:
+
+```
+f(x, f(y, z)) == f(f(x, y), z)
+```
+
+Both the `min` and `max` functions are associative. In addition, it can easily be shown that the pairwise application of associative functions is itself associative. From this it follows that `f` is indeed associative.
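+
+To spell the middle step out, write `f` on pairs and use the associativity of `min` and `max` componentwise:
+
+```
+f((a1, b1), f((a2, b2), (a3, b3)))
+  == (min(a1, min(a2, a3)), max(b1, max(b2, b3)))
+  == (min(min(a1, a2), a3), max(max(b1, b2), b3))
+  == f(f((a1, b1), (a2, b2)), (a3, b3))
+```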