In this exercise, your group will devise a parallel algorithm to encode sequences using the run-length encoding scheme. This encoding transforms sequences of letters by replacing every run of consecutive identical letters with the letter followed by the length of that run. For instance:
```
"AAAAATTTGGGGTCCCAAC" ⇒ "A5T3G4T1C3A2C1"
"AAAAATTTGGGGTCCCAAC" ⇒ "A5T3G4T1C3A2C1"
...
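As a purely sequential point of reference (not part of the handout, and not a solution to the parallel exercise), a straightforward run-length encoder over a string could look like the sketch below; the name `rle` and its signature are assumptions made for this illustration.

```scala
// Illustrative sequential run-length encoder; name and signature are assumptions.
def rle(s: String): String =
  val sb = new StringBuilder
  var i = 0
  while i < s.length do
    var j = i
    while j < s.length && s(j) == s(i) do j += 1 // extend the current run
    sb.append(s(i)).append(j - i)                // emit the letter and the run length
    i = j
  sb.toString
```

For example, `rle("AAAAATTTGGGGTCCCAAC")` yields `"A5T3G4T1C3A2C1"`, matching the example above.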
In this exercise, you will implement an array Combiner backed internally by a doubly linked list (DLL). Below is a minimal implementation of the `DLLCombiner` class and the related `Node` class. Your goal for this exercise is to complete the implementation of the (simplified) Combiner interface of the `DLLCombiner` class.
```scala
class DLLCombiner[A] extends Combiner[A, Array[A]]:

  var head: Node[A] = null // null for empty lists.
  var last: Node[A] = null // null for empty lists.
  var size: Int = 0
...
```
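If the simplified interface includes an element-append operation such as `+=` (an assumption; the exact set of methods is in the elided part of the listing), one constant-time way to write it, assuming `Node[A]` is constructed from a value and exposes mutable `previous` and `next` links, is sketched below.

```scala
// Inside DLLCombiner[A] — a hedged sketch only, assuming Node[A] carries a value
// and mutable `previous`/`next` links (see the elided Node class).
def +=(elem: A): Unit =
  val node = new Node(elem)
  if size == 0 then
    head = node
    last = node
  else
    last.next = node
    node.previous = last
    last = node
  size += 1
```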
**Question 1:** What computational complexity do your methods have? Are the actual complexities of your methods acceptable according to the `Combiner` requirements?
...

The pipeline `p` is itself a function. Given a value `x`, the pipeline `p` will compute:
```
p(x) = (((x + 1)   Application of first function
           * 2)    Application of second function
           + 3)    Application of third function
           / 4     Application of fourth function
```
In this exercise, we will investigate the possibility of processing such pipelines in parallel.
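For concreteness (a sketch only; the handout's actual `toPipeline` signature may differ), such a pipeline could be built by folding the functions with ordinary composition:

```scala
// Hedged sketch: a pipeline as the left-to-right composition of an array of functions.
// The real handout API may differ in signature and element type.
def toPipeline(fs: Array[Int => Int]): Int => Int =
  fs.reduce(_ andThen _)

val p = toPipeline(Array[Int => Int](_ + 1, _ * 2, _ + 3, _ / 4))
// p(5) == (((5 + 1) * 2) + 3) / 4 == 3
```

Because function composition is associative, building the pipeline with `reduce` already hints that the construction itself could be done as a parallel reduction; whether that helps, and whether the resulting pipeline can also be applied in parallel, is what the questions below explore.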
...
**Question 3:** Instead of arbitrary functions, we will now consider functions that are constant everywhere except on a finite domain. We represent such functions in the following way:

Implement the `andThen` method. Can pipelines of such finite functions be efficiently constructed in parallel using the appropriately modified `toPipeline` method? Can the resulting pipelines be efficiently executed?
...
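The handout's concrete representation is elided above; purely as a hypothetical stand-in, a function that is constant outside a finite domain could be represented by a default value plus a finite map, in which case `andThen` stays closed under the representation:

```scala
// Hypothetical representation (NOT the handout's definition): the function
// returns `default` everywhere except on the keys of `mapping`.
case class FiniteFun[A](mapping: Map[A, A], default: A):
  def apply(x: A): A = mapping.getOrElse(x, default)

  // Outside this function's domain the composition is the constant that(default);
  // on the domain, each stored result is simply mapped through `that`.
  def andThen(that: FiniteFun[A]): FiniteFun[A] =
    FiniteFun(mapping.map((k, v) => k -> that(v)), that(default))
```

Under a representation of this shape, composing two functions costs work proportional to the size of the first function's domain, which bears directly on whether such pipelines can be constructed and executed efficiently.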
**Question 4:** Compare the *work* and *depth* of the following two functions, assuming infinite parallelism. For which kind of input would the parallel version be asymptotically faster?