From 7cb38ecd453af362f85ccc52f5ddca82ed50e988 Mon Sep 17 00:00:00 2001
From: Samuel Chassot <samuel.chassot@epfl.ch>
Date: Fri, 28 Feb 2025 17:03:02 +0100
Subject: [PATCH] remove dead doc

---
 info/labs/lab02/lab02-README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/info/labs/lab02/lab02-README.md b/info/labs/lab02/lab02-README.md
index 7cbc6fb..b8d968f 100644
--- a/info/labs/lab02/lab02-README.md
+++ b/info/labs/lab02/lab02-README.md
@@ -75,7 +75,7 @@ The `Lexer` has the following components:
 * The public method is `run`. It just calls `lexer.spawn(source)` for every input file and concatenates the results.
 * `lexer` is the Silex-based definition of tokenization rules. Each rule corresponds to a regular expression matching a prefix of the remaining program input. Silex will compose all of these rules into one finite state machine and apply the maximum-munch or longest match rule you've seen in class.
 * Whenever a rule is found to match a (maximal) prefix of the remaining input, Silex will call the transformation function provided using the `|>` operator in the rule. This function is given the matched input characters (`cs`) along with positional information (`range`) and should then produce an instance of `Token`. You can find its definition in `Tokens.scala`, which includes a list of all the different kinds of tokens that your Amy compiler should process. For instance, KeywordToken(`if`) represents an occurrence of the reserved word `if` in a program.
-For more details on how to write new rules, read the short introduction to Silex's API at the top of `Lexer.scala` or consider the examples on the Scallion website. You can also refer to [Silex's Scaladoc page](https://epfl-lara.github.io/silex/).
+For more details on how to write new rules, read the short introduction to Silex's API at the top of `Lexer.scala` or consider the examples on the Scallion website.
 
 Your task is to complete the rules in `Lexer.scala` and implement the filtering of irrelevant tokens.
 
-- 
GitLab
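
The README excerpt above describes rules built with the `|>` operator that turn the matched characters (`cs`) and positional information (`range`) into a `Token`. A rough sketch of such a rule is shown below; the `word` combinator, the `|` alternation, and the `setPos` helper are illustrative assumptions, not confirmed by this patch, so refer to `Lexer.scala` and `Tokens.scala` for the actual API.

```scala
// Sketch of one tokenization rule in the style the excerpt describes.
// `word` and `setPos` are assumed helpers; only `|>`, `cs`, `range` and
// `KeywordToken` come from the README text itself.
word("if") | word("else") |> { (cs, range) =>
  // cs: the matched characters, range: positional information
  KeywordToken(cs.mkString).setPos(range)
}
```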