diff --git a/units/en/unit1/rl-framework.mdx b/units/en/unit1/rl-framework.mdx
index 1af2291..fbba374 100644
--- a/units/en/unit1/rl-framework.mdx
+++ b/units/en/unit1/rl-framework.mdx
@@ -61,6 +61,8 @@ In a chess game, we have access to the whole board information, so we receive a
In Super Mario Bros, we only see the part of the level close to the player, so we receive an observation.
In Super Mario Bros, we are in a partially observed environment. We receive an observation **since we only see a part of the level.**
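The state/observation distinction above can be sketched in a few lines of Python; the level layout, marker characters, and window size here are made-up illustrations, not anything from the course:

```python
# Toy level: 'M' marks the player, 'F' the flag (hypothetical layout).
level = list("....M....F....")

# State: the agent sees the entire environment (like the chess board).
def get_state(level):
    return level

# Observation: the agent only sees a window around its own position
# (like the visible slice of a Super Mario Bros level).
def get_observation(level, pos, radius=2):
    return level[max(0, pos - radius): pos + radius + 1]
```

With `pos = level.index("M")`, the observation is only the five tiles around the player, while the state is all fourteen.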
@@ -85,6 +87,8 @@ The actions can come from a *discrete* or *continuous space*:
+Again, in Super Mario Bros, the set of actions is finite, since we can only move in 4 directions.
+
- *Continuous space*: the number of possible actions is **infinite**.
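The discrete/continuous distinction can be made concrete with a short Python sketch; the action names and the steering-angle range are illustrative assumptions:

```python
import random

# Discrete space: a finite set of actions, like the 4 directions
# mentioned for Super Mario Bros (names are illustrative).
DISCRETE_ACTIONS = ["left", "right", "up", "down"]

def sample_discrete_action():
    # Only len(DISCRETE_ACTIONS) possible outcomes.
    return random.choice(DISCRETE_ACTIONS)

# Continuous space: infinitely many actions, e.g. a steering angle
# anywhere in [-20.0, 20.0] degrees (hypothetical range).
def sample_continuous_action():
    return random.uniform(-20.0, 20.0)
```

Enumerating every action works only in the discrete case; in the continuous case there is no finite list to enumerate.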
diff --git a/units/en/unit1/two-methods.mdx b/units/en/unit1/two-methods.mdx
index 34ddab8..fcfc04a 100644
--- a/units/en/unit1/two-methods.mdx
+++ b/units/en/unit1/two-methods.mdx
@@ -82,6 +82,8 @@ Here we see that our value function **defined values for each possible state.**
Thanks to our value function, at each step our policy will select the state with the biggest value defined by the value function: -7, then -6, then -5 (and so on) to attain the goal.
If we recap:
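The greedy selection described in that hunk (pick the state valued -7, then -6, then -5, and so on) can be sketched as follows; the state names and values are hypothetical, with each value being minus the number of steps remaining to the goal:

```python
# Hypothetical state values, matching the -7, -6, -5 progression
# in the text (value = minus the number of steps to the goal).
state_values = {"A": -7, "B": -6, "C": -5, "D": -4, "goal": 0, "trap": -8}

def greedy_policy(reachable_states, values):
    """Move to the reachable state with the highest value."""
    return max(reachable_states, key=lambda s: values[s])
```

From state "A", if "B" (value -6) and "trap" (value -8) are both reachable, the policy selects "B", because -6 is the larger value.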