diff --git a/units/en/unitbonus1/how-huggy-works.mdx b/units/en/unitbonus1/how-huggy-works.mdx
index 8ff0660..3887a7e 100644
--- a/units/en/unitbonus1/how-huggy-works.mdx
+++ b/units/en/unitbonus1/how-huggy-works.mdx
@@ -1,30 +1,30 @@
# How Huggy works? [[how-huggy-works]]
Huggy is a Deep Reinforcement Learning environment made by Hugging Face and based on [Puppo the Corgi, a project by the Unity MLAgents team](https://blog.unity.com/technology/puppo-the-corgi-cuteness-overload-with-the-unity-ml-agents-toolkit).
-This environment was created using the [Unity game engine](https://unity.com/) and [MLAgents](https://github.com/Unity-Technologies/ml-agents). ML-Agents is a toolkit for the game engine Unity that allows us to **create environments using Unity or use pre-made environments to train our agents**.
+This environment was created using the [Unity game engine](https://unity.com/) and [MLAgents](https://github.com/Unity-Technologies/ml-agents). ML-Agents is a toolkit for the Unity game engine that allows us to **create environments using Unity or use pre-made environments to train our agents**.
-So, in this environment, we aim to train Huggy to **fetch the stick we throw at him. It means he needs to move correctly toward the stick**.
+In this environment, we aim to train Huggy to **fetch the stick we throw. This means he needs to move correctly toward the stick**.
## The State Space: what Huggy "perceives." [[state-space]]
Huggy doesn't "see" his environment. Instead, we provide him information about the environment:
-The target (stick) position
-The relative position between himself and the target
-The orientation of his legs.
-Given all this information, Huggy can decide which action to take next to fulfill his goal.
+* The target (stick) position
+* The relative position between himself and the target
+* The orientation of his legs
+Given all this information, Huggy can use his policy to determine which action to take next to fulfill his goal.
-## The Action Space: what moves Huggy can do [[action-space]]
+## The Action Space: what moves Huggy can perform [[action-space]]
-**Joint motors drive huggy legs**. It means that to get the target, Huggy needs to **learn to rotate the joint motors of each of his legs correctly so he can move**.
+**Joint motors drive Huggy's legs**. This means that to reach the target, Huggy needs to **learn to rotate the joint motors of each of his legs correctly so he can move**.
## The Reward Function [[reward-function]]
-The reward function is designed so that **Huggy will fulfill his goal** : fetch the stick.
+The reward function is designed so that **Huggy will fulfill his goal**: fetch the stick.
Remember that one of the foundations of Reinforcement Learning is the *reward hypothesis*: a goal can be described as the **maximization of the expected cumulative reward**.
@@ -43,7 +43,7 @@ If you want to see what this reward function looks like mathematically, check [P
## Train Huggy
-Huggy aims **to learn to run correctly and as fast as possible toward the goal**. To do that, he needs at every step, given the observation he gets to decide how to rotate each joint motor of his legs to move correctly (not spinning too much) and towards the goal.
+Huggy aims **to learn to run correctly and as fast as possible toward the goal**. To do that, at every step and given the environment observation, he needs to decide how to rotate each joint motor of his legs to move correctly (without spinning too much) toward the goal.
The training loop looks like this:
@@ -61,6 +61,6 @@ We built **multiple copies of the environment for the training**. This helps spe
Now that you have the big picture of the environment, you're ready to train Huggy to fetch the stick.
-To do that, we're going to use [MLAgents](https://github.com/Unity-Technologies/ml-agents). Don't worry if you have never used it before. We're going on this unit to train on a Google Colab notebook and then you'll be able to load your trained Huggy and play with him directly in the browser.
+To do that, we're going to use [MLAgents](https://github.com/Unity-Technologies/ml-agents). Don't worry if you have never used it before. In this unit, we'll use Google Colab to train Huggy, and then you'll be able to load your trained Huggy and play with him directly in the browser.
In a future unit, we will study more in-depth MLAgents and how it works. But for now, we keep things simple by just using the provided implementation.
diff --git a/units/en/unitbonus1/introduction.mdx b/units/en/unitbonus1/introduction.mdx
index ff74e3c..68a57f9 100644
--- a/units/en/unitbonus1/introduction.mdx
+++ b/units/en/unitbonus1/introduction.mdx
@@ -1,6 +1,6 @@
# Introduction [[introduction]]
-In this bonus unit, we'll reinforce what we learn in the first unit by teaching Huggy the Dog to fetch the stick and then [play with him directly in your browser](https://huggingface.co/spaces/ThomasSimonini/Huggy).
+In this bonus unit, we'll reinforce what we learned in the first unit by teaching Huggy the Dog to fetch the stick and then [play with him directly in your browser](https://huggingface.co/spaces/ThomasSimonini/Huggy) 🐶