diff --git a/units/en/unit1/hands-on.mdx b/units/en/unit1/hands-on.mdx
index a7a10f6..08b1e8a 100644
--- a/units/en/unit1/hands-on.mdx
+++ b/units/en/unit1/hands-on.mdx
@@ -36,7 +36,7 @@ So let's get started! ๐
We strongly **recommend students use Google Colab for the hands-on exercises** instead of running them on their personal computers.
-By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects** of setting up your environments.
+By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects** of setting up your environments.
# Unit 1: Train your first Deep Reinforcement Learning Agent ๐ค
@@ -44,9 +44,10 @@ By using Google Colab, **you can focus on learning and experimenting without wor
In this notebook, you'll train your **first Deep Reinforcement Learning agent**, a Lunar Lander agent that will learn to **land correctly on the Moon ๐**, using [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/), a Deep Reinforcement Learning library. You'll then share it with the community and experiment with different configurations.
+
### The environment ๐ฎ
-- [LunarLander-v2](https://www.gymlibrary.dev/environments/box2d/lunar_lander/)
+- [LunarLander-v2](https://gymnasium.farama.org/environments/box2d/lunar_lander/)
### The library used ๐
@@ -58,40 +59,44 @@ We're constantly trying to improve our tutorials, so **if you find some issues i
At the end of the notebook, you will:
-- Be able to use **Gym**, the environment library.
+- Be able to use **Gymnasium**, the environment library.
- Be able to use **Stable-Baselines3**, the deep reinforcement learning library.
- Be able to **push your trained agent to the Hub** with a nice video replay and an evaluation score ๐ฅ.
+## This notebook is from the Deep Reinforcement Learning Course
-
-## This hands-on is from Deep Reinforcement Learning Course
In this free course, you will:
- ๐ Study Deep Reinforcement Learning in **theory and practice**.
- ๐งโ๐ป Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.
-- ๐ค Train **agents in unique environments**
+- ๐ค Train **agents in unique environments**
+- ๐ **Earn a certificate of completion** by completing 80% of the assignments.
-And more: check ๐ the syllabus ๐ https://simoninithomas.github.io/deep-rl-course
+And more!
+
+Check ๐ the syllabus ๐ https://simoninithomas.github.io/deep-rl-course
Don't forget to **sign up to the course** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates**).
-
-The best way to keep in touch and ask questions is to join our discord server to exchange with the community and with us ๐๐ป https://discord.gg/ydHrjt3WP5
+The best way to keep in touch and ask questions is **to join our Discord server** to exchange with the community and with us ๐๐ป https://discord.gg/ydHrjt3WP5
## Prerequisites ๐๏ธ
+
Before diving into the notebook, you need to:
-๐ฒ ๐ **Read Unit 0** that gives you all the **information about the course and helps you to onboard** ๐ค
+๐ฒ ๐ **[Read Unit 0](https://huggingface.co/deep-rl-course/unit0/introduction)** that gives you all the **information about the course and helps you to onboard** ๐ค
-๐ฒ ๐ **Develop an understanding of the foundations of Reinforcement learning** by reading Unit 1
+๐ฒ ๐ **Develop an understanding of the foundations of Reinforcement learning** (MC, TD, Rewards hypothesis...) by [reading Unit 1](https://huggingface.co/deep-rl-course/unit1/introduction).
## A small recap of Deep Reinforcement Learning ๐
+
Let's do a small recap on what we learned in the first Unit:
+
- Reinforcement Learning is a **computational approach to learning from actions**. We build an agent that learns from the environment by **interacting with it through trial and error** and receiving rewards (negative or positive) as feedback.
- The goal of any RL agent is to **maximize its expected cumulative reward** (also called expected return) because RL is based on the _reward hypothesis_, which is that all goals can be described as the maximization of an expected cumulative reward.
@@ -102,8 +107,8 @@ Let's do a small recap on what we learned in the first Unit:
- To solve an RL problem, you want to **find an optimal policy**; the policy is the "brain" of your AI that tells you what action to take given a state. The optimal policy is the one that gives you the actions that maximize the expected return.
-
There are **two** ways to find your optimal policy:
+
- By **training your policy directly**: policy-based methods.
- By **training a value function** that tells us the expected return the agent will get at each state and use this function to define our policy: value-based methods.
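To make the distinction concrete, here is a minimal, hypothetical sketch (the states, actions, and values below are made up) of how a value-based method derives its policy from a value function:

```python
# Hypothetical action-value table: expected return for each (state, action) pair
q_values = {
    ("s0", "left"): 1.0,
    ("s0", "right"): 3.0,
}

def greedy_policy(state, actions=("left", "right")):
    # A value-based method acts greedily with respect to its value function
    return max(actions, key=lambda a: q_values[(state, a)])

print(greedy_policy("s0"))  # -> right
```

A policy-based method like the one we use below would instead parameterize and train the policy itself, outputting a probability distribution over actions.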
@@ -111,8 +116,16 @@ There are **two** ways to find your optimal policy:
# Let's train our first Deep Reinforcement Learning agent and upload it to the Hub ๐
+## Get a certificate ๐
+
+To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push your trained model to the Hub and **get a result of >= 200**.
+
+To find your result, go to the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) and find your model; **the result = mean_reward - std of reward**.
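As a sketch with made-up numbers, the leaderboard result is computed like this:

```python
# Hypothetical evaluation numbers, just to illustrate the certification rule
mean_reward = 262.5
std_reward = 15.3

result = mean_reward - std_reward  # result = mean_reward - std of reward
print(f"result = {result:.1f}")    # result = 247.2
print(result >= 200)               # True -> this hands-on would be validated
```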
+
+For more information about the certification process, check this section ๐ https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
## Set the GPU ๐ช
+
- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`
@@ -122,43 +135,43 @@ There are **two** ways to find your optimal policy:
## Install dependencies and create a virtual screen ๐ฝ
+
The first step is to install the dependencies; we'll install multiple ones:
-- `gym[box2D]`: Contains the LunarLander-v2 environment ๐ (we use `gym==0.21`)
+- `gymnasium[box2d]`: Contains the LunarLander-v2 environment ๐
- `stable-baselines3[extra]`: The deep reinforcement learning library.
- `huggingface_sb3`: Additional code for Stable-baselines3 to load and upload models from the Hugging Face ๐ค Hub.
To make things easier, we created a script to install all these dependencies.
-```python
-!apt install swig cmake
+```bash
+apt install swig cmake
```
-```python
-!pip install -r https://raw.githubusercontent.com/huggingface/deep-rl-class/main/notebooks/unit1/requirements-unit1.txt
+```bash
+pip install -r https://raw.githubusercontent.com/huggingface/deep-rl-class/main/notebooks/unit1/requirements-unit1.txt
```
-During the notebook, we'll need to generate a replay video. To do so, with colab, **we need to have a virtual screen to be able to render the environment** (and thus record the frames).
+During the notebook, we'll need to generate a replay video. To do so, with Colab, **we need to have a virtual screen to be able to render the environment** (and thus record the frames).
Hence, the following cell will install the virtual screen libraries, and then create and run a virtual screen ๐ฅ
-```python
-!sudo apt-get update
-!apt install python-opengl
-!apt install ffmpeg
-!apt install xvfb
-!pip3 install pyvirtualdisplay
+```bash
+sudo apt-get update
+apt install python-opengl
+apt install ffmpeg
+apt install xvfb
+pip3 install pyvirtualdisplay
```
To make sure the newly installed libraries are used, **it's sometimes required to restart the notebook runtime**. The next cell will force the **runtime to crash, so you'll need to connect again and run the code starting from here**. Thanks to this trick, **we will be able to run our virtual screen.**
-
```python
import os
+
os.kill(os.getpid(), 9)
```
-
```python
# Virtual display
from pyvirtualdisplay import Display
@@ -174,14 +187,14 @@ One additional library we import is huggingface_hub **to be able to upload and d
The Hugging Face Hub ๐ค works as a central place where anyone can share and explore models and datasets. It has versioning, metrics, visualizations and other features that will allow you to easily collaborate with others.
-You can see all the Deep reinforcement Learning models available here ๐ https://huggingface.co/models?pipeline_tag=reinforcement-learning&sort=downloads
+You can see all the Deep Reinforcement Learning models available here ๐ https://huggingface.co/models?pipeline_tag=reinforcement-learning&sort=downloads
```python
-import gym
+import gymnasium
-from huggingface_sb3 import load_from_hub, package_to_hub, push_to_hub
+from huggingface_sb3 import load_from_hub, package_to_hub
from huggingface_hub import (
notebook_login,
) # To log to our Hugging Face account to be able to upload models to the Hub.
@@ -191,12 +204,15 @@ from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_vec_env
```
-## Understand Gym and how it works ๐ค
+## Understand Gymnasium and how it works ๐ค
-๐ The library containing our environment is called Gym.
-**You'll use Gym a lot in Deep Reinforcement Learning.**
+๐ The library containing our environment is called Gymnasium.
+**You'll use Gymnasium a lot in Deep Reinforcement Learning.**
+
+Gymnasium is the **new version of the Gym library** [maintained by the Farama Foundation](https://farama.org/).
+
+The Gymnasium library provides two things:
-The Gym library provides two things:
- An interface that allows you to **create RL environments**.
- A **collection of environments** (gym-control, atari, box2D...).
@@ -211,9 +227,9 @@ At each step:
- The environment gives some **reward (R1)** to the Agent: we're not dead *(Positive Reward +1)*.
-With Gym:
+With Gymnasium:
-1๏ธโฃ We create our environment using `gym.make()`
+1๏ธโฃ We create our environment using `gymnasium.make()`
2๏ธโฃ We reset the environment to its initial state with `observation, info = env.reset()`
@@ -224,23 +240,26 @@ At each step:
4๏ธโฃ Using `env.step(action)`, we perform this action in the environment and get
- `observation`: The new state (st+1)
- `reward`: The reward we get after executing the action
-- `done`: Indicates if the episode terminated
+- `terminated`: Indicates if the episode terminated (the agent reached the terminal state)
+- `truncated`: Introduced in this new version, it indicates that the episode was cut short, for instance by a time limit or by the agent going out of bounds of the environment.
- `info`: A dictionary that provides additional information (depends on the environment).
-If the episode is done:
+For more explanations, check this ๐ https://gymnasium.farama.org/api/env/#gymnasium.Env.step
+
+If the episode is terminated:
- We reset the environment to its initial state with `observation, info = env.reset()`
**Let's look at an example!** Make sure to read the code:
```python
-import gym
+import gymnasium as gym
# First, we create our environment called LunarLander-v2
env = gym.make("LunarLander-v2")
# Then we reset this environment
-observation = env.reset()
+observation, info = env.reset()
for _ in range(20):
# Take a random action
@@ -248,29 +267,30 @@ for _ in range(20):
print("Action taken:", action)
# Do this action in the environment and get
- # next_state, reward, done and info
- observation, reward, done, info = env.step(action)
+ # next_state, reward, terminated, truncated and info
+ observation, reward, terminated, truncated, info = env.step(action)
- # If the game is done (in our case we land, crashed or timeout)
- if done:
+ # If the game is terminated (in our case we land, crashed) or truncated (timeout)
+ if terminated or truncated:
# Reset the environment
print("Environment is reset")
- observation = env.reset()
+ observation, info = env.reset()
+
+env.close()
```
## Create the LunarLander environment ๐ and understand how it works
-### The environment ๐ฎ
-
-In this first tutorial, weโre going to train our agent, a [Lunar Lander](https://www.gymlibrary.dev/environments/box2d/lunar_lander/), **to land correctly on the moon**. To do that, the agent needs to learn **to adapt its speed and position (horizontal, vertical, and angular) to land correctly.**
+### [The environment ๐ฎ](https://gymnasium.farama.org/environments/box2d/lunar_lander/)
+In this first tutorial, we're going to train our agent, a [Lunar Lander](https://gymnasium.farama.org/environments/box2d/lunar_lander/), **to land correctly on the moon**. To do that, the agent needs to learn **to adapt its speed and position (horizontal, vertical, and angular) to land correctly.**
---
-๐ก A good habit when you start to use an environment is to check its documentation
+๐ก A good habit when you start to use an environment is to check its documentation
-๐ https://www.gymlibrary.dev/environments/box2d/lunar_lander/
+๐ https://gymnasium.farama.org/environments/box2d/lunar_lander/
---
@@ -304,19 +324,29 @@ print("Action Space Shape", env.action_space.n)
print("Action Space Sample", env.action_space.sample()) # Take a random action
```
-The action space (the set of possible actions the agent can take) is discrete with 4 actions available ๐ฎ:
+The action space (the set of possible actions the agent can take) is discrete with 4 actions available ๐ฎ:
-- Do nothing,
-- Fire left orientation engine,
-- Fire the main engine,
-- Fire right orientation engine.
+- Action 0: Do nothing,
+- Action 1: Fire left orientation engine,
+- Action 2: Fire the main engine,
+- Action 3: Fire right orientation engine.
Reward function (the function that gives a reward at each timestep) ๐ฐ:
-- Moving from the top of the screen to the landing pad and zero speed is about 100~140 points.
-- Firing main engine is -0.3 each frame
-- Each leg making ground contact is +10 points
-- Episode finishes if the lander crashes (additional - 100 points) or comes to rest (+100 points)
+After every step a reward is granted. The total reward of an episode is the **sum of the rewards for all the steps within that episode**.
+
+For each step, the reward:
+
+- Is increased/decreased the closer/further the lander is to the landing pad.
+- Is increased/decreased the slower/faster the lander is moving.
+- Is decreased the more the lander is tilted (angle not horizontal).
+- Is increased by 10 points for each leg that is in contact with the ground.
+- Is decreased by 0.03 points each frame a side engine is firing.
+- Is decreased by 0.3 points each frame the main engine is firing.
+
+The episode receives an **additional reward of -100 or +100 points for crashing or landing safely, respectively.**
+
+An episode is **considered a solution if it scores at least 200 points.**
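Putting the pieces above together, a hypothetical episode score (the per-step rewards below are made-up values) is computed as:

```python
# Made-up per-step rewards: shaping terms, engine costs, leg contacts...
step_rewards = [1.2, 0.8, -0.3, 10.0, 10.0]
landing_bonus = 100.0  # +100 for landing safely, -100 for crashing

# The total reward of an episode is the sum of the per-step rewards,
# plus the terminal bonus/penalty
episode_score = sum(step_rewards) + landing_bonus
print(f"episode score = {episode_score:.1f}")
print(episode_score >= 200)  # -> False: solved only at 200 points or more
```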
#### Vectorized Environment
@@ -341,16 +371,14 @@ env = make_vec_env("LunarLander-v2", n_envs=16)
----
-
+
-To solve this problem, we're going to use SB3 **PPO**. [PPO (aka Proximal Policy Optimization) is one of the the SOTA (state of the art) Deep Reinforcement Learning algorithms that you'll study during this course](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html#example%5D).
+To solve this problem, we're going to use SB3 **PPO**. [PPO (aka Proximal Policy Optimization) is one of the SOTA (state of the art) Deep Reinforcement Learning algorithms that you'll study during this course](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html#example%5D).
PPO is a combination of:
-
- *Value-based reinforcement learning method*: learning an action-value function that will tell us the **most valuable action to take given a state and action**.
- *Policy-based reinforcement learning method*: learning a policy that will **give us a probability distribution over actions**.
-
Stable-Baselines3 is easy to set up:
1๏ธโฃ You **create your environment** (in our case it was done above)
@@ -361,12 +389,10 @@ Stable-Baselines3 is easy to set up:
```
# Create environment
-
env = gym.make('LunarLander-v2')
# Instantiate the agent
model = PPO('MlpPolicy', env, verbose=1)
-
# Train the agent
model.learn(total_timesteps=int(2e5))
```
@@ -399,7 +425,6 @@ model = PPO(
```
## Train the PPO agent ๐
-
- Let's train our agent for 1,000,000 timesteps; don't forget to use the GPU on Colab. It will take approximately 20 minutes, but you can use fewer timesteps if you just want to try it out.
- During the training, take a coffee break, you deserved it ๐ค
@@ -437,7 +462,7 @@ model.save(model_name)
eval_env =
# Evaluate the model with 10 evaluation episodes and deterministic=True
-mean_reward, std_reward =
+mean_reward, std_reward =
# Print the results
```
@@ -451,10 +476,10 @@ mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, d
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
-- In my case, I got a mean reward of `200.20 +/- 20.80` after training for 1 million steps, which means that our lunar lander agent is ready to land on the moon ๐๐ฅณ.
+- In my case, I got a mean reward of `200.20 +/- 20.80` after training for 1 million steps, which means that our lunar lander agent is ready to land on the moon ๐๐ฅณ.
## Publish our trained model on the Hub ๐ฅ
-Now that we saw we got good results after the training, we can publish our trained model on the Hub ๐ค with one line of code.
+Now that we saw we got good results after the training, we can publish our trained model on the Hub ๐ค with one line of code.
๐ The libraries documentation ๐ https://github.com/huggingface/huggingface_sb3/tree/main#hugging-face--x-stable-baselines3-v20
@@ -471,14 +496,14 @@ This way:
To be able to share your model with the community there are three more steps to follow:
-1๏ธโฃ (If it's not already done) create an account on HF โก https://huggingface.co/join
+1๏ธโฃ (If it's not already done) create an account on Hugging Face โก https://huggingface.co/join
-2๏ธโฃ Sign in and then you need to store your authentication token from the Hugging Face website.
+2๏ธโฃ Sign in, and then store your authentication token from the Hugging Face website.
- Create a new token (https://huggingface.co/settings/tokens) **with write role**
-- Copy the token
+- Copy the token
- Run the cell below and paste the token
```python
@@ -498,12 +523,12 @@ Let's fill the `package_to_hub` function:
- `eval_env`: the evaluation environment defined in eval_env
- `repo_id`: the name of the Hugging Face Hub Repository that will be created/updated `(repo_id = {username}/{repo_name})`
-๐ก **A good name is `{username}/{model_architecture}-{env_id}` **
+๐ก **A good name is {username}/{model_architecture}-{env_id}**
- `commit_message`: message of the commit
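For instance, the naming convention above can be assembled with an f-string (the username below is a placeholder):

```python
username = "your-hf-username"  # placeholder: replace with your Hugging Face username
model_architecture = "ppo"
env_id = "LunarLander-v2"

# {username}/{model_architecture}-{env_id}
repo_id = f"{username}/{model_architecture}-{env_id}"
print(repo_id)  # your-hf-username/ppo-LunarLander-v2
```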
```python
-import gym
+import gymnasium as gym
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.env_util import make_vec_env
@@ -511,13 +536,13 @@ from huggingface_sb3 import package_to_hub
## TODO: Define a repo_id
## repo_id is the id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name}), for instance ThomasSimonini/ppo-LunarLander-v2
-repo_id =
+repo_id =
# TODO: Define the name of the environment
-env_id =
+env_id =
-# Create the evaluation env
-eval_env = DummyVecEnv([lambda: gym.make(env_id)])
+# Create the evaluation env and set the render_mode="rgb_array"
+eval_env = DummyVecEnv([lambda: gym.make(env_id, render_mode="rgb_array")])
# TODO: Define the model architecture we used
@@ -528,23 +553,19 @@ commit_message = ""
# method save, evaluate, generate a model card and record a replay video of your agent before pushing the repo to the hub
package_to_hub(model=model, # Our trained model
- model_name=model_name, # The name of our trained model
+ model_name=model_name, # The name of our trained model
model_architecture=model_architecture, # The model architecture we used: in our case PPO
env_id=env_id, # Name of the environment
eval_env=eval_env, # Evaluation Environment
repo_id=repo_id, # id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name} for instance ThomasSimonini/ppo-LunarLander-v2
commit_message=commit_message)
-
-# Note: if after running the package_to_hub function and it gives an issue of rebasing, please run the following code
-# cd && git add . && git commit -m "Add message" && git pull
-# And don't forget to do a "git push" at the end to push the change to the hub.
```
#### Solution
```python
-import gym
+import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
@@ -567,8 +588,8 @@ repo_id = "ThomasSimonini/ppo-LunarLander-v2" # Change with your repo id, you c
## Define the commit message
commit_message = "Upload PPO LunarLander-v2 trained agent"
-# Create the evaluation env
-eval_env = DummyVecEnv([lambda: gym.make(env_id)])
+# Create the evaluation env and set the render_mode="rgb_array"
+eval_env = DummyVecEnv([lambda: gym.make(env_id, render_mode="rgb_array")])
# PLACE the package_to_hub function you've just filled here
package_to_hub(
@@ -583,10 +604,10 @@ package_to_hub(
```
Congrats ๐ฅณ you've just trained and uploaded your first Deep Reinforcement Learning agent. The script above should have displayed a link to a model repository such as https://huggingface.co/osanseviero/test_sb3. When you go to this link, you can:
-* see a video preview of your agent at the right.
-* click "Files and versions" to see all the files in the repository.
-* click "Use in stable-baselines3" to get a code snippet that shows how to load the model.
-* a model card (`README.md` file) which gives a description of the model
+* See a video preview of your agent at the right.
+* Click "Files and versions" to see all the files in the repository.
+* Click "Use in stable-baselines3" to get a code snippet that shows how to load the model.
+* See a model card (`README.md` file), which gives a description of the model.
Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent.
@@ -595,7 +616,7 @@ Compare the results of your LunarLander-v2 with your classmates using the leader
## Load a saved LunarLander model from the Hub ๐ค
Thanks to [ironbar](https://github.com/ironbar) for the contribution.
-Loading a saved model from the Hub is really easy.
+Loading a saved model from the Hub is really easy.
You go to https://huggingface.co/models?library=stable-baselines3 to see the list of all the Stable-baselines3 saved models.
1. You select one and copy its repo_id
@@ -606,6 +627,14 @@ You go to https://huggingface.co/models?library=stable-baselines3 to see the lis
- The repo_id
- The filename: the saved model inside the repo and its extension (*.zip)
+Because the model we download from the Hub was trained with Gym (the former version of Gymnasium), we need to install shimmy, an API conversion tool that will help us run the environment correctly.
+
+Shimmy Documentation: https://github.com/Farama-Foundation/Shimmy
+
+```bash
+pip install shimmy
+```
+
```python
from huggingface_sb3 import load_from_hub
@@ -637,7 +666,7 @@ print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
## Some additional challenges ๐
-The best way to learn **is to try things by your own**! As you saw, the current agent is not doing great. As a first suggestion, you can train for more steps. With 1,000,000 steps, we saw some great results!
+The best way to learn **is to try things on your own**! As you saw, the current agent is not doing great. As a first suggestion, you can train for more steps. With 1,000,000 steps, we saw some great results!
In the [Leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?
@@ -660,8 +689,9 @@ Take time to really **grasp the material before continuing and try the additiona
Naturally, during the course, we're going to dive deeper into these concepts, but **it's better to have a good understanding of them now before diving into the next chapters.**
+
Next time, in the bonus unit 1, you'll train Huggy the Dog to fetch the stick.
-## Keep learning, stay awesome ๐ค
+## Keep learning, stay awesome ๐ค
\ No newline at end of file