diff --git a/notebooks/unit6/requirements-unit6.txt b/notebooks/unit6/requirements-unit6.txt
index 1c8ffaa..b92d196 100644
--- a/notebooks/unit6/requirements-unit6.txt
+++ b/notebooks/unit6/requirements-unit6.txt
@@ -1,4 +1,3 @@
-stable-baselines3[extra]
+stable-baselines3==2.0.0a4
huggingface_sb3
-panda_gym==2.0.0
-pyglet==1.5.1
+panda-gym
\ No newline at end of file
diff --git a/units/en/unit6/hands-on.mdx b/units/en/unit6/hands-on.mdx
index 9d34e59..4938ca1 100644
--- a/units/en/unit6/hands-on.mdx
+++ b/units/en/unit6/hands-on.mdx
@@ -8,14 +8,10 @@
askForHelpUrl="http://hf.co/join/discord" />
-Now that you've studied the theory behind Advantage Actor Critic (A2C), **you're ready to train your A2C agent** using Stable-Baselines3 in robotic environments. And train two robots:
-
-- A spider 🕷️ to learn to move.
+Now that you've studied the theory behind Advantage Actor Critic (A2C), **you're ready to train your A2C agent** using Stable-Baselines3 in a robotic environment. You'll train:
- A robotic arm 🦾 to move to the correct position.
-We're going to use two Robotics environments:
-
-- [PyBullet](https://github.com/bulletphysics/bullet3)
+We're going to use:
- [panda-gym](https://github.com/qgallouedec/panda-gym)
@@ -23,444 +19,13 @@ We're going to use two Robotics environments:
-To validate this hands-on for the certification process, you need to push your two trained models to the Hub and get the following results:
-- `AntBulletEnv-v0` get a result of >= 650.
-- `PandaReachDense-v2` get a result of >= -3.5.
+To validate this hands-on for the certification process, you need to push your trained model to the Hub and get the following result:
+- `PandaReachDense-v3` with a result of >= -3.5.
To find your result, [go to the leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) and find your model, **the result = mean_reward - std of reward**
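 For example, a mean reward of -2.8 with a reward std of 0.5 gives a result of -3.3, which passes the -3.5 threshold.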
-**If you don't find your model, go to the bottom of the page and click on the refresh button.**
-
For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
**To start the hands-on click on Open In Colab button** 👇 :
[](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit6/unit6.ipynb)
-
-# Unit 6: Advantage Actor Critic (A2C) using Robotics Simulations with PyBullet and Panda-Gym 🤖
-
-### 🎮 Environments:
-
-- [PyBullet](https://github.com/bulletphysics/bullet3)
-- [Panda-Gym](https://github.com/qgallouedec/panda-gym)
-
-### 📚 RL-Library:
-
-- [Stable-Baselines3](https://stable-baselines3.readthedocs.io/)
-
-We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues).
-
-## Objectives of this notebook 🏆
-
-At the end of the notebook, you will:
-
-- Be able to use the environment libraries **PyBullet** and **Panda-Gym**.
-- Be able to **train robots using A2C**.
-- Understand why **we need to normalize the input**.
-- Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score 🔥.
-
-## Prerequisites 🏗️
-Before diving into the notebook, you need to:
-
-🔲 📚 Study [Actor-Critic methods by reading Unit 6](https://huggingface.co/deep-rl-course/unit6/introduction) 🤗
-
-# Let's train our first robots 🤖
-
-## Set the GPU 💪
-
-- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`
-
-
-
-- `Hardware Accelerator > GPU`
-
-
-
-## Create a virtual display 🔽
-
-During the notebook, we'll need to generate a replay video. To do so, in Colab, **we need a virtual screen to be able to render the environment** (and thus record the frames).
-
-The following cell will install the libraries and create and run a virtual screen 🖥
-
-```python
-%%capture
-!apt install python-opengl
-!apt install ffmpeg
-!apt install xvfb
-!pip3 install pyvirtualdisplay
-```
-
-```python
-# Virtual display
-from pyvirtualdisplay import Display
-
-virtual_display = Display(visible=0, size=(1400, 900))
-virtual_display.start()
-```
-
-### Install dependencies 🔽
-The first step is to install the dependencies. We'll install several:
-
-- `pybullet`: Contains the walking robot environments.
-- `panda-gym`: Contains the robotic arm environments.
-- `stable-baselines3[extra]`: The SB3 deep reinforcement learning library.
-- `huggingface_sb3`: Additional code for Stable-Baselines3 to load and upload models from the Hugging Face 🤗 Hub.
-- `huggingface_hub`: Library allowing anyone to work with the Hub repositories.
-
-```bash
-!pip install -r https://raw.githubusercontent.com/huggingface/deep-rl-class/main/notebooks/unit6/requirements-unit6.txt
-```
-
-## Import the packages 📦
-
-```python
-import pybullet_envs
-import panda_gym
-import gym
-
-import os
-
-from huggingface_sb3 import load_from_hub, package_to_hub
-
-from stable_baselines3 import A2C
-from stable_baselines3.common.evaluation import evaluate_policy
-from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
-from stable_baselines3.common.env_util import make_vec_env
-
-from huggingface_hub import notebook_login
-```
-
-## Environment 1: AntBulletEnv-v0 🕸
-
-### Create the AntBulletEnv-v0
-#### The environment 🎮
-
-In this environment, the agent needs to use its different joints correctly in order to walk.
-You can find a detailed explanation of this environment here: https://hackmd.io/@jeffreymo/SJJrSJh5_#PyBullet
-
-```python
-env_id = "AntBulletEnv-v0"
-# Create the env
-env = gym.make(env_id)
-
-# Get the state space and action space
-s_size = env.observation_space.shape[0]
-a_size = env.action_space
-```
-
-```python
-print("_____OBSERVATION SPACE_____ \n")
-print("The State Space is: ", s_size)
-print("Sample observation", env.observation_space.sample()) # Get a random observation
-```
-
-The observation space (from [Jeffrey Y Mo](https://hackmd.io/@jeffreymo/SJJrSJh5_#PyBullet)):
-Note that our observation space has 28 dimensions, not 29.
-
-
-
-
-```python
-print("\n _____ACTION SPACE_____ \n")
-print("The Action Space is: ", a_size)
-print("Action Space Sample", env.action_space.sample()) # Take a random action
-```
-
-The action space (from [Jeffrey Y Mo](https://hackmd.io/@jeffreymo/SJJrSJh5_#PyBullet)):
-
-
-
-
-### Normalize observation and rewards
-
-A good practice in reinforcement learning is to [normalize input features](https://stable-baselines3.readthedocs.io/en/master/guide/rl_tips.html).
-
-For that purpose, there is a wrapper that will compute a running average and standard deviation of input features.
-
-We also normalize rewards with this same wrapper by adding `norm_reward = True`
-
-[You should check the documentation to fill this cell](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#vecnormalize)
-
-```python
-env = make_vec_env(env_id, n_envs=4)
-
-# Adding this wrapper to normalize the observation and the reward
-env = # TODO: Add the wrapper
-```
-
-#### Solution
-
-```python
-env = make_vec_env(env_id, n_envs=4)
-
-env = VecNormalize(env, norm_obs=True, norm_reward=True, clip_obs=10.0)
-```
-
-### Create the A2C Model 🤖
-
-In this case, because we have a vector of 28 values as input, we'll use an MLP (multi-layer perceptron) as the policy.
-
-For more information about the A2C implementation with Stable-Baselines3, check: https://stable-baselines3.readthedocs.io/en/master/modules/a2c.html#notes
-
-To find the best parameters, I checked the [official trained agents by the Stable-Baselines3 team](https://huggingface.co/sb3).
-
-```python
-model = # Create the A2C model and try to find the best parameters
-```
-
-#### Solution
-
-```python
-model = A2C(
- policy="MlpPolicy",
- env=env,
- gae_lambda=0.9,
- gamma=0.99,
- learning_rate=0.00096,
- max_grad_norm=0.5,
- n_steps=8,
- vf_coef=0.4,
- ent_coef=0.0,
- policy_kwargs=dict(log_std_init=-2, ortho_init=False),
- normalize_advantage=False,
- use_rms_prop=True,
- use_sde=True,
- verbose=1,
-)
-```
-
-### Train the A2C agent 🏃
-
-- Let's train our agent for 2,000,000 timesteps. Don't forget to use the GPU on Colab. It will take approximately 25-40 minutes.
-
-```python
-model.learn(2_000_000)
-```
-
-```python
-# Save the model and VecNormalize statistics when saving the agent
-model.save("a2c-AntBulletEnv-v0")
-env.save("vec_normalize.pkl")
-```
-
-### Evaluate the agent 📈
-- Now that our agent is trained, we need to **check its performance**.
-- Stable-Baselines3 provides a method to do that: `evaluate_policy`
-- In my case, I got a mean reward of `2371.90 +/- 16.50`
-
-```python
-from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
-
-# Load the saved statistics
-eval_env = DummyVecEnv([lambda: gym.make("AntBulletEnv-v0")])
-eval_env = VecNormalize.load("vec_normalize.pkl", eval_env)
-
-# do not update them at test time
-eval_env.training = False
-# reward normalization is not needed at test time
-eval_env.norm_reward = False
-
-# Load the agent
-model = A2C.load("a2c-AntBulletEnv-v0")
-
-mean_reward, std_reward = evaluate_policy(model, eval_env)
-
-print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
-```
-
-### Publish your trained model on the Hub 🔥
-Now that we've confirmed we got good results after training, we can publish our trained model on the Hub with one line of code.
-
-📚 The libraries documentation 👉 https://github.com/huggingface/huggingface_sb3/tree/main#hugging-face--x-stable-baselines3-v20
-
-Here's an example of a Model Card (with a PyBullet environment):
-
-
-
-By using `package_to_hub`, as we already mentioned in the former units, **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.
-
-This way:
-- You can **showcase your work** 🔥
-- You can **visualize your agent playing** 👀
-- You can **share an agent with the community that others can use** 💾
-- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard
-
-
-To be able to share your model with the community there are three more steps to follow:
-
-1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join
-
-2️⃣ Sign in, then get your authentication token from the Hugging Face website.
-- Create a new token (https://huggingface.co/settings/tokens) **with write role**
-
-
-
-- Copy the token
-- Run the cell below and paste the token
-
-```python
-notebook_login()
-!git config --global credential.helper store
-```
-
-If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`
-
-3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥 using the `package_to_hub()` function.
-
-```python
-package_to_hub(
- model=model,
- model_name=f"a2c-{env_id}",
- model_architecture="A2C",
- env_id=env_id,
- eval_env=eval_env,
- repo_id=f"ThomasSimonini/a2c-{env_id}", # Change the username
- commit_message="Initial commit",
-)
-```
-
-## Take a coffee break ☕
-- You trained your first robot that learned to move, congratulations 🥳!
-- It's **time to take a break**. Don't hesitate to **save this notebook** `File > Save a copy to Drive` to work on the second part later.
-
-
-## Environment 2: PandaReachDense-v2 🦾
-
-The agent we're going to train is a robotic arm that needs to be controlled (by moving the arm and using the end-effector).
-
-In robotics, the *end-effector* is the device at the end of a robotic arm designed to interact with the environment.
-
-In `PandaReach`, the robot must place its end-effector at a target position (green ball).
-
-We're going to use the dense version of this environment. This means we'll get a *dense reward function* that **will provide a reward at each timestep** (the closer the agent is to completing the task, the higher the reward). This is in contrast to a *sparse reward function* where the environment **returns a reward if and only if the task is completed**.
-
-Also, we're going to use the *End-effector displacement control*, which means the **action corresponds to the displacement of the end-effector**. We don't control the individual motion of each joint (joint control).
-
-
-
-
-This way **the training will be easier**.
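-
-To make the distinction concrete, here is a minimal, illustrative sketch (not panda-gym's actual implementation, and the 0.05 success threshold is just an example value) of how a dense and a sparse reward could be computed from the distance between the end-effector and the target:
-
-```python
-import numpy as np
-
-def dense_reward(ee_position, target_position):
-    # Dense: the reward gets higher (less negative) as the end-effector approaches the target
-    return -np.linalg.norm(ee_position - target_position)
-
-def sparse_reward(ee_position, target_position, threshold=0.05):
-    # Sparse: the reward only changes once the task is considered solved
-    return 0.0 if np.linalg.norm(ee_position - target_position) < threshold else -1.0
-```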
-
-
-
-In `PandaReachDense-v2`, the robotic arm must place its end-effector at a target position (green ball).
-
-
-
-```python
-import gym
-
-env_id = "PandaReachDense-v2"
-
-# Create the env
-env = gym.make(env_id)
-
-# Get the state space and action space
-s_size = env.observation_space.shape
-a_size = env.action_space
-```
-
-```python
-print("_____OBSERVATION SPACE_____ \n")
-print("The State Space is: ", s_size)
-print("Sample observation", env.observation_space.sample()) # Get a random observation
-```
-
-The observation space **is a dictionary with 3 different elements**:
-- `achieved_goal`: (x,y,z) the current position of the end-effector.
-- `desired_goal`: (x,y,z) the target position the end-effector must reach.
-- `observation`: position (x,y,z) and velocity (vx, vy, vz) of the end-effector.
-
-Since the observation is a dictionary, **we will need to use a MultiInputPolicy instead of MlpPolicy**.
-
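-Since the observation space is a gym `Dict` space, you can inspect each sub-space individually. A quick sketch, reusing the `env` we created above:
-
-```python
-# Print each entry of the Dict observation space and its sub-space
-for key, subspace in env.observation_space.spaces.items():
-    print(key, subspace)
-```
-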
-```python
-print("\n _____ACTION SPACE_____ \n")
-print("The Action Space is: ", a_size)
-print("Action Space Sample", env.action_space.sample()) # Take a random action
-```
-
-The action space is a vector with 3 values:
-- Control x, y, z movement (the displacement of the end-effector)
-
-Now it's your turn:
-
-1. Define the environment called "PandaReachDense-v2".
-2. Make a vectorized environment.
-3. Add a wrapper to normalize the observations and rewards. [Check the documentation](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#vecnormalize)
-4. Create the A2C Model (don't forget verbose=1 to print the training logs).
-5. Train it for 1M Timesteps.
-6. Save the model and VecNormalize statistics when saving the agent.
-7. Evaluate your agent.
-8. Publish your trained model on the Hub 🔥 with `package_to_hub`.
-
-### Solution (fill the todo)
-
-```python
-# 1 - 2
-env_id = "PandaReachDense-v2"
-env = make_vec_env(env_id, n_envs=4)
-
-# 3
-env = VecNormalize(env, norm_obs=True, norm_reward=False, clip_obs=10.0)
-
-# 4
-model = A2C(policy="MultiInputPolicy", env=env, verbose=1)
-# 5
-model.learn(1_000_000)
-```
-
-```python
-# 6
-model_name = "a2c-PandaReachDense-v2"
-model.save(model_name)
-env.save("vec_normalize.pkl")
-
-# 7
-from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
-
-# Load the saved statistics
-eval_env = DummyVecEnv([lambda: gym.make("PandaReachDense-v2")])
-eval_env = VecNormalize.load("vec_normalize.pkl", eval_env)
-
-# do not update them at test time
-eval_env.training = False
-# reward normalization is not needed at test time
-eval_env.norm_reward = False
-
-# Load the agent
-model = A2C.load(model_name)
-
-mean_reward, std_reward = evaluate_policy(model, eval_env)
-
-print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
-
-# 8
-package_to_hub(
- model=model,
- model_name=f"a2c-{env_id}",
- model_architecture="A2C",
- env_id=env_id,
- eval_env=eval_env,
- repo_id=f"ThomasSimonini/a2c-{env_id}", # TODO: Change the username
- commit_message="Initial commit",
-)
-```
-
-## Some additional challenges 🏆
-
-The best way to learn **is to try things on your own**! Why not try `HalfCheetahBulletEnv-v0` for PyBullet and `PandaPickAndPlace-v1` for Panda-Gym?
-
-If you want to try more advanced tasks for panda-gym, you need to check what was done using **TQC or SAC** (more sample-efficient algorithms suited for robotics tasks). In real robotics, you'll use a more sample-efficient algorithm for a simple reason: unlike in a simulation, **if you move your robotic arm too much, you risk breaking it**.
-
-PandaPickAndPlace-v1: https://huggingface.co/sb3/tqc-PandaPickAndPlace-v1
-
-And don't hesitate to check panda-gym documentation here: https://panda-gym.readthedocs.io/en/latest/usage/train_with_sb3.html
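-
-If you want a starting point for `PandaPickAndPlace-v1`, here is a minimal sketch (an assumption on my part, not the official recipe: SB3's SAC with a `MultiInputPolicy`; for competitive results you would add tuned hyperparameters or HER, as in the TQC model linked above):
-
-```python
-import gym
-import panda_gym  # noqa: F401 (importing registers the Panda environments)
-
-from stable_baselines3 import SAC
-
-env = gym.make("PandaPickAndPlace-v1")
-
-# Dict observation space -> MultiInputPolicy, just like for PandaReach
-model = SAC(policy="MultiInputPolicy", env=env, verbose=1)
-model.learn(1_000_000)
-model.save("sac-PandaPickAndPlace-v1")
-```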
-
-Here are some ideas to go further:
-* Train more steps
-* Try different hyperparameters by looking at what your classmates have done 👉 https://huggingface.co/models?other=AntBulletEnv-v0
-* **Push your new trained model** on the Hub 🔥
-
-
-See you in Unit 7! 🔥
-## Keep learning, stay awesome 🤗
diff --git a/units/en/unit6/introduction.mdx b/units/en/unit6/introduction.mdx
index 4be735f..862c8c4 100644
--- a/units/en/unit6/introduction.mdx
+++ b/units/en/unit6/introduction.mdx
@@ -16,8 +16,7 @@ So today we'll study **Actor-Critic methods**, a hybrid architecture combining v
- *A Critic* that measures **how good the taken action is** (Value-Based method)
-We'll study one of these hybrid methods, Advantage Actor Critic (A2C), **and train our agent using Stable-Baselines3 in robotic environments**. We'll train two robots:
-- A spider 🕷️ to learn to move.
+We'll study one of these hybrid methods, Advantage Actor Critic (A2C), **and train our agent using Stable-Baselines3 in a robotic environment**. We'll train:
- A robotic arm 🦾 to move to the correct position.