Update hands-on.mdx

Thomas Simonini
2023-01-04 21:25:31 +01:00
committed by GitHub
parent d4b6b46257
commit 017465ef4c

@@ -18,7 +18,7 @@ We're using the [RL-Baselines-3 Zoo integration](https://github.com/DLR-RM/rl-ba
Also, **if you want to learn to implement Deep Q-Learning by yourself after this hands-on**, you definitely should look at CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py
-To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 500**.
+To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**.
To find your result, go to the leaderboard and find your model, **the result = mean_reward - std of reward**
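The leaderboard score described here can be sketched as a one-line calculation: the certification result is the mean evaluation reward minus its standard deviation. A minimal illustration (the function name and the reward numbers below are hypothetical, not from the course):

```python
def leaderboard_result(mean_reward: float, std_reward: float) -> float:
    """Certification score on the leaderboard: mean_reward - std of reward."""
    return mean_reward - std_reward

# Hypothetical evaluation: mean reward 650 with std 120 over the eval episodes.
score = leaderboard_result(650.0, 120.0)  # 650 - 120 = 530
```

Subtracting the std penalizes unstable agents: two models with the same mean reward rank differently if one is noisier across episodes.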
@@ -68,13 +68,6 @@ Before diving into the notebook, you need to:
We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues).
# Let's train a Deep Q-Learning agent playing Atari's Space Invaders 👾 and upload it to the Hub.
To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 500**.
To find your result, go to the leaderboard and find your model, **the result = mean_reward - std of reward**
For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
## Set the GPU 💪
- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`
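After switching the Colab runtime type, it is worth confirming that the GPU is actually visible before launching training. A small sketch, assuming PyTorch is installed (RL-Baselines3 Zoo depends on it); the `try`/`except` is only there so the check degrades gracefully where PyTorch is absent:

```python
# Sketch: verify the runtime exposes a CUDA GPU before starting training.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    # PyTorch not installed in this environment; training would run on CPU.
    device = "cpu"

print(f"Training will run on: {device}")
```

If this prints `cpu` on Colab, the runtime type was not switched: go back to `Runtime > Change Runtime type` and select a GPU accelerator.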