diff --git a/units/en/unit3/hands-on.mdx b/units/en/unit3/hands-on.mdx
index fffa6a6..e9c07cf 100644
--- a/units/en/unit3/hands-on.mdx
+++ b/units/en/unit3/hands-on.mdx
@@ -16,6 +16,7 @@ Now that you've studied the theory behind Deep Q-Learning, **you’re ready to t
 
 We're using the [RL-Baselines-3 Zoo integration](https://github.com/DLR-RM/rl-baselines3-zoo), a vanilla version of Deep Q-Learning with no extensions such as Double-DQN, Dueling-DQN, or Prioritized Experience Replay.
+Also, **if you want to learn to implement Deep Q-Learning by yourself after this hands-on**, you should definitely look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py
 
 To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 500**.
 
@@ -75,6 +76,7 @@ To find your result, go to the leaderboard and find your model, **the result = m
 For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
 
 ## Set the GPU 💪
+
 - To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`
 
 GPU Step 1
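
The diff above points readers at CleanRL's `dqn_atari.py` for a full implementation. As a quick orientation to what "vanilla Deep Q-Learning" means there, here is a minimal sketch of the core update step in PyTorch, with a fake mini-batch standing in for replay-buffer samples. Everything in it (the `QNetwork` class, dimensions, hyperparameters) is illustrative and not taken from CleanRL or RL-Baselines3 Zoo.

```python
# Minimal sketch of the vanilla DQN update step (no Double-DQN, Dueling, or PER).
# All names and numbers here are illustrative, not CleanRL's actual code.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Tiny MLP mapping a state vector to one Q-value per action."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)

obs_dim, n_actions, gamma = 8, 4, 0.99
q_network = QNetwork(obs_dim, n_actions)
target_network = QNetwork(obs_dim, n_actions)
target_network.load_state_dict(q_network.state_dict())  # periodically re-synced copy
optimizer = torch.optim.Adam(q_network.parameters(), lr=1e-4)

# A fake mini-batch standing in for samples drawn from a replay buffer.
batch_size = 32
obs = torch.randn(batch_size, obs_dim)
actions = torch.randint(0, n_actions, (batch_size,))
rewards = torch.randn(batch_size)
next_obs = torch.randn(batch_size, obs_dim)
dones = torch.zeros(batch_size)  # 1.0 where the episode ended

# TD target: r + gamma * max_a' Q_target(s', a'), cut off at terminal states.
with torch.no_grad():
    next_q = target_network(next_obs).max(dim=1).values
    td_target = rewards + gamma * next_q * (1.0 - dones)

# Q(s, a) for the actions actually taken, regressed toward the TD target.
q_values = q_network(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(q_values, td_target)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The only moving parts beyond ordinary supervised learning are the separate target network and the `(1 - dones)` mask that stops bootstrapping at terminal states; the extensions the diff mentions (Double-DQN, Dueling-DQN, Prioritized Experience Replay) mostly change how `td_target` is computed or how the batch is sampled.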