# Hands-on
Now that we've studied the theory behind PPO, the best way to understand how it works is to implement it from scratch.
<CourseFloatingBanner classNames="absolute z-10 right-0 top-0"
notebooks={[
  {label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit8/unit8.ipynb"}
  ]}
askForHelpUrl="http://hf.co/join/discord" />
TODO ADD HANDS ON IDEA
To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of TODO ADD RESULT**.
To find your result, go to the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) and find your model. **The result = mean_reward - std_reward**.
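To see how that score is computed, here is a minimal sketch. The episode reward values are purely illustrative, and the exact evaluation procedure used by the leaderboard may differ; this only shows the `mean_reward - std_reward` arithmetic.

```python
import numpy as np

# Hypothetical rewards from 10 evaluation episodes (illustrative values only)
episode_rewards = [200.0, 180.0, 210.0, 195.0, 205.0,
                   190.0, 185.0, 215.0, 200.0, 198.0]

mean_reward = np.mean(episode_rewards)
std_reward = np.std(episode_rewards)

# The certification score: mean reward penalized by its standard deviation,
# so a model must be both good and consistent
result = mean_reward - std_reward
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f} -> result={result:.2f}")
```

Penalizing by the standard deviation rewards agents that perform consistently, not just agents with a few lucky episodes.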
For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
**To start the hands-on, click on the Open In Colab button** 👇 :
[](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit8/unit8.ipynb)