# Automatic Hyperparameter Tuning with Optuna
One of the most critical tasks in Deep Reinforcement Learning is to **find a good set of training hyperparameters**.
<img src="https://raw.githubusercontent.com/optuna/optuna/master/docs/image/optuna-logo.png" alt="Optuna"/>
Optuna is a library that **helps you automate the search**. In this Unit, we'll study a little bit of the theory behind automatic hyperparameter tuning. We'll first try to optimize the parameters manually, then see how to automate the search using Optuna.
The content below comes from [Antonin Raffin's ICRA 2022 presentation](https://twitter.com/araffin2). He is one of the founders of Stable-Baselines and RL-Baselines3-Zoo.
## The learning steps 📚
1️⃣ 📹 First, let's study what [Automatic Hyperparameter Tuning](https://www.youtube.com/watch?v=AidFTOdGNFQ) is. Don't forget to 👍 the video 🤗.
2️⃣ 👩‍💻 Then, let's dive into the [hands-on, where we'll try to optimize the parameters manually and then see how to automate the search using Optuna](https://youtu.be/ihP7E76KGOI).
3️⃣ Now that you've learned to use Optuna, why not go back to our **Deep Q-Learning hands-on and implement Optuna to find the best training hyperparameters** 👉 [notebook](https://colab.research.google.com/github/araffin/tools-for-robotic-rl-icra2022/blob/main/notebooks/optuna_lab.ipynb)