mirror of
https://github.com/huggingface/deep-rl-class.git
synced 2026-04-13 18:00:45 +08:00
Merge pull request #335 from huggingface/GymnasiumUpdate/Unit3
Gymnasium Update Unit 3
@@ -1,833 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "k7xBVPzoXxOg"
},
"source": [
"# Unit 3: Deep Q-Learning with Atari Games 👾 using RL Baselines3 Zoo\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/thumbnail.jpg\" alt=\"Unit 3 Thumbnail\">\n",
"\n",
"In this notebook, **you'll train a Deep Q-Learning agent** playing Space Invaders using [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), a training framework based on [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.\n",
"\n",
"We're using the RL-Baselines3-Zoo integration with [a vanilla version of Deep Q-Learning](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html), i.e. with no extensions such as Double-DQN, Dueling-DQN, or Prioritized Experience Replay.\n",
"\n",
"⬇️ Here is an example of what **you will achieve** ⬇️"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "J9S713biXntc"
},
"outputs": [],
"source": [
"%%html\n",
"<video controls autoplay><source src=\"https://huggingface.co/ThomasSimonini/ppo-SpaceInvadersNoFrameskip-v4/resolve/main/replay.mp4\" type=\"video/mp4\"></video>"
]
},
{
"cell_type": "markdown",
"source": [
"### 🎮 Environments:\n",
"\n",
"- [SpaceInvadersNoFrameskip-v4](https://gymnasium.farama.org/environments/atari/space_invaders/)\n",
"\n",
"You can see the difference between Space Invaders versions here 👉 https://gymnasium.farama.org/environments/atari/space_invaders/#variants\n",
"\n",
"### 📚 RL-Library:\n",
"\n",
"- [RL-Baselines3-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo)"
],
"metadata": {
"id": "ykJiGevCMVc5"
}
},
{
"cell_type": "markdown",
"metadata": {
"id": "wciHGjrFYz9m"
},
"source": [
"## Objectives of this notebook 🏆\n",
"At the end of the notebook, you will:\n",
"- Be able to understand more deeply **how RL Baselines3 Zoo works**.\n",
"- Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score 🔥.\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"source": [
"## This notebook is from the Deep Reinforcement Learning Course\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/deep-rl-course-illustration.jpg\" alt=\"Deep RL Course illustration\"/>"
],
"metadata": {
"id": "TsnP0rjxMn1e"
}
},
{
"cell_type": "markdown",
"metadata": {
"id": "nw6fJHIAZd-J"
},
"source": [
"In this free course, you will:\n",
"\n",
"- 📖 Study Deep Reinforcement Learning in **theory and practice**.\n",
"- 🧑💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.\n",
"- 🤖 Train **agents in unique environments**.\n",
"\n",
"And more! Check 📚 the syllabus 👉 https://simoninithomas.github.io/deep-rl-course\n",
"\n",
"Don’t forget to **<a href=\"http://eepurl.com/ic5ZUD\">sign up to the course</a>** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates).**\n",
"\n",
"\n",
"The best way to keep in touch is to join our Discord server to exchange with the community and with us 👉🏻 https://discord.gg/ydHrjt3WP5"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0vgANIBBZg1p"
},
"source": [
"## Prerequisites 🏗️\n",
"Before diving into the notebook, you need to:\n",
"\n",
"🔲 📚 **[Study Deep Q-Learning by reading Unit 3](https://huggingface.co/deep-rl-course/unit3/introduction)** 🤗"
]
},
{
"cell_type": "markdown",
"source": [
"We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues)."
],
"metadata": {
"id": "7kszpGFaRVhq"
}
},
{
"cell_type": "markdown",
"metadata": {
"id": "QR0jZtYreSI5"
},
"source": [
"# Let's train a Deep Q-Learning agent playing Atari's Space Invaders 👾 and upload it to the Hub.\n",
"\n",
"We strongly recommend that students **use Google Colab for the hands-on exercises instead of running them on their personal computers**.\n",
"\n",
"By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects of setting up your environments**.\n",
"\n",
"To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**.\n",
"\n",
"To find your result, go to the leaderboard and find your model; **the result = mean_reward - std of reward**\n",
"\n",
"For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process"
]
},
{
"cell_type": "markdown",
"source": [
"## Some advice 💡\n",
"It's better to run this Colab from a copy on your Google Drive, so that **if it times out** you still have the saved notebook on your Google Drive and do not need to redo everything from scratch.\n",
"\n",
"To do that you can either do `Ctrl + S` or `File > Save a copy in Google Drive.`\n",
"\n",
"Also, we're going to **train it for 90 minutes with 1M timesteps**. Running `!nvidia-smi` will tell you what GPU you're using.\n",
"\n",
"And if you want to train longer, e.g. for 10 million steps, this will take about 9 hours, potentially resulting in Colab timing out. In that case, I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`."
],
"metadata": {
"id": "Nc8BnyVEc3Ys"
}
},
{
"cell_type": "markdown",
"source": [
"## Set the GPU 💪\n",
"- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/gpu-step1.jpg\" alt=\"GPU Step 1\">"
],
"metadata": {
"id": "PU4FVzaoM6fC"
}
},
{
"cell_type": "markdown",
"source": [
"- `Hardware Accelerator > GPU`\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/gpu-step2.jpg\" alt=\"GPU Step 2\">"
],
"metadata": {
"id": "KV0NyFdQM9ZG"
}
},
{
"cell_type": "markdown",
"source": [
"# Install RL-Baselines3 Zoo and its dependencies 📚\n",
"\n",
"If you see `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.` **this is normal and not a critical error**: it's a version conflict, but the packages we need are installed."
],
"metadata": {
"id": "wS_cVefO-aYg"
}
},
{
"cell_type": "code",
"source": [
"# For now we install this update of RL-Baselines3 Zoo\n",
"!pip install git+https://github.com/DLR-RM/rl-baselines3-zoo@update/hf"
],
"metadata": {
"id": "hLTwHqIWdnPb"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"IF AND ONLY IF THE VERSION ABOVE DOES NOT EXIST ANYMORE, UNCOMMENT AND INSTALL THE ONE BELOW"
],
"metadata": {
"id": "p0xe2sJHdtHy"
}
},
{
"cell_type": "code",
"source": [
"#!pip install rl_zoo3==2.0.0a9"
],
"metadata": {
"id": "N0d6wy-F-f39"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!apt-get install swig cmake ffmpeg"
],
"metadata": {
"id": "8_MllY6Om1eI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "4S9mJiKg6SqC"
},
"source": [
"To be able to use Atari games in Gymnasium, we need to install the atari package, and accept-rom-license to download the ROM files (game files)."
]
},
{
"cell_type": "code",
"source": [
"!pip install gymnasium[atari]\n",
"!pip install gymnasium[accept-rom-license]"
],
"metadata": {
"id": "NsRP-lX1_2fC"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Create a virtual display 🔽\n",
"\n",
"During the notebook, we'll need to generate a replay video. To do so with Colab, **we need a virtual screen to be able to render the environment** (and thus record the frames).\n",
"\n",
"Hence the following cell will install the libraries and create and run a virtual screen 🖥"
],
"metadata": {
"id": "bTpYcVZVMzUI"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jV6wjQ7Be7p5"
},
"outputs": [],
"source": [
"%%capture\n",
"!apt install python-opengl\n",
"!apt install ffmpeg\n",
"!apt install xvfb\n",
"!pip3 install pyvirtualdisplay"
]
},
{
"cell_type": "code",
"source": [
"# Virtual display\n",
"from pyvirtualdisplay import Display\n",
"\n",
"virtual_display = Display(visible=0, size=(1400, 900))\n",
"virtual_display.start()"
],
"metadata": {
"id": "BE5JWP5rQIKf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "5iPgzluo9z-u"
},
"source": [
"## Train our Deep Q-Learning Agent to Play Space Invaders 👾\n",
"\n",
"To train an agent with RL-Baselines3-Zoo, we just need to do two things:\n",
"\n",
"1. Create a hyperparameter config file that will contain our training hyperparameters called `dqn.yml`.\n",
"\n",
"This is a template example:\n",
"\n",
"```\n",
"SpaceInvadersNoFrameskip-v4:\n",
"  env_wrapper:\n",
"    - stable_baselines3.common.atari_wrappers.AtariWrapper\n",
"  frame_stack: 4\n",
"  policy: 'CnnPolicy'\n",
"  n_timesteps: !!float 1e7\n",
"  buffer_size: 100000\n",
"  learning_rate: !!float 1e-4\n",
"  batch_size: 32\n",
"  learning_starts: 100000\n",
"  target_update_interval: 1000\n",
"  train_freq: 4\n",
"  gradient_steps: 1\n",
"  exploration_fraction: 0.1\n",
"  exploration_final_eps: 0.01\n",
"  # If True, you need to deactivate handle_timeout_termination\n",
"  # in the replay_buffer_kwargs\n",
"  optimize_memory_usage: False\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_VjblFSVDQOj"
},
"source": [
"Here we see that:\n",
"- We use the `AtariWrapper` that preprocesses the input (frame reduction, grayscale, stacking 4 frames)\n",
"- We use `CnnPolicy`, since we use convolutional layers to process the frames\n",
"- We train it for 10 million `n_timesteps`\n",
"- Memory (Experience Replay) size is 100000, i.e. the number of experience steps saved to train your agent with.\n",
"\n",
"💡 My advice is to **reduce the training timesteps to 1M,** which will take about 90 minutes on a P100. `!nvidia-smi` will tell you what GPU you're using. At 10 million steps, this will take about 9 hours, which could result in Colab timing out. In that case, I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5qTkbWrkECOJ"
},
"source": [
"In terms of hyperparameter optimization, my advice is to focus on these 3 hyperparameters:\n",
"- `learning_rate`\n",
"- `buffer_size` (Experience Memory size)\n",
"- `batch_size`\n",
"\n",
"As a good practice, you need to **check the documentation to understand what each hyperparameter does**: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html#parameters\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Hn8bRTHvERRL"
},
"source": [
"2. We start the training and save the models in the `logs` folder 📁\n",
"\n",
"- Define the algorithm after `--algo`, where we save the model after `-f`, and where the hyperparameter config is after `-c`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Xr1TVW4xfbz3"
},
"outputs": [],
"source": [
"!python -m rl_zoo3.train --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ -c _________"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SeChoX-3SZfP"
},
"source": [
"#### Solution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PuocgdokSab9"
},
"outputs": [],
"source": [
"!python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -c dqn.yml"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_dLomIiMKQaf"
},
"source": [
"## Let's evaluate our agent 👀\n",
"- RL-Baselines3-Zoo provides `enjoy.py`, a Python script to evaluate our agent. In most RL libraries, we call the evaluation script `enjoy.py`.\n",
"- Let's evaluate it for 5000 timesteps 🔥"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "co5um_KeKbBJ"
},
"outputs": [],
"source": [
"!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Q24K1tyWSj7t"
},
"source": [
"#### Solution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "P_uSmwGRSk0z"
},
"outputs": [],
"source": [
"!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "liBeTltiHJtr"
},
"source": [
"## Publish our trained model on the Hub 🚀\n",
"Now that we've seen good results after training, we can publish our trained model on the Hub 🤗 with one line of code.\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit3/space-invaders-model.gif\" alt=\"Space Invaders model\">"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ezbHS1q3HYVV"
},
"source": [
"By using `rl_zoo3.push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the Hub**.\n",
"\n",
"This way:\n",
"- You can **showcase your work** 🔥\n",
"- You can **visualize your agent playing** 👀\n",
"- You can **share with the community an agent that others can use** 💾\n",
"- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XMSeZRBiHk6X"
},
"source": [
"To be able to share your model with the community, there are three more steps to follow:\n",
"\n",
"1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join\n",
"\n",
"2️⃣ Sign in and then store your authentication token from the Hugging Face website.\n",
"- Create a new token (https://huggingface.co/settings/tokens) **with write role**\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/create-token.jpg\" alt=\"Create HF Token\">"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9O6FI0F8HnzE"
},
"source": [
"- Copy the token\n",
"- Run the cell below and paste the token"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ppu9yePwHrZX"
},
"outputs": [],
"source": [
"from huggingface_hub import notebook_login # To log in to our Hugging Face account to be able to upload models to the Hub.\n",
"notebook_login()\n",
"!git config --global credential.helper store"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2RVEdunPHs8B"
},
"source": [
"If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dSLwdmvhHvjw"
},
"source": [
"3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PW436XnhHw1H"
},
"source": [
"Let's run `push_to_hub.py` to upload our trained agent to the Hub.\n",
"\n",
"`--repo-name`: The name of the repo\n",
"\n",
"`-orga`: Your Hugging Face username\n",
"\n",
"`-f`: Where the trained model folder is (in our case `logs`)\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit3/select-id.png\" alt=\"Select Id\">"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ygk2sEktTDEw"
},
"outputs": [],
"source": [
"!python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name _____________________ -orga _____________________ -f logs/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "otgpa0rhS9wR"
},
"source": [
"#### Solution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_HQNlAXuEhci"
},
"outputs": [],
"source": [
"!python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name dqn-SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ff89kd2HL1_s"
},
"source": [
"Congrats 🥳 you've just trained and uploaded your first Deep Q-Learning agent using RL-Baselines-3 Zoo. The script above should have displayed a link to a model repository such as https://huggingface.co/ThomasSimonini/dqn-SpaceInvadersNoFrameskip-v4. When you go to this link, you can:\n",
"\n",
"- See a **video preview of your agent** on the right.\n",
"- Click \"Files and versions\" to see all the files in the repository.\n",
"- Click \"Use in stable-baselines3\" to get a code snippet that shows how to load the model.\n",
"- See the model card (`README.md` file), which gives a description of the model and the hyperparameters you used.\n",
"\n",
"Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent.\n",
"\n",
"**Compare the results of your agents with your classmates'** using the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) 🏆"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "fyRKcCYY-dIo"
},
"source": [
"## Load a powerful trained model 🔥\n",
"- The Stable-Baselines3 team uploaded **more than 150 trained Deep Reinforcement Learning agents to the Hub**.\n",
"\n",
"You can find them here: 👉 https://huggingface.co/sb3\n",
"\n",
"Some examples:\n",
"- Asteroids: https://huggingface.co/sb3/dqn-AsteroidsNoFrameskip-v4\n",
"- Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4\n",
"- Breakout: https://huggingface.co/sb3/dqn-BreakoutNoFrameskip-v4\n",
"- Road Runner: https://huggingface.co/sb3/dqn-RoadRunnerNoFrameskip-v4\n",
"\n",
"Let's load an agent playing Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B-9QVFIROI5Y"
},
"outputs": [],
"source": [
"%%html\n",
"<video controls autoplay><source src=\"https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4/resolve/main/replay.mp4\" type=\"video/mp4\"></video>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7ZQNY_r6NJtC"
},
"source": [
"1. We download the model using `rl_zoo3.load_from_hub`, and place it in a new folder that we can call `rl_trained`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OdBNZHy0NGTR"
},
"outputs": [],
"source": [
"# Download model and save it into the rl_trained/ folder\n",
"!python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga sb3 -f rl_trained/"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LFt6hmWsNdBo"
},
"source": [
"2. Let's evaluate it for 5000 timesteps"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aOxs0rNuN0uS"
},
"outputs": [],
"source": [
"!python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ --no-render"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kxMDuDfPON57"
},
"source": [
"Why not try to train your own **Deep Q-Learning agent playing BeamRiderNoFrameskip-v4? 🏆**\n",
"\n",
"If you want to try, check https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4#hyperparameters; **in the model card, you have the hyperparameters of the trained agent.**"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xL_ZtUgpOuY6"
},
"source": [
"But finding hyperparameters can be a daunting task. Fortunately, we'll see in the next Unit how we can **use Optuna to optimize the hyperparameters 🔥.**\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-pqaco8W-huW"
},
"source": [
"## Some additional challenges 🏆\n",
"The best way to learn **is to try things on your own**!\n",
"\n",
"In the [Leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?\n",
"\n",
"Here's a list of environments you can try to train your agent with:\n",
"- BeamRiderNoFrameskip-v4\n",
"- BreakoutNoFrameskip-v4\n",
"- EnduroNoFrameskip-v4\n",
"- PongNoFrameskip-v4\n",
"\n",
"Also, **if you want to learn to implement Deep Q-Learning by yourself**, you definitely should look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/atari-envs.gif\" alt=\"Environments\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "paS-XKo4-kmu"
},
"source": [
"________________________________________________________________________\n",
"Congrats on finishing this chapter!\n",
"\n",
"If you still feel confused by all these elements... it's totally normal! **This was the same for me and for everyone who studied RL.**\n",
"\n",
"Take time to really **grasp the material before continuing and try the additional challenges**. It’s important to master these elements and have solid foundations.\n",
"\n",
"In the next unit, **we’re going to learn about [Optuna](https://optuna.org/)**. One of the most critical tasks in Deep Reinforcement Learning is finding a good set of training hyperparameters, and Optuna is a library that helps you automate the search.\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5WRx7tO7-mvC"
},
"source": [
"\n",
"\n",
"### This is a course built with you 👷🏿♀️\n",
"\n",
"Finally, we want to improve and update the course iteratively with your feedback. If you have some, please fill in this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9\n",
"\n",
"We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues)."
]
},
{
"cell_type": "markdown",
"source": [
"See you in Bonus Unit 2! 🔥"
],
"metadata": {
"id": "Kc3udPT-RcXc"
}
},
{
"cell_type": "markdown",
"metadata": {
"id": "fS3Xerx0fIMV"
},
"source": [
"### Keep Learning, Stay Awesome 🤗"
]
}
],
"metadata": {
"colab": {
"private_outputs": true,
"provenance": [],
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
},
"accelerator": "GPU",
"gpuClass": "standard"
},
"nbformat": 4,
"nbformat_minor": 0
}
@@ -7,7 +7,7 @@
|
||||
"colab_type": "text"
|
||||
},
|
||||
"source": [
|
||||
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/ThomasSimonini%2FUnit3/notebooks/unit3/unit3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -44,7 +44,9 @@
|
||||
"source": [
|
||||
"### 🎮 Environments: \n",
|
||||
"\n",
|
||||
"- SpacesInvadersNoFrameskip-v4 \n",
|
||||
"- [SpacesInvadersNoFrameskip-v4](https://gymnasium.farama.org/environments/atari/space_invaders/)\n",
|
||||
"\n",
|
||||
"You can see the difference between Space Invaders versions here 👉 https://gymnasium.farama.org/environments/atari/space_invaders/#variants\n",
|
||||
"\n",
|
||||
"### 📚 RL-Library: \n",
|
||||
"\n",
|
||||
@@ -127,6 +129,10 @@
|
||||
"source": [
|
||||
"# Let's train a Deep Q-Learning agent playing Atari' Space Invaders 👾 and upload it to the Hub.\n",
|
||||
"\n",
|
||||
"We strongly recommend students **to use Google Colab for the hands-on exercises instead of running them on their personal computers**.\n",
|
||||
"\n",
|
||||
"By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects of setting up your environments**.\n",
|
||||
"\n",
|
||||
"To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**.\n",
|
||||
"\n",
|
||||
"To find your result, go to the leaderboard and find your model, **the result = mean_reward - std of reward**\n",
|
||||
@@ -173,6 +179,81 @@
"id": "KV0NyFdQM9ZG"
}
},
{
"cell_type": "markdown",
"source": [
"# Install RL-Baselines3 Zoo and its dependencies 📚\n",
"\n",
"If you see `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.` **this is normal and not a critical error**; there's a version conflict, but the packages we need are installed."
],
"metadata": {
"id": "wS_cVefO-aYg"
}
},
{
"cell_type": "code",
"source": [
"# For now we install this update of RL-Baselines3 Zoo\n",
"!pip install git+https://github.com/DLR-RM/rl-baselines3-zoo@update/hf"
],
"metadata": {
"id": "hLTwHqIWdnPb"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"IF AND ONLY IF THE VERSION ABOVE DOES NOT EXIST ANYMORE. UNCOMMENT AND INSTALL THE ONE BELOW"
],
"metadata": {
"id": "p0xe2sJHdtHy"
}
},
{
"cell_type": "code",
"source": [
"#!pip install rl_zoo3==2.0.0a9"
],
"metadata": {
"id": "N0d6wy-F-f39"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!apt-get install swig cmake ffmpeg"
],
"metadata": {
"id": "8_MllY6Om1eI"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "4S9mJiKg6SqC"
},
"source": [
"To be able to use Atari games in Gymnasium we need to install the atari package, and accept-rom-license to download the ROM files (game files)."
]
},
{
"cell_type": "code",
"source": [
"!pip install gymnasium[atari]\n",
"!pip install gymnasium[accept-rom-license]"
],
"metadata": {
"id": "NsRP-lX1_2fC"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
@@ -201,29 +282,6 @@
"!pip3 install pyvirtualdisplay"
]
},
{
"cell_type": "code",
"source": [
"# Additional dependencies for RL Baselines3 Zoo\n",
"!apt-get install swig cmake freeglut3-dev "
],
"metadata": {
"id": "fWyKJCy_NJBX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!pip install pyglet==1.5.1"
],
"metadata": {
"id": "C5LwHrISW7Q5"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
@@ -234,68 +292,11 @@
"virtual_display.start()"
],
"metadata": {
"id": "ww5PQH1gNLI4"
"id": "BE5JWP5rQIKf"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "mYIMvl5X9NAu"
},
"source": [
"## Clone RL-Baselines3 Zoo Repo 📚\n",
"You can now install directly from the Python package (`pip install rl_zoo3`), but since we want **the full installation with extra environments and dependencies** we're going to clone the `RL-Baselines3-Zoo` repository and install from source."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "eu5ZDPZ09VNQ"
},
"outputs": [],
"source": [
"!git clone https://github.com/DLR-RM/rl-baselines3-zoo"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HCIoSbvbfAQh"
},
"source": [
"## Install dependencies 🔽\n",
"We can now install the dependencies RL-Baselines3 Zoo needs (this can take 5min ⏲)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "s2QsFAk29h-D"
},
"outputs": [],
"source": [
"%cd /content/rl-baselines3-zoo/ \n",
"!git checkout v1.8.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3QaOS7Xj9j1s"
},
"outputs": [],
"source": [
"!pip install setuptools==65.5.0\n",
"!pip install -r requirements.txt\n",
"# Since colab uses Python 3.9 we need to add this installation\n",
"!pip install gym[atari,accept-rom-license]==0.21.0"
]
},
{
"cell_type": "markdown",
"metadata": {
@@ -305,9 +306,31 @@
"## Train our Deep Q-Learning Agent to Play Space Invaders 👾\n",
"\n",
"To train an agent with RL-Baselines3-Zoo, we just need to do two things:\n",
"1. We define the hyperparameters in `/content/rl-baselines3-zoo/hyperparams/dqn.yml`\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit3/hyperparameters.png\" alt=\"DQN Hyperparameters\">\n"
"1. Create a hyperparameter config file that will contain our training hyperparameters called `dqn.yml`.\n",
"\n",
"This is a template example:\n",
"\n",
"```\n",
"SpaceInvadersNoFrameskip-v4:\n",
"  env_wrapper:\n",
"    - stable_baselines3.common.atari_wrappers.AtariWrapper\n",
"  frame_stack: 4\n",
"  policy: 'CnnPolicy'\n",
"  n_timesteps: !!float 1e7\n",
"  buffer_size: 100000\n",
"  learning_rate: !!float 1e-4\n",
"  batch_size: 32\n",
"  learning_starts: 100000\n",
"  target_update_interval: 1000\n",
"  train_freq: 4\n",
"  gradient_steps: 1\n",
"  exploration_fraction: 0.1\n",
"  exploration_final_eps: 0.01\n",
"  # If True, you need to deactivate handle_timeout_termination\n",
"  # in the replay_buffer_kwargs\n",
"  optimize_memory_usage: False\n",
"```"
]
},
{
@@ -346,7 +369,9 @@
"id": "Hn8bRTHvERRL"
},
"source": [
"2. We run `train.py` and save the models on `logs` folder 📁"
"2. We start the training and save the models on `logs` folder 📁\n",
"\n",
"- Define the algorithm after `--algo`, where we save the model after `-f` and where the hyperparameter config is after `-c`."
]
},
{
@@ -357,7 +382,7 @@
},
"outputs": [],
"source": [
"!python train.py --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________"
"!python -m rl_zoo3.train --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ -c _________"
]
},
{
@@ -377,7 +402,7 @@
},
"outputs": [],
"source": [
"!python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/"
"!python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -c dqn.yml"
]
},
{
@@ -399,7 +424,7 @@
},
"outputs": [],
"source": [
"!python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/"
"!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/ "
]
},
{
@@ -419,7 +444,7 @@
},
"outputs": [],
"source": [
"!python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/"
"!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/"
]
},
{
@@ -440,7 +465,7 @@
"id": "ezbHS1q3HYVV"
},
"source": [
"By using `rl_zoo3.push_to_hub.py` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.\n",
"By using `rl_zoo3.push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.\n",
"\n",
"This way:\n",
"- You can **showcase our work** 🔥\n",
@@ -518,6 +543,8 @@
"\n",
"`-orga`: Your Hugging Face username\n",
"\n",
"`-f`: Where the trained model folder is (in our case `logs`)\n",
"\n",
"<img src=\"https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit3/select-id.png\" alt=\"Select Id\">"
]
},
@@ -649,7 +676,7 @@
},
"outputs": [],
"source": [
"!python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/"
"!python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ --no-render"
]
},
{
@@ -803,4 +830,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
}
@@ -7,7 +7,7 @@
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit8/unit8_part1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit8_part1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
@@ -156,6 +156,17 @@
"id": "bTpYcVZVMzUI"
}
},
{
"cell_type": "code",
"source": [
"!pip install setuptools==65.5.0"
],
"metadata": {
"id": "Fd731S8-NuJA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
@@ -188,17 +199,6 @@
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"!pip install setuptools==65.5.0"
],
"metadata": {
"id": "Fd731S8-NuJA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
@@ -206,16 +206,16 @@
},
"source": [
"## Install dependencies 🔽\n",
"For this exercise, we use `gym==0.21` because the video was recorded using Gym.\n"
"For this exercise, we use `gym==0.22`."
]
},
{
"cell_type": "code",
"source": [
"!pip install gym==0.21\n",
"!pip install gym==0.22\n",
"!pip install imageio-ffmpeg\n",
"!pip install huggingface_hub\n",
"!pip install box2d"
"!pip install gym[box2d]==0.22"
],
"metadata": {
"id": "9xZQFTPcsKUK"
@@ -1353,6 +1353,7 @@
"colab": {
"private_outputs": true,
"provenance": [],
"history_visible": true,
"include_colab_link": true
},
"gpuClass": "standard",
@@ -1367,4 +1368,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
}
File diff suppressed because it is too large
@@ -33,46 +33,50 @@ And you can check your progress here 👉 https://huggingface.co/spaces/ThomasSi

[](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit3/unit3.ipynb)

# Unit 3: Deep Q-Learning with Atari Games 👾 using RL Baselines3 Zoo

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/thumbnail.jpg" alt="Unit 3 Thumbnail">

In this notebook, **you'll train a Deep Q-Learning agent** playing Space Invaders using [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), a training framework based on [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) that provides scripts for training, evaluating agents, tuning parameters, plotting results and recording videos.
In this hands-on, **you'll train a Deep Q-Learning agent** playing Space Invaders using [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), a training framework based on [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.

We're using the [RL-Baselines-3 Zoo integration, a vanilla version of Deep Q-Learning](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) with no extensions such as Double-DQN, Dueling-DQN, and Prioritized Experience Replay.

⬇️ Here is an example of what **you will achieve** ⬇️

```python
%%html
<video controls autoplay><source src="https://huggingface.co/ThomasSimonini/ppo-SpaceInvadersNoFrameskip-v4/resolve/main/replay.mp4" type="video/mp4"></video>
```

### 🎮 Environments:

- SpaceInvadersNoFrameskip-v4
- [SpaceInvadersNoFrameskip-v4](https://gymnasium.farama.org/environments/atari/space_invaders/)

You can see the difference between Space Invaders versions here 👉 https://gymnasium.farama.org/environments/atari/space_invaders/#variants

### 📚 RL-Library:

- [RL-Baselines3-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo)

## Objectives 🏆

At the end of the notebook, you will:
## Objectives of this hands-on 🏆

At the end of the hands-on, you will:
- Be able to understand more deeply **how RL Baselines3 Zoo works**.
- Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score 🔥.

## Prerequisites 🏗️
Before diving into the notebook, you need to:

Before diving into the hands-on, you need to:

🔲 📚 **[Study Deep Q-Learning by reading Unit 3](https://huggingface.co/deep-rl-course/unit3/introduction)** 🤗

We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues).
We're constantly trying to improve our tutorials, so **if you find some issues in this hands-on**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues).

# Let's train a Deep Q-Learning agent playing Atari's Space Invaders 👾 and upload it to the Hub.

We strongly recommend students **use Google Colab for the hands-on exercises instead of running them on their personal computers**.

By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects of setting up your environments**.

To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**.

To find your result, go to the leaderboard and find your model, **the result = mean_reward - std of reward**

For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
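The leaderboard score described above (result = mean_reward - std of reward) can be computed directly from a list of episode rewards. A minimal sketch (the reward values below are made up for illustration; SB3's `evaluate_policy` reports the population standard deviation, which `statistics.pstdev` matches):

```python
# Hedged sketch: how the leaderboard result is computed from episode rewards,
# following the formula given in the course text (mean reward minus its std).
import statistics

def leaderboard_result(episode_rewards):
    """Certification score: mean episode reward minus its standard deviation."""
    mean_reward = statistics.mean(episode_rewards)
    std_reward = statistics.pstdev(episode_rewards)  # population std, like numpy's default
    return mean_reward - std_reward

# Illustrative evaluation rewards for a Space Invaders agent.
rewards = [305.0, 410.0, 265.0, 390.0, 330.0]
print(round(leaderboard_result(rewards), 2))
```

A high mean with a large spread can therefore score below a slightly lower but more consistent agent, which is the point of the penalty.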

## Set the GPU 💪

- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`

@@ -83,11 +87,37 @@ We're constantly trying to improve our tutorials, so **if you find some issues i

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/gpu-step2.jpg" alt="GPU Step 2">

# Install RL-Baselines3 Zoo and its dependencies 📚

If you see `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.` **this is normal and not a critical error**; there's a version conflict, but the packages we need are installed.

```python
# For now we install this update of RL-Baselines3 Zoo
pip install git+https://github.com/DLR-RM/rl-baselines3-zoo@update/hf
```

IF AND ONLY IF THE VERSION ABOVE DOES NOT EXIST ANYMORE. UNCOMMENT AND INSTALL THE ONE BELOW

```python
#pip install rl_zoo3==2.0.0a9
```

```bash
apt-get install swig cmake ffmpeg
```

To be able to use Atari games in Gymnasium we need to install the atari package, and accept-rom-license to download the ROM files (game files).

```python
!pip install gymnasium[atari]
!pip install gymnasium[accept-rom-license]
```

## Create a virtual display 🔽

During the notebook, we'll need to generate a replay video. To do so, with colab, **we need to have a virtual screen to be able to render the environment** (and thus record the frames).
During the hands-on, we'll need to generate a replay video. To do so, if you train it on a headless machine, **we need to have a virtual screen to be able to render the environment** (and thus record the frames).

The following cell will install the libraries and create and run a virtual screen 🖥
Hence the following cell will install the libraries and create and run a virtual screen 🖥

```bash
apt install python-opengl
@@ -96,14 +126,6 @@ apt install xvfb
pip3 install pyvirtualdisplay
```

```bash
apt-get install swig cmake freeglut3-dev
```

```bash
pip install pyglet==1.5.1
```

```python
# Virtual display
from pyvirtualdisplay import Display
@@ -112,94 +134,97 @@ virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
```

## Clone RL-Baselines3 Zoo Repo 📚
You could directly install from the Python package (`pip install rl_zoo3`), but since we want **the full installation with extra environments and dependencies**, we're going to clone the `RL-Baselines3-Zoo` repository and install from source.

```bash
git clone https://github.com/DLR-RM/rl-baselines3-zoo
```

## Install dependencies 🔽
We can now install the dependencies RL-Baselines3 Zoo needs (this can take 5min ⏲)

```bash
cd /content/rl-baselines3-zoo/
```

```bash
pip install setuptools==65.5.0
pip install -r requirements.txt
# Since colab uses Python 3.9 we need to add this installation
pip install gym[atari,accept-rom-license]==0.21.0
```

## Train our Deep Q-Learning Agent to Play Space Invaders 👾

To train an agent with RL-Baselines3-Zoo, we just need to do two things:
1. We define the hyperparameters in `/content/rl-baselines3-zoo/hyperparams/dqn.yml`

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit3/hyperparameters.png" alt="DQN Hyperparameters">
1. Create a hyperparameter config file that will contain our training hyperparameters called `dqn.yml`.

This is a template example:

```
SpaceInvadersNoFrameskip-v4:
  env_wrapper:
    - stable_baselines3.common.atari_wrappers.AtariWrapper
  frame_stack: 4
  policy: 'CnnPolicy'
  n_timesteps: !!float 1e7
  buffer_size: 100000
  learning_rate: !!float 1e-4
  batch_size: 32
  learning_starts: 100000
  target_update_interval: 1000
  train_freq: 4
  gradient_steps: 1
  exploration_fraction: 0.1
  exploration_final_eps: 0.01
  # If True, you need to deactivate handle_timeout_termination
  # in the replay_buffer_kwargs
  optimize_memory_usage: False
```

Here we see that:
- We use the `Atari Wrapper` that does the pre-processing (Frame reduction, grayscale, stack four frames),
- We use the `CnnPolicy`, since we use Convolutional layers to process the frames.
- We train the model for 10 million `n_timesteps`.
- Memory (Experience Replay) size is 100000, i.e. the number of experience steps you saved to train again your agent with.
- We use the `Atari Wrapper` that preprocess the input (Frame reduction, grayscale, stack 4 frames)
- We use `CnnPolicy`, since we use Convolutional layers to process the frames
- We train it for 10 million `n_timesteps`
- Memory (Experience Replay) size is 100000, aka the amount of experience steps you saved to train again your agent with.
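The `frame_stack: 4` idea from the bullets above can be sketched in plain Python. This is an illustrative stand-in, not the actual `stable_baselines3` wrapper: a rolling window keeps the 4 most recent (already downscaled, grayscale) frames so the network can infer motion:

```python
# Hedged sketch of frame stacking as described above (illustrative only;
# the real AtariWrapper/VecFrameStack in stable-baselines3 uses numpy arrays).
from collections import deque

class FrameStack:
    def __init__(self, n_frames=4):
        # deque(maxlen=n) drops the oldest frame automatically.
        self.frames = deque(maxlen=n_frames)

    def reset(self, first_frame):
        # On episode reset, fill the stack with copies of the first frame.
        for _ in range(self.frames.maxlen):
            self.frames.append(first_frame)
        return self.observation()

    def step(self, frame):
        self.frames.append(frame)
        return self.observation()

    def observation(self):
        # A list of 4 frames, e.g. shape (4, 84, 84) with 84x84 grayscale frames:
        # this stacked tensor is what the CnnPolicy consumes.
        return list(self.frames)

stack = FrameStack(n_frames=4)
obs = stack.reset([[0.0] * 84] * 84)  # a dummy 84x84 "frame"
print(len(obs))
```

Stacking consecutive frames is what lets a feed-forward CNN see velocity (e.g. which way the projectiles are moving), which a single frame cannot convey.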

💡 My advice is to **reduce the training timesteps to 1M,** which will take about 90 minutes on a P100. `!nvidia-smi` will tell you what GPU you're using. At 10 million steps, this will take about 9 hours, which could likely result in Colab timing out. I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`.
💡 My advice is to **reduce the training timesteps to 1M,** which will take about 90 minutes on a P100. `!nvidia-smi` will tell you what GPU you're using. At 10 million steps, this will take about 9 hours. I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`.
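The `exploration_fraction: 0.1` and `exploration_final_eps: 0.01` entries in the config define a linear epsilon-greedy schedule. A minimal sketch, assuming SB3's default `exploration_initial_eps` of 1.0:

```python
# Hedged sketch of the linear epsilon schedule implied by the config above
# (assumption: epsilon starts at 1.0, SB3 DQN's documented default).
def epsilon(step, n_timesteps=10_000_000, fraction=0.1,
            initial_eps=1.0, final_eps=0.01):
    """Linearly anneal epsilon over the first `fraction` of training."""
    end_step = fraction * n_timesteps  # annealing finishes at 1M steps here
    if step >= end_step:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * (step / end_step)

print(epsilon(0), epsilon(500_000), epsilon(2_000_000))
```

So the agent acts almost fully at random at the start, and takes a random action only 1% of the time after the first million steps.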

In terms of hyperparameter optimization, my advice is to focus on these 3 hyperparameters:
- `learning_rate`
- `buffer_size (Experience Memory size)`
- `batch_size`
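To make `buffer_size` and `batch_size` concrete, here is a minimal, illustrative replay buffer (SB3's real `ReplayBuffer` uses preallocated numpy arrays, but the idea is the same): a bounded FIFO memory of transitions that we sample minibatches from.

```python
# Hedged sketch of experience replay, the mechanism behind buffer_size and
# batch_size in the config (illustrative, not stable-baselines3's code).
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, buffer_size=100_000):
        # When full, the oldest transitions are evicted automatically.
        self.memory = deque(maxlen=buffer_size)

    def add(self, obs, action, reward, next_obs, done):
        self.memory.append((obs, action, reward, next_obs, done))

    def sample(self, batch_size=32):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes Q-learning updates.
        return random.sample(list(self.memory), batch_size)

buffer = ReplayBuffer(buffer_size=1000)
for t in range(50):
    buffer.add(t, 0, 1.0, t + 1, False)
batch = buffer.sample(batch_size=32)
print(len(batch))
```

A larger `buffer_size` keeps older, more diverse experience around (at the cost of memory), while `batch_size` controls how many transitions each gradient step averages over.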

As a good practice, you need to **check the documentation to understand what each hyperparameter does**: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html#parameters
As a good practice, you need to **check the documentation to understand what each hyperparameters does**: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html#parameters

2. We run `train.py` and save the models on `logs` folder 📁
2. We start the training and save the models on `logs` folder 📁

- Define the algorithm after `--algo`, where we save the model after `-f` and where the hyperparameter config is after `-c`.

```bash
python train.py --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________
python -m rl_zoo3.train --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ -c _________
```

#### Solution

```bash
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -c dqn.yml
```

## Let's evaluate our agent 👀

- RL-Baselines3-Zoo provides `enjoy.py`, a python script to evaluate our agent. In most RL libraries, we call the evaluation script `enjoy.py`.
- Let's evaluate it for 5000 timesteps 🔥

```bash
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/
```

#### Solution

```bash
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/
```
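What the evaluation script does conceptually, roll out the (greedy) policy for a fixed number of timesteps and track per-episode returns, can be sketched with a toy stand-in. `ToyEnv` and `greedy_policy` below are hypothetical placeholders, not the real Atari environment or trained agent:

```python
# Hedged sketch of an evaluation rollout loop (illustrative placeholders only).
class ToyEnv:
    """Stand-in environment: every step gives reward 1, episodes last 10 steps."""
    def __init__(self, episode_len=10):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return 0

    def step(self, action):
        self.t += 1
        reward, done = 1.0, self.t >= self.episode_len
        return self.t, reward, done

def greedy_policy(obs):
    return 0  # a trained agent would return argmax over Q(obs, action)

def evaluate(env, policy, n_timesteps=50):
    returns, episode_return = [], 0.0
    obs = env.reset()
    for _ in range(n_timesteps):
        obs, reward, done = env.step(policy(obs))
        episode_return += reward
        if done:
            returns.append(episode_return)
            episode_return, obs = 0.0, env.reset()
    return returns

print(evaluate(ToyEnv(), greedy_policy, n_timesteps=50))
```

The mean and standard deviation of these per-episode returns are exactly the numbers the leaderboard score is built from.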

## Publish our trained model on the Hub 🚀
Now that we saw we got good results after the training, we can publish our trained model to the Hub with one line of code.
Now that we saw we got good results after the training, we can publish our trained model on the hub 🤗 with one line of code.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit3/space-invaders-model.gif" alt="Space Invaders model">

By using `rl_zoo3.push_to_hub.py`, **you evaluate, record a replay, generate a model card of your agent, and push it to the Hub**.
By using `rl_zoo3.push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.

This way:
- You can **showcase your work** 🔥
- You can **showcase our work** 🔥
- You can **visualize your agent playing** 👀
- You can **share an agent with the community that others can use** 💾
- You can **share with the community an agent that others can use** 💾
- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard

To be able to share your model with the community, there are three more steps to follow:
To be able to share your model with the community there are three more steps to follow:

1️⃣ (If it's not already done) create an account in HF ➡ https://huggingface.co/join
1️⃣ (If it's not already done) create an account to HF ➡ https://huggingface.co/join

2️⃣ Sign in and then, you need to store your authentication token from the Hugging Face website.
- Create a new token (https://huggingface.co/settings/tokens) **with write role**
@@ -209,20 +234,23 @@ To be able to share your model with the community, there are three more steps to
- Copy the token
- Run the cell below and paste the token

```python
```bash
from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.
notebook_login()
git config --global credential.helper store
!git config --global credential.helper store
```

If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`
If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`

3️⃣ We're now ready to push our trained agent to the Hub 🔥
3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥

Let's run the `push_to_hub.py` file to upload our trained agent to the Hub. There are two important parameters:
Let's run push_to_hub.py file to upload our trained agent to the Hub.

* `--repo-name `: The name of the repo
* `-orga`: Your Hugging Face username
`--repo-name `: The name of the repo

`-orga`: Your Hugging Face username

`-f`: Where the trained model folder is (in our case `logs`)

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit3/select-id.png" alt="Select Id">

@@ -236,6 +264,8 @@ python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name dqn-SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/
```
Congrats 🥳 you've just trained and uploaded your first Deep Q-Learning agent using RL-Baselines-3 Zoo. The script above should have displayed a link to a model repository such as https://huggingface.co/ThomasSimonini/dqn-SpaceInvadersNoFrameskip-v4. When you go to this link, you can:

- See a **video preview of your agent** at the right.

@@ -249,7 +279,7 @@ Under the hood, the Hub uses git-based repositories (don't worry if you don't kn

## Load a powerful trained model 🔥

The Stable-Baselines3 team uploaded **more than 150 trained Deep Reinforcement Learning agents on the Hub**. You can download them and use them to see how they perform!
- The Stable-Baselines3 team uploaded **more than 150 trained Deep Reinforcement Learning agents on the Hub**.

You can find them here: 👉 https://huggingface.co/sb3

@@ -261,10 +291,6 @@ Some examples:

Let's load an agent playing Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4

```python
<video controls autoplay><source src="https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4/resolve/main/replay.mp4" type="video/mp4"></video>
```

1. We download the model using `rl_zoo3.load_from_hub`, and place it in a new folder that we can call `rl_trained`

```bash
@@ -275,19 +301,19 @@ python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga s

2. Let's evaluate it for 5000 timesteps

```bash
python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/
python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ --no-render
```

Why not try training your own **Deep Q-Learning Agent playing BeamRiderNoFrameskip-v4? 🏆.**
Why not trying to train your own **Deep Q-Learning Agent playing BeamRiderNoFrameskip-v4? 🏆.**

If you want to try, check out https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4#hyperparameters. There, **in the model card, you'll find the hyperparameters of the trained agent.**
If you want to try, check https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4#hyperparameters **in the model card, you have the hyperparameters of the trained agent.**

Finding hyperparameters in general can be a daunting task. Fortunately, we'll see in the next bonus Unit how we can **use Optuna for optimizing the Hyperparameters 🔥.**
But finding hyperparameters can be a daunting task. Fortunately, we'll see in the next Unit, how we can **use Optuna for optimizing the Hyperparameters 🔥.**

## Some additional challenges 🏆

The best way to learn **is to try things on your own**!
The best way to learn **is to try things by your own**!

In the [Leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?

@@ -297,18 +323,25 @@ Here's a list of environments you can try to train your agent with:
- EnduroNoFrameskip-v4
- PongNoFrameskip-v4

Also, **if you want to learn to implement Deep Q-Learning by yourself**, you definitely should look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py
Also, **if you want to learn to implement Deep Q-Learning by yourself**, you definitely should look at CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/atari-envs.gif" alt="Environments"/>

________________________________________________________________________
Congrats on finishing this chapter!

If you still feel confused with all these elements... it's totally normal! **This was the same for me and for all people who study RL.**
If you still feel confused with all these elements... it's totally normal! **This was the same for me and for all people who studied RL.**

Take time to really **grasp the material before continuing and try the additional challenges**. It’s important to master these elements and have a solid foundation.
Take time to really **grasp the material before continuing and try the additional challenges**. It’s important to master these elements and having a solid foundation.

In the next unit, **we’re going to learn about [Optuna](https://optuna.org/)**. One of the most critical tasks in Deep Reinforcement Learning is to find a good set of training hyperparameters. And Optuna is a library that helps you to automate the search.
In the next unit, **we’re going to learn about [Optuna](https://optuna.org/)**. One of the most critical task in Deep Reinforcement Learning is to find a good set of training hyperparameters. And Optuna is a library that helps you to automate the search.

### This is a course built with you 👷🏿♀️

Finally, we want to improve and update the course iteratively with your feedback. If you have some, please fill out this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9

We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues).

See you in Bonus Unit 2! 🔥