diff --git a/notebooks/unit3.ipynb b/notebooks/unit3.ipynb deleted file mode 100644 index f9eee5e..0000000 --- a/notebooks/unit3.ipynb +++ /dev/null @@ -1,833 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "view-in-github", - "colab_type": "text" - }, - "source": [ - "\"Open" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "k7xBVPzoXxOg" - }, - "source": [ - "# Unit 3: Deep Q-Learning with Atari Games ๐Ÿ‘พ using RL Baselines3 Zoo\n", - "\n", - "\"Unit\n", - "\n", - "In this notebook, **you'll train a Deep Q-Learning agent** playing Space Invaders using [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), a training framework based on [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos.\n", - "\n", - "We're using the [RL-Baselines-3 Zoo integration, a vanilla version of Deep Q-Learning](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) with no extensions such as Double-DQN, Dueling-DQN, and Prioritized Experience Replay.\n", - "\n", - "โฌ‡๏ธ Here is an example of what **you will achieve** โฌ‡๏ธ" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "J9S713biXntc" - }, - "outputs": [], - "source": [ - "%%html\n", - "" - ] - }, - { - "cell_type": "markdown", - "source": [ - "### ๐ŸŽฎ Environments: \n", - "\n", - "- [SpacesInvadersNoFrameskip-v4](https://gymnasium.farama.org/environments/atari/space_invaders/)\n", - "\n", - "You can see the difference between Space Invaders versions here ๐Ÿ‘‰ https://gymnasium.farama.org/environments/atari/space_invaders/#variants\n", - "\n", - "### ๐Ÿ“š RL-Library: \n", - "\n", - "- [RL-Baselines3-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo)" - ], - "metadata": { - "id": "ykJiGevCMVc5" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "wciHGjrFYz9m" - }, - "source": [ 
- "## Objectives of this notebook 🏆\n", - "At the end of the notebook, you will:\n", - "- Be able to understand more deeply **how RL Baselines3 Zoo works**.\n", - "- Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score 🔥.\n", - "\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "source": [ - "## This notebook is from the Deep Reinforcement Learning Course\n", - "\"Deep" - ], - "metadata": { - "id": "TsnP0rjxMn1e" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "nw6fJHIAZd-J" - }, - "source": [ - "In this free course, you will:\n", - "\n", - "- 📖 Study Deep Reinforcement Learning in **theory and practice**.\n", - "- 🧑‍💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.\n", - "- 🤖 Train **agents in unique environments**.\n", - "\n", - "And more! Check 📚 the syllabus 👉 https://simoninithomas.github.io/deep-rl-course\n", - "\n", - "Don't forget to **sign up for the course** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates**).\n", - "\n", - "\n", - "The best way to keep in touch is to join our Discord server to exchange with the community and with us 👉🏻 https://discord.gg/ydHrjt3WP5" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "0vgANIBBZg1p" - }, - "source": [ - "## Prerequisites 🏗️\n", - "Before diving into the notebook, you need to:\n", - "\n", - "🔲 📚 **[Study Deep Q-Learning by reading Unit 3](https://huggingface.co/deep-rl-course/unit3/introduction)** 🤗 " - ] - }, - { - "cell_type": "markdown", - "source": [ - "We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues)."
- ], - "metadata": { - "id": "7kszpGFaRVhq" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "QR0jZtYreSI5" - }, - "source": [ - "# Let's train a Deep Q-Learning agent playing Atari's Space Invaders 👾 and upload it to the Hub.\n", - "\n", - "We strongly recommend that students **use Google Colab for the hands-on exercises instead of running them on their personal computers**.\n", - "\n", - "By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects of setting up your environments**.\n", - "\n", - "To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**.\n", - "\n", - "To find your result, go to the leaderboard and find your model: **the result = mean_reward - std of reward**.\n", - "\n", - "For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process" - ] - }, - { - "cell_type": "markdown", - "source": [ - "## Some advice 💡\n", - "It's better to run this Colab in a copy on your Google Drive, so that **if it times out** you still have the saved notebook on your Google Drive and don't have to redo everything from scratch.\n", - "\n", - "To do that, you can either press `Ctrl + S` or go to `File > Save a copy in Google Drive`.\n", - "\n", - "Also, we're going to **train it for 90 minutes with 1M timesteps**. Running `!nvidia-smi` will tell you what GPU you're using.\n", - "\n", - "If you want to train for more, such as 10 million steps, it will take about 9 hours, potentially resulting in Colab timing out. In that case, I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`. " - ], - "metadata": { - "id": "Nc8BnyVEc3Ys" - } - }, - { - "cell_type": "markdown", - "source": [ - "## Set the GPU 💪\n", - "- To **accelerate the agent's training, we'll use a GPU**. 
To do that, go to `Runtime > Change Runtime type`\n", - "\n", - "\"GPU" - ], - "metadata": { - "id": "PU4FVzaoM6fC" - } - }, - { - "cell_type": "markdown", - "source": [ - "- `Hardware Accelerator > GPU`\n", - "\n", - "\"GPU" - ], - "metadata": { - "id": "KV0NyFdQM9ZG" - } - }, - { - "cell_type": "markdown", - "source": [ - "# Install RL-Baselines3 Zoo and its dependencies 📚\n", - "\n", - "If you see `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.` **this is normal and not a critical error**: there's a version conflict, but the packages we need are installed." - ], - "metadata": { - "id": "wS_cVefO-aYg" - } - }, - { - "cell_type": "code", - "source": [ - "# For now we install this update of RL-Baselines3 Zoo\n", - "!pip install git+https://github.com/DLR-RM/rl-baselines3-zoo@update/hf" - ], - "metadata": { - "id": "hLTwHqIWdnPb" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "IF AND ONLY IF THE VERSION ABOVE NO LONGER EXISTS, UNCOMMENT AND INSTALL THE VERSION BELOW" - ], - "metadata": { - "id": "p0xe2sJHdtHy" - } - }, - { - "cell_type": "code", - "source": [ - "#!pip install rl_zoo3==2.0.0a9" - ], - "metadata": { - "id": "N0d6wy-F-f39" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "code", - "source": [ - "!apt-get install swig cmake ffmpeg" - ], - "metadata": { - "id": "8_MllY6Om1eI" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "4S9mJiKg6SqC" - }, - "source": [ - "To be able to use Atari games in Gymnasium, we need to install the `atari` package, and `accept-rom-license` to download the ROM files (game files)."
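Once these packages are installed, the Atari environments emit raw RGB frames of shape (210, 160, 3). As a rough, self-contained NumPy sketch of the kind of preprocessing the Atari wrapper applies later in this notebook (grayscale, downscaling, frame stacking); note that the slicing-based downsample below is an illustrative simplification, not the wrapper's actual 84x84 image resize:

```python
import numpy as np

def preprocess(frame):
    """Grayscale a raw (210, 160, 3) RGB Atari frame and crudely downsample it.

    Simplified stand-in for AtariWrapper's preprocessing: the real wrapper
    resizes to 84x84; here we just average channels and keep every other pixel.
    """
    gray = frame.mean(axis=2)                # (210, 160) luminance approximation
    return gray[::2, ::2].astype(np.uint8)   # crude 2x downsample -> (105, 80)

# Stack the 4 most recent frames so the network can infer motion
frames = [preprocess(np.zeros((210, 160, 3), dtype=np.uint8)) for _ in range(4)]
stacked = np.stack(frames, axis=0)
print(stacked.shape)  # (4, 105, 80)
```

This is only meant to show why the config below uses `frame_stack: 4`: a single grayscale frame carries no velocity information, while a stack of 4 does.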
- ] - }, - { - "cell_type": "code", - "source": [ - "!pip install gymnasium[atari]\n", - "!pip install gymnasium[accept-rom-license]" - ], - "metadata": { - "id": "NsRP-lX1_2fC" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "## Create a virtual display 🔽\n", - "\n", - "During the notebook, we'll need to generate a replay video. To do so with Colab, **we need a virtual screen to be able to render the environment** (and thus record the frames). \n", - "\n", - "Hence the following cell will install the libraries and create and run a virtual screen 🖥" - ], - "metadata": { - "id": "bTpYcVZVMzUI" - } - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "jV6wjQ7Be7p5" - }, - "outputs": [], - "source": [ - "%%capture\n", - "!apt install python-opengl\n", - "!apt install ffmpeg\n", - "!apt install xvfb\n", - "!pip3 install pyvirtualdisplay" - ] - }, - { - "cell_type": "code", - "source": [ - "# Virtual display\n", - "from pyvirtualdisplay import Display\n", - "\n", - "virtual_display = Display(visible=0, size=(1400, 900))\n", - "virtual_display.start()" - ], - "metadata": { - "id": "BE5JWP5rQIKf" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "5iPgzluo9z-u" - }, - "source": [ - "## Train our Deep Q-Learning Agent to Play Space Invaders 👾\n", - "\n", - "To train an agent with RL-Baselines3-Zoo, we just need to do two things:\n", - "\n", - "1. 
Create a hyperparameter config file called `dqn.yml` that will contain our training hyperparameters.\n", - "\n", - "This is a template example:\n", - "\n", - "```\n", - "SpaceInvadersNoFrameskip-v4:\n", - " env_wrapper:\n", - " - stable_baselines3.common.atari_wrappers.AtariWrapper\n", - " frame_stack: 4\n", - " policy: 'CnnPolicy'\n", - " n_timesteps: !!float 1e7\n", - " buffer_size: 100000\n", - " learning_rate: !!float 1e-4\n", - " batch_size: 32\n", - " learning_starts: 100000\n", - " target_update_interval: 1000\n", - " train_freq: 4\n", - " gradient_steps: 1\n", - " exploration_fraction: 0.1\n", - " exploration_final_eps: 0.01\n", - " # If True, you need to deactivate handle_timeout_termination\n", - " # in the replay_buffer_kwargs\n", - " optimize_memory_usage: False\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_VjblFSVDQOj" - }, - "source": [ - "Here we see that:\n", - "- We use the `AtariWrapper` that preprocesses the input (frame reduction, grayscale, stacking 4 frames)\n", - "- We use `CnnPolicy`, since we use convolutional layers to process the frames\n", - "- We train it for 10 million `n_timesteps`\n", - "- The memory (Experience Replay) size is 100000, i.e. the number of experience steps you save to train your agent on again.\n", - "\n", - "💡 My advice is to **reduce the training timesteps to 1M,** which will take about 90 minutes on a P100. `!nvidia-smi` will tell you what GPU you're using. At 10 million steps, this will take about 9 hours, which could likely result in Colab timing out. I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`. 
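To make the exploration entries of the template concrete, here is a small sketch (a simplification, not Stable-Baselines3's actual code) of the linear epsilon schedule they imply: epsilon anneals from 1.0 down to `exploration_final_eps` over the first `exploration_fraction` of the `n_timesteps`, then stays constant.

```python
def epsilon_schedule(step, n_timesteps=10_000_000,
                     exploration_fraction=0.1,
                     exploration_initial_eps=1.0,
                     exploration_final_eps=0.01):
    """Linear epsilon decay matching the dqn.yml template above (sketch only)."""
    decay_steps = exploration_fraction * n_timesteps  # 1M of the 10M steps
    if step >= decay_steps:
        return exploration_final_eps
    frac = step / decay_steps
    return exploration_initial_eps + frac * (exploration_final_eps - exploration_initial_eps)

print(epsilon_schedule(0))          # 1.0: fully random at the start
print(epsilon_schedule(500_000))    # ~0.505: halfway through the decay
print(epsilon_schedule(2_000_000))  # 0.01: decay finished, mostly greedy
```

This is one reason `exploration_fraction` matters when you shorten training: with `n_timesteps` reduced to 1M, the same fraction means epsilon finishes decaying after only 100k steps.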
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "5qTkbWrkECOJ" - }, - "source": [ - "In terms of hyperparameters optimization, my advice is to focus on these 3 hyperparameters:\n", - "- `learning_rate`\n", - "- `buffer_size (Experience Memory size)`\n", - "- `batch_size`\n", - "\n", - "As a good practice, you need to **check the documentation to understand what each hyperparameters does**: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html#parameters\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Hn8bRTHvERRL" - }, - "source": [ - "2. We start the training and save the models on `logs` folder ๐Ÿ“\n", - "\n", - "- Define the algorithm after `--algo`, where we save the model after `-f` and where the hyperparameter config is after `-c`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Xr1TVW4xfbz3" - }, - "outputs": [], - "source": [ - "!python -m rl_zoo3.train --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ -c _________" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "SeChoX-3SZfP" - }, - "source": [ - "#### Solution" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "PuocgdokSab9" - }, - "outputs": [], - "source": [ - "!python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -c dqn.yml" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_dLomIiMKQaf" - }, - "source": [ - "## Let's evaluate our agent ๐Ÿ‘€\n", - "- RL-Baselines3-Zoo provides `enjoy.py`, a python script to evaluate our agent. 
In most RL libraries, we call the evaluation script `enjoy.py`.\n", - "- Let's evaluate it for 5000 timesteps 🔥" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "co5um_KeKbBJ" - }, - "outputs": [], - "source": [ - "!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/ " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Q24K1tyWSj7t" - }, - "source": [ - "#### Solution" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "P_uSmwGRSk0z" - }, - "outputs": [], - "source": [ - "!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "liBeTltiHJtr" - }, - "source": [ - "## Publish our trained model on the Hub 🚀\n", - "Now that we've seen good results after training, we can publish our trained model on the Hub 🤗 with one line of code.\n", - "\n", - "\"Space" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ezbHS1q3HYVV" - }, - "source": [ - "By using `rl_zoo3.push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the Hub**.\n", - "\n", - "This way:\n", - "- You can **showcase your work** 🔥\n", - "- You can **visualize your agent playing** 👀\n", - "- You can **share with the community an agent that others can use** 💾\n", - "- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XMSeZRBiHk6X" - }, - "source": [ - "To be able to share your model with the community, there are three more steps to follow:\n", - "\n", - "1️⃣ (If it's not already done) create an account on HF ➡ 
https://huggingface.co/join\n", - "\n", - "2️⃣ Sign in, then store your authentication token from the Hugging Face website.\n", - "- Create a new token (https://huggingface.co/settings/tokens) **with write role**\n", - "\n", - "\"Create" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "9O6FI0F8HnzE" - }, - "source": [ - "- Copy the token \n", - "- Run the cell below and paste the token" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Ppu9yePwHrZX" - }, - "outputs": [], - "source": [ - "from huggingface_hub import notebook_login # To log in to our Hugging Face account to be able to upload models to the Hub.\n", - "notebook_login()\n", - "!git config --global credential.helper store" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "2RVEdunPHs8B" - }, - "source": [ - "If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "dSLwdmvhHvjw" - }, - "source": [ - "3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "PW436XnhHw1H" - }, - "source": [ - "Let's run the `rl_zoo3.push_to_hub` script to upload our trained agent to the Hub.\n", - "\n", - "`--repo-name`: The name of the repo\n", - "\n", - "`-orga`: Your Hugging Face username\n", - "\n", - "`-f`: Where the trained model folder is (in our case `logs`)\n", - "\n", - "\"Select" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Ygk2sEktTDEw" - }, - "outputs": [], - "source": [ - "!python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name _____________________ -orga _____________________ -f logs/" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "otgpa0rhS9wR" - }, - "source": [ - "#### Solution" - ] - }, - { - "cell_type": "code", - "execution_count": 
null, - "metadata": { - "id": "_HQNlAXuEhci" - }, - "outputs": [], - "source": [ - "!python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name dqn-SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "ff89kd2HL1_s" - }, - "source": [ - "Congrats 🥳 you've just trained and uploaded your first Deep Q-Learning agent using RL-Baselines-3 Zoo. The script above should have displayed a link to a model repository such as https://huggingface.co/ThomasSimonini/dqn-SpaceInvadersNoFrameskip-v4. When you go to this link, you can:\n", - "\n", - "- See a **video preview of your agent** on the right. \n", - "- Click \"Files and versions\" to see all the files in the repository.\n", - "- Click \"Use in stable-baselines3\" to get a code snippet that shows how to load the model.\n", - "- See the model card (the `README.md` file), which gives a description of the model and the hyperparameters you used.\n", - "\n", - "Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent.\n", - "\n", - "**Compare the results of your agents with your classmates'** using the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) 🏆" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "fyRKcCYY-dIo" - }, - "source": [ - "## Load a powerful trained model 🔥\n", - "- The Stable-Baselines3 team uploaded **more than 150 trained Deep Reinforcement Learning agents to the Hub**.\n", - "\n", - "You can find them here: 👉 https://huggingface.co/sb3\n", - "\n", - "Some examples:\n", - "- Asteroids: https://huggingface.co/sb3/dqn-AsteroidsNoFrameskip-v4\n", - "- Beam Rider: 
https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4\n", - "- Breakout: https://huggingface.co/sb3/dqn-BreakoutNoFrameskip-v4\n", - "- Road Runner: https://huggingface.co/sb3/dqn-RoadRunnerNoFrameskip-v4\n", - "\n", - "Let's load an agent playing Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "B-9QVFIROI5Y" - }, - "outputs": [], - "source": [ - "%%html\n", - "" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "7ZQNY_r6NJtC" - }, - "source": [ - "1. We download the model using `rl_zoo3.load_from_hub`, and place it in a new folder that we can call `rl_trained`" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "OdBNZHy0NGTR" - }, - "outputs": [], - "source": [ - "# Download the model and save it into the rl_trained/ folder\n", - "!python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga sb3 -f rl_trained/" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "LFt6hmWsNdBo" - }, - "source": [ - "2. Let's evaluate it for 5000 timesteps" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "aOxs0rNuN0uS" - }, - "outputs": [], - "source": [ - "!python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ --no-render" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "kxMDuDfPON57" - }, - "source": [ - "Why not try to train your own **Deep Q-Learning agent playing BeamRiderNoFrameskip-v4?** 🏆\n", - "\n", - "If you want to try, check https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4#hyperparameters: **in the model card, you'll find the hyperparameters of the trained agent.**" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "xL_ZtUgpOuY6" - }, - "source": [ - "But finding hyperparameters can be a daunting task. 
Fortunately, we'll see in the next Unit how we can **use Optuna to optimize the hyperparameters 🔥.**\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-pqaco8W-huW" - }, - "source": [ - "## Some additional challenges 🏆\n", - "The best way to learn **is to try things on your own**!\n", - "\n", - "In the [Leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?\n", - "\n", - "Here's a list of environments you can try to train your agent with:\n", - "- BeamRiderNoFrameskip-v4\n", - "- BreakoutNoFrameskip-v4 \n", - "- EnduroNoFrameskip-v4\n", - "- PongNoFrameskip-v4\n", - "\n", - "Also, **if you want to learn to implement Deep Q-Learning by yourself**, you definitely should look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py\n", - "\n", - "\"Environments\"/" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "paS-XKo4-kmu" - }, - "source": [ - "________________________________________________________________________\n", - "Congrats on finishing this chapter!\n", - "\n", - "If you still feel confused by all these elements... it's totally normal! **It was the same for me and for everyone who has studied RL.**\n", - "\n", - "Take time to really **grasp the material before continuing and try the additional challenges**. It's important to master these elements and have a solid foundation.\n", - "\n", - "In the next unit, **we're going to learn about [Optuna](https://optuna.org/)**. One of the most critical tasks in Deep Reinforcement Learning is to find a good set of training hyperparameters. 
And Optuna is a library that helps you to automate the search.\n", - "\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "5WRx7tO7-mvC" - }, - "source": [ - "\n", - "\n", - "### This is a course built with you ๐Ÿ‘ท๐Ÿฟโ€โ™€๏ธ\n", - "\n", - "Finally, we want to improve and update the course iteratively with your feedback. If you have some, please fill this form ๐Ÿ‘‰ https://forms.gle/3HgA7bEHwAmmLfwh9\n", - "\n", - "We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues)." - ] - }, - { - "cell_type": "markdown", - "source": [ - "See you on Bonus unit 2! ๐Ÿ”ฅ " - ], - "metadata": { - "id": "Kc3udPT-RcXc" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "fS3Xerx0fIMV" - }, - "source": [ - "### Keep Learning, Stay Awesome ๐Ÿค—" - ] - } - ], - "metadata": { - "colab": { - "private_outputs": true, - "provenance": [], - "include_colab_link": true - }, - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.6" - }, - "varInspector": { - "cols": { - "lenName": 16, - "lenType": 16, - "lenVar": 40 - }, - "kernels_config": { - "python": { - "delete_cmd_postfix": "", - "delete_cmd_prefix": "del ", - "library": "var_list.py", - "varRefreshCmd": "print(var_dic_list())" - }, - "r": { - "delete_cmd_postfix": ") ", - "delete_cmd_prefix": "rm(", - "library": "var_list.r", - "varRefreshCmd": "cat(var_dic_list()) " - } - }, - "types_to_exclude": [ - "module", - "function", - "builtin_function_or_method", - "instance", - "_Feature" - ], - "window_display": false - }, - "accelerator": "GPU", - "gpuClass": "standard" 
- }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/notebooks/unit3/unit3.ipynb b/notebooks/unit3/unit3.ipynb index 2252762..f9eee5e 100644 --- a/notebooks/unit3/unit3.ipynb +++ b/notebooks/unit3/unit3.ipynb @@ -7,7 +7,7 @@ "colab_type": "text" }, "source": [ - "\"Open" + "\"Open" ] }, { @@ -44,7 +44,9 @@ "source": [ "### ๐ŸŽฎ Environments: \n", "\n", - "- SpacesInvadersNoFrameskip-v4 \n", + "- [SpacesInvadersNoFrameskip-v4](https://gymnasium.farama.org/environments/atari/space_invaders/)\n", + "\n", + "You can see the difference between Space Invaders versions here ๐Ÿ‘‰ https://gymnasium.farama.org/environments/atari/space_invaders/#variants\n", "\n", "### ๐Ÿ“š RL-Library: \n", "\n", @@ -127,6 +129,10 @@ "source": [ "# Let's train a Deep Q-Learning agent playing Atari' Space Invaders ๐Ÿ‘พ and upload it to the Hub.\n", "\n", + "We strongly recommend students **to use Google Colab for the hands-on exercises instead of running them on their personal computers**.\n", + "\n", + "By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects of setting up your environments**.\n", + "\n", "To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**.\n", "\n", "To find your result, go to the leaderboard and find your model, **the result = mean_reward - std of reward**\n", @@ -173,6 +179,81 @@ "id": "KV0NyFdQM9ZG" } }, + { + "cell_type": "markdown", + "source": [ + "# Install RL-Baselines3 Zoo and its dependencies ๐Ÿ“š\n", + "\n", + "If you see `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.` **this is normal and it's not a critical error** there's a conflict of version. But the packages we need are installed." 
+ ], + "metadata": { + "id": "wS_cVefO-aYg" + } + }, + { + "cell_type": "code", + "source": [ + "# For now we install this update of RL-Baselines3 Zoo\n", + "!pip install git+https://github.com/DLR-RM/rl-baselines3-zoo@update/hf" + ], + "metadata": { + "id": "hLTwHqIWdnPb" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "IF AND ONLY IF THE VERSION ABOVE DOES NOT EXIST ANYMORE. UNCOMMENT AND INSTALL THE ONE BELOW" + ], + "metadata": { + "id": "p0xe2sJHdtHy" + } + }, + { + "cell_type": "code", + "source": [ + "#!pip install rl_zoo3==2.0.0a9" + ], + "metadata": { + "id": "N0d6wy-F-f39" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "!apt-get install swig cmake ffmpeg" + ], + "metadata": { + "id": "8_MllY6Om1eI" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4S9mJiKg6SqC" + }, + "source": [ + "To be able to use Atari games in Gymnasium we need to install atari package. And accept-rom-license to download the rom files (games files)." 
+ ] + }, + { + "cell_type": "code", + "source": [ + "!pip install gymnasium[atari]\n", + "!pip install gymnasium[accept-rom-license]" + ], + "metadata": { + "id": "NsRP-lX1_2fC" + }, + "execution_count": null, + "outputs": [] + }, { "cell_type": "markdown", "source": [ @@ -201,29 +282,6 @@ "!pip3 install pyvirtualdisplay" ] }, - { - "cell_type": "code", - "source": [ - "# Additional dependencies for RL Baselines3 Zoo\n", - "!apt-get install swig cmake freeglut3-dev " - ], - "metadata": { - "id": "fWyKJCy_NJBX" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "code", - "source": [ - "!pip install pyglet==1.5.1" - ], - "metadata": { - "id": "C5LwHrISW7Q5" - }, - "execution_count": null, - "outputs": [] - }, { "cell_type": "code", "source": [ @@ -234,68 +292,11 @@ "virtual_display.start()" ], "metadata": { - "id": "ww5PQH1gNLI4" + "id": "BE5JWP5rQIKf" }, "execution_count": null, "outputs": [] }, - { - "cell_type": "markdown", - "metadata": { - "id": "mYIMvl5X9NAu" - }, - "source": [ - "## Clone RL-Baselines3 Zoo Repo ๐Ÿ“š\n", - "You can now directly install from python package `pip install rl_zoo3` but since we want **the full installation with extra environments and dependencies** we're going to clone `RL-Baselines3-Zoo` repository and install from source." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "eu5ZDPZ09VNQ" - }, - "outputs": [], - "source": [ - "!git clone https://github.com/DLR-RM/rl-baselines3-zoo" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "HCIoSbvbfAQh" - }, - "source": [ - "## Install dependencies ๐Ÿ”ฝ\n", - "We can now install the dependencies RL-Baselines3 Zoo needs (this can take 5min โฒ)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "s2QsFAk29h-D" - }, - "outputs": [], - "source": [ - "%cd /content/rl-baselines3-zoo/ \n", - "!git checkout v1.8.0" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "3QaOS7Xj9j1s" - }, - "outputs": [], - "source": [ - "!pip install setuptools==65.5.0\n", - "!pip install -r requirements.txt\n", - "# Since colab uses Python 3.9 we need to add this installation\n", - "!pip install gym[atari,accept-rom-license]==0.21.0" - ] - }, { "cell_type": "markdown", "metadata": { @@ -305,9 +306,31 @@ "## Train our Deep Q-Learning Agent to Play Space Invaders ๐Ÿ‘พ\n", "\n", "To train an agent with RL-Baselines3-Zoo, we just need to do two things:\n", - "1. We define the hyperparameters in `/content/rl-baselines3-zoo/hyperparams/dqn.yml`\n", "\n", - "\"DQN\n" + "1. 
Create a hyperparameter config file that will contain our training hyperparameters called `dqn.yml`.\n", + "\n", + "This is a template example:\n", + "\n", + "```\n", + "SpaceInvadersNoFrameskip-v4:\n", + " env_wrapper:\n", + " - stable_baselines3.common.atari_wrappers.AtariWrapper\n", + " frame_stack: 4\n", + " policy: 'CnnPolicy'\n", + " n_timesteps: !!float 1e7\n", + " buffer_size: 100000\n", + " learning_rate: !!float 1e-4\n", + " batch_size: 32\n", + " learning_starts: 100000\n", + " target_update_interval: 1000\n", + " train_freq: 4\n", + " gradient_steps: 1\n", + " exploration_fraction: 0.1\n", + " exploration_final_eps: 0.01\n", + " # If True, you need to deactivate handle_timeout_termination\n", + " # in the replay_buffer_kwargs\n", + " optimize_memory_usage: False\n", + "```" ] }, { @@ -346,7 +369,9 @@ "id": "Hn8bRTHvERRL" }, "source": [ - "2. We run `train.py` and save the models on `logs` folder ๐Ÿ“" + "2. We start the training and save the models on `logs` folder ๐Ÿ“\n", + "\n", + "- Define the algorithm after `--algo`, where we save the model after `-f` and where the hyperparameter config is after `-c`." 
] }, { @@ -357,7 +382,7 @@ }, "outputs": [], "source": [ - "!python train.py --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________" + "!python -m rl_zoo3.train --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ -c _________" ] }, { @@ -377,7 +402,7 @@ }, "outputs": [], "source": [ - "!python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/" + "!python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -c dqn.yml" ] }, { @@ -399,7 +424,7 @@ }, "outputs": [], "source": [ - "!python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/" + "!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/ " ] }, { @@ -419,7 +444,7 @@ }, "outputs": [], "source": [ - "!python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/" + "!python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/" ] }, { @@ -440,7 +465,7 @@ "id": "ezbHS1q3HYVV" }, "source": [ - "By using `rl_zoo3.push_to_hub.py` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.\n", + "By using `rl_zoo3.push_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.\n", "\n", "This way:\n", "- You can **showcase our work** ๐Ÿ”ฅ\n", @@ -518,6 +543,8 @@ "\n", "`-orga`: Your Hugging Face username\n", "\n", + "`-f`: Where the trained model folder is (in our case `logs`)\n", + "\n", "\"Select" ] }, @@ -649,7 +676,7 @@ }, "outputs": [], "source": [ - "!python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/" + "!python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ --no-render" ] }, { @@ -803,4 +830,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} +} \ No newline at end of file diff --git 
a/notebooks/unit8/unit8_part1.ipynb b/notebooks/unit8/unit8_part1.ipynb index 60a2e58..653385b 100644 --- a/notebooks/unit8/unit8_part1.ipynb +++ b/notebooks/unit8/unit8_part1.ipynb @@ -7,7 +7,7 @@ "colab_type": "text" }, "source": [ - "\"Open" + "\"Open" ] }, { @@ -156,6 +156,17 @@ "id": "bTpYcVZVMzUI" } }, + { + "cell_type": "code", + "source": [ + "!pip install setuptools==65.5.0" + ], + "metadata": { + "id": "Fd731S8-NuJA" + }, + "execution_count": null, + "outputs": [] + }, { "cell_type": "code", "execution_count": null, @@ -188,17 +199,6 @@ "execution_count": null, "outputs": [] }, - { - "cell_type": "code", - "source": [ - "!pip install setuptools==65.5.0" - ], - "metadata": { - "id": "Fd731S8-NuJA" - }, - "execution_count": null, - "outputs": [] - }, { "cell_type": "markdown", "metadata": { @@ -206,16 +206,16 @@ }, "source": [ "## Install dependencies ๐Ÿ”ฝ\n", - "For this exercise, we use `gym==0.21` because the video was recorded using Gym.\n" + "For this exercise, we use `gym==0.22`." 
] }, { "cell_type": "code", "source": [ - "!pip install gym==0.21\n", + "!pip install gym==0.22\n", "!pip install imageio-ffmpeg\n", "!pip install huggingface_hub\n", - "!pip install box2d" + "!pip install gym[box2d]==0.22" ], "metadata": { "id": "9xZQFTPcsKUK" }, "execution_count": null, "outputs": [] }, @@ -1353,6 +1353,7 @@ "colab": { "private_outputs": true, "provenance": [], + "history_visible": true, "include_colab_link": true }, "gpuClass": "standard", @@ -1367,4 +1368,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} +} \ No newline at end of file diff --git a/notebooks/unit8_part1.ipynb b/notebooks/unit8_part1.ipynb deleted file mode 100644 index 653385b..0000000 --- a/notebooks/unit8_part1.ipynb +++ /dev/null @@ -1,1371 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "view-in-github", - "colab_type": "text" - }, - "source": [ - "\"Open" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "-cf5-oDPjwf8" - }, - "source": [ - "# Unit 8: Proximal Policy Optimization (PPO) with PyTorch 🤖\n", - "\n", - "\"Unit\n", - "\n", - "\n", - "In this notebook, you'll learn to **code your PPO agent from scratch with PyTorch, using the CleanRL implementation as a model**.\n", - "\n", - "To test its robustness, we're going to train it in:\n", - "\n", - "- [LunarLander-v2 🚀](https://www.gymlibrary.dev/environments/box2d/lunar_lander/)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "2Fl6Rxt0lc0O" - }, - "source": [ - "⬇️ Here is an example of what you will achieve. ⬇️" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "DbKfCj5ilgqT" - }, - "outputs": [], - "source": [ - "%%html\n", - "" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "YcOFdWpnlxNf" - }, - "source": [ - "We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues)."
- ] - }, - { - "cell_type": "markdown", - "source": [ - "## Objectives of this notebook 🏆\n", - "\n", - "At the end of the notebook, you will:\n", - "\n", - "- Be able to **code your PPO agent from scratch using PyTorch**.\n", - "- Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score 🔥.\n", - "\n", - "\n" - ], - "metadata": { - "id": "T6lIPYFghhYL" - } - }, - { - "cell_type": "markdown", - "source": [ - "## This notebook is from the Deep Reinforcement Learning Course\n", - "\"Deep\n", - "\n", - "In this free course, you will:\n", - "\n", - "- 📖 Study Deep Reinforcement Learning in **theory and practice**.\n", - "- 🧑‍💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.\n", - "- 🤖 Train **agents in unique environments** \n", - "\n", - "Don't forget to **sign up to the course** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates).**\n", - "\n", - "\n", - "The best way to keep in touch is to join our Discord server to exchange with the community and with us 👉🏻 https://discord.gg/ydHrjt3WP5" - ], - "metadata": { - "id": "Wp-rD6Fuhq31" - } - }, - { - "cell_type": "markdown", - "source": [ - "## Prerequisites 🏗️\n", - "Before diving into the notebook, you need to:\n", - "\n", - "🔲 📚 Study [PPO by reading Unit 8](https://huggingface.co/deep-rl-course/unit8/introduction) 🤗 " - ], - "metadata": { - "id": "rasqqGQlhujA" - } - }, - { - "cell_type": "markdown", - "source": [ - "To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push one model. We don't ask for a minimal result, but we **advise you to try different hyperparameter settings to get better results**.\n", - "\n", - "If you don't find your model, **go 
to the bottom of the page and click on the refresh button**\n", - "\n", - "For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process" - ], - "metadata": { - "id": "PUFfMGOih3CW" - } - }, - { - "cell_type": "markdown", - "source": [ - "## Set the GPU 💪\n", - "- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`\n", - "\n", - "\"GPU" - ], - "metadata": { - "id": "PU4FVzaoM6fC" - } - }, - { - "cell_type": "markdown", - "source": [ - "- `Hardware Accelerator > GPU`\n", - "\n", - "\"GPU" - ], - "metadata": { - "id": "KV0NyFdQM9ZG" - } - }, - { - "cell_type": "markdown", - "source": [ - "## Create a virtual display 🔽\n", - "\n", - "During the notebook, we'll need to generate a replay video. To do so, with Colab, **we need to have a virtual screen to be able to render the environment** (and thus record the frames). \n", - "\n", - "Hence, the following cell will install the libraries and create and run a virtual screen 🖥" - ], - "metadata": { - "id": "bTpYcVZVMzUI" - } - }, - { - "cell_type": "code", - "source": [ - "!pip install setuptools==65.5.0" - ], - "metadata": { - "id": "Fd731S8-NuJA" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "jV6wjQ7Be7p5" - }, - "outputs": [], - "source": [ - "%%capture\n", - "!apt install python-opengl\n", - "!apt install ffmpeg\n", - "!apt install xvfb\n", - "!apt install swig cmake\n", - "!pip install pyglet==1.5\n", - "!pip3 install pyvirtualdisplay" - ] - }, - { - "cell_type": "code", - "source": [ - "# Virtual display\n", - "from pyvirtualdisplay import Display\n", - "\n", - "virtual_display = Display(visible=0, size=(1400, 900))\n", - "virtual_display.start()" - ], - "metadata": { - "id": "ww5PQH1gNLI4" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - 
"metadata": { - "id": "ncIgfNf3mOtc" - }, - "source": [ - "## Install dependencies 🔽\n", - "For this exercise, we use `gym==0.22`." - ] - }, - { - "cell_type": "code", - "source": [ - "!pip install gym==0.22\n", - "!pip install imageio-ffmpeg\n", - "!pip install huggingface_hub\n", - "!pip install gym[box2d]==0.22" - ], - "metadata": { - "id": "9xZQFTPcsKUK" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "oDkUufewmq6v" - }, - "source": [ - "## Let's code PPO from scratch with Costa Huang's tutorial\n", - "- For the core implementation of PPO, we're going to use the excellent [Costa Huang](https://costa.sh/) tutorial.\n", - "- In addition to the tutorial, to go deeper, you can read the 37 core implementation details: https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/\n", - "\n", - "👉 The video tutorial: https://youtu.be/MEt6rrxH8W4" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "aNgEL1_uvhaq" - }, - "outputs": [], - "source": [ - "from IPython.display import HTML\n", - "\n", - "HTML('')" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "f34ILn7AvTbt" - }, - "source": [ - "- It's best to code in the cell below first; this way, if you kill the machine, **you don't lose the implementation**."
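One of the core pieces you'll code from the tutorial is generalized advantage estimation (GAE), which the script's `--gae`, `--gamma`, and `--gae-lambda` flags control. Below is a minimal single-environment sketch (a hypothetical simplification of the vectorized CleanRL version; here `dones[t] = 1` means the episode terminated at step `t`):

```python
import numpy as np

def compute_gae(rewards, values, next_value, dones, gamma=0.99, gae_lambda=0.95):
    """Backward GAE recursion (single-env sketch):
    delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
    A_t     = delta_t + gamma * lambda * (1 - done_t) * A_{t+1}
    """
    advantages = np.zeros(len(rewards))
    lastgaelam = 0.0
    for t in reversed(range(len(rewards))):
        nextvalue = next_value if t == len(rewards) - 1 else values[t + 1]
        nextnonterminal = 1.0 - dones[t]
        delta = rewards[t] + gamma * nextvalue * nextnonterminal - values[t]
        lastgaelam = delta + gamma * gae_lambda * nextnonterminal * lastgaelam
        advantages[t] = lastgaelam
    return advantages

# Toy rollout: two steps, zero value estimates, episode ends at step 1
adv = compute_gae(rewards=[1.0, 1.0], values=[0.0, 0.0], next_value=0.0, dones=[0.0, 1.0])
```

With zero value estimates, the last advantage is just the reward, and earlier advantages accumulate discounted deltas weighted by `gamma * gae_lambda`.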
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "_bE708C6mhE7" - }, - "outputs": [], - "source": [ - "### Your code here:" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "mk-a9CmNuS2W" - }, - "source": [ - "## Add the Hugging Face Integration 🤗\n", - "- In order to push our model to the Hub, we need to define a function `package_to_hub`" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "TPi1Nme-oGWd" - }, - "source": [ - "- Add dependencies we need to push our model to the Hub" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "Sj8bz-AmoNVj" - }, - "outputs": [], - "source": [ - "from huggingface_hub import HfApi, upload_folder\n", - "from huggingface_hub.repocard import metadata_eval_result, metadata_save\n", - "\n", - "from pathlib import Path\n", - "import datetime\n", - "import tempfile\n", - "import json\n", - "import shutil\n", - "import imageio\n", - "\n", - "from wasabi import Printer\n", - "msg = Printer()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "5rDr8-lWn0zi" - }, - "source": [ - "- Add a new argument to the `parse_args()` function to define the repo-id where we want to push the model." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "iHQiqQEFn0QH" - }, - "outputs": [], - "source": [ - "# Adding HuggingFace argument\n", - "parser.add_argument(\"--repo-id\", type=str, default=\"ThomasSimonini/ppo-CartPole-v1\", help=\"id of the model repository from the Hugging Face Hub {username/repo_name}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "blLZMiBAoUVT" - }, - "source": [ - "- Next, we add the methods needed to push the model to the Hub\n", - "\n", - "- These methods will:\n", - "  - `_evaluate_agent()`: evaluate the agent.\n", - "  - `_generate_model_card()`: generate the model card of your agent.\n", - "  - `_record_video()`: record a video of your agent."
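As a preview of the `_evaluate_agent()` logic defined in the code that follows, the numbers written to `results.json` are just the mean and standard deviation of the per-episode returns. A tiny self-contained sketch (the reward values here are made up):

```python
import numpy as np

def summarize_rewards(episode_rewards):
    """Mean and standard deviation of per-episode returns,
    as reported in results.json by the push-to-hub pipeline."""
    rewards = np.asarray(episode_rewards, dtype=np.float64)
    return float(rewards.mean()), float(rewards.std())

# Hypothetical returns from 3 evaluation episodes
mean_reward, std_reward = summarize_rewards([200.0, 180.0, 220.0])
```

Note that `np.std` defaults to the population standard deviation (ddof=0), which matches the notebook's `np.std(episode_rewards)` call.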
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "WlLcz4L9odXs" - }, - "outputs": [], - "source": [ - "def package_to_hub(repo_id, \n", - " model,\n", - " hyperparameters,\n", - " eval_env,\n", - " video_fps=30,\n", - " commit_message=\"Push agent to the Hub\",\n", - " token= None,\n", - " logs=None\n", - " ):\n", - " \"\"\"\n", - " Evaluate, Generate a video and Upload a model to Hugging Face Hub.\n", - " This method does the complete pipeline:\n", - " - It evaluates the model\n", - " - It generates the model card\n", - " - It generates a replay video of the agent\n", - " - It pushes everything to the hub\n", - " :param repo_id: id of the model repository from the Hugging Face Hub\n", - " :param model: trained model\n", - " :param eval_env: environment used to evaluate the agent\n", - " :param fps: number of fps for rendering the video\n", - " :param commit_message: commit message\n", - " :param logs: directory on local machine of tensorboard logs you'd like to upload\n", - " \"\"\"\n", - " msg.info(\n", - " \"This function will save, evaluate, generate a video of your agent, \"\n", - " \"create a model card and push everything to the hub. \"\n", - " \"It might take up to 1min. 
\\n \"\n", - " \"This is a work in progress: if you encounter a bug, please open an issue.\"\n", - " )\n", - " # Step 1: Clone or create the repo\n", - " repo_url = HfApi().create_repo(\n", - " repo_id=repo_id,\n", - " token=token,\n", - " private=False,\n", - " exist_ok=True,\n", - " )\n", - " \n", - " with tempfile.TemporaryDirectory() as tmpdirname:\n", - " tmpdirname = Path(tmpdirname)\n", - "\n", - " # Step 2: Save the model\n", - " torch.save(model.state_dict(), tmpdirname / \"model.pt\")\n", - " \n", - " # Step 3: Evaluate the model and build JSON\n", - " mean_reward, std_reward = _evaluate_agent(eval_env, \n", - " 10, \n", - " model)\n", - "\n", - " # First get datetime\n", - " eval_datetime = datetime.datetime.now()\n", - " eval_form_datetime = eval_datetime.isoformat()\n", - "\n", - " evaluate_data = {\n", - " \"env_id\": hyperparameters.env_id, \n", - " \"mean_reward\": mean_reward,\n", - " \"std_reward\": std_reward,\n", - " \"n_evaluation_episodes\": 10,\n", - " \"eval_datetime\": eval_form_datetime,\n", - " }\n", - " \n", - " # Write a JSON file\n", - " with open(tmpdirname / \"results.json\", \"w\") as outfile:\n", - " json.dump(evaluate_data, outfile)\n", - "\n", - " # Step 4: Generate a video\n", - " video_path = tmpdirname / \"replay.mp4\"\n", - " record_video(eval_env, model, video_path, video_fps)\n", - " \n", - " # Step 5: Generate the model card\n", - " generated_model_card, metadata = _generate_model_card(\"PPO\", hyperparameters.env_id, mean_reward, std_reward, hyperparameters)\n", - " _save_model_card(tmpdirname, generated_model_card, metadata)\n", - "\n", - " # Step 6: Add logs if needed\n", - " if logs:\n", - " _add_logdir(tmpdirname, Path(logs))\n", - " \n", - " msg.info(f\"Pushing repo {repo_id} to the Hugging Face Hub\")\n", - " \n", - " repo_url = upload_folder(\n", - " repo_id=repo_id,\n", - " folder_path=tmpdirname,\n", - " path_in_repo=\"\",\n", - " commit_message=commit_message,\n", - " token=token,\n", - " )\n", - "\n", - " 
msg.info(f\"Your model is pushed to the Hub. You can view your model here: {repo_url}\")\n", - " return repo_url\n", - "\n", - "\n", - "def _evaluate_agent(env, n_eval_episodes, policy):\n", - " \"\"\"\n", - " Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward.\n", - " :param env: The evaluation environment\n", - " :param n_eval_episodes: Number of episode to evaluate the agent\n", - " :param policy: The agent\n", - " \"\"\"\n", - " episode_rewards = []\n", - " for episode in range(n_eval_episodes):\n", - " state = env.reset()\n", - " step = 0\n", - " done = False\n", - " total_rewards_ep = 0\n", - " \n", - " while done is False:\n", - " state = torch.Tensor(state).to(device)\n", - " action, _, _, _ = policy.get_action_and_value(state)\n", - " new_state, reward, done, info = env.step(action.cpu().numpy())\n", - " total_rewards_ep += reward \n", - " if done:\n", - " break\n", - " state = new_state\n", - " episode_rewards.append(total_rewards_ep)\n", - " mean_reward = np.mean(episode_rewards)\n", - " std_reward = np.std(episode_rewards)\n", - "\n", - " return mean_reward, std_reward\n", - "\n", - "\n", - "def record_video(env, policy, out_directory, fps=30):\n", - " images = [] \n", - " done = False\n", - " state = env.reset()\n", - " img = env.render(mode='rgb_array')\n", - " images.append(img)\n", - " while not done:\n", - " state = torch.Tensor(state).to(device)\n", - " # Take the action (index) that have the maximum expected future reward given that state\n", - " action, _, _, _ = policy.get_action_and_value(state)\n", - " state, reward, done, info = env.step(action.cpu().numpy()) # We directly put next_state = state for recording logic\n", - " img = env.render(mode='rgb_array')\n", - " images.append(img)\n", - " imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)\n", - "\n", - "\n", - "def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):\n", - " 
\"\"\"\n", - " Generate the model card for the Hub\n", - " :param model_name: name of the model\n", - " :env_id: name of the environment\n", - " :mean_reward: mean reward of the agent\n", - " :std_reward: standard deviation of the mean reward of the agent\n", - " :hyperparameters: training arguments\n", - " \"\"\"\n", - " # Step 1: Select the tags\n", - " metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)\n", - "\n", - " # Transform the hyperparams namespace to string\n", - " converted_dict = vars(hyperparameters)\n", - " converted_str = str(converted_dict)\n", - " converted_str = converted_str.split(\", \")\n", - " converted_str = '\\n'.join(converted_str)\n", - " \n", - " # Step 2: Generate the model card\n", - " model_card = f\"\"\"\n", - " # PPO Agent Playing {env_id}\n", - "\n", - " This is a trained model of a PPO agent playing {env_id}.\n", - " \n", - " # Hyperparameters\n", - " ```python\n", - " {converted_str}\n", - " ```\n", - " \"\"\"\n", - " return model_card, metadata\n", - "\n", - "\n", - "def generate_metadata(model_name, env_id, mean_reward, std_reward):\n", - " \"\"\"\n", - " Define the tags for the model card\n", - " :param model_name: name of the model\n", - " :param env_id: name of the environment\n", - " :mean_reward: mean reward of the agent\n", - " :std_reward: standard deviation of the mean reward of the agent\n", - " \"\"\"\n", - " metadata = {}\n", - " metadata[\"tags\"] = [\n", - " env_id,\n", - " \"ppo\",\n", - " \"deep-reinforcement-learning\",\n", - " \"reinforcement-learning\",\n", - " \"custom-implementation\",\n", - " \"deep-rl-course\"\n", - " ]\n", - "\n", - " # Add metrics\n", - " eval = metadata_eval_result(\n", - " model_pretty_name=model_name,\n", - " task_pretty_name=\"reinforcement-learning\",\n", - " task_id=\"reinforcement-learning\",\n", - " metrics_pretty_name=\"mean_reward\",\n", - " metrics_id=\"mean_reward\",\n", - " metrics_value=f\"{mean_reward:.2f} +/- {std_reward:.2f}\",\n", - " 
dataset_pretty_name=env_id,\n", - " dataset_id=env_id,\n", - " )\n", - "\n", - " # Merges both dictionaries\n", - " metadata = {**metadata, **eval}\n", - "\n", - " return metadata\n", - "\n", - "\n", - "def _save_model_card(local_path, generated_model_card, metadata):\n", - " \"\"\"Saves a model card for the repository.\n", - " :param local_path: repository directory\n", - " :param generated_model_card: model card generated by _generate_model_card()\n", - " :param metadata: metadata\n", - " \"\"\"\n", - " readme_path = local_path / \"README.md\"\n", - " readme = \"\"\n", - " if readme_path.exists():\n", - " with readme_path.open(\"r\", encoding=\"utf8\") as f:\n", - " readme = f.read()\n", - " else:\n", - " readme = generated_model_card\n", - "\n", - " with readme_path.open(\"w\", encoding=\"utf-8\") as f:\n", - " f.write(readme)\n", - "\n", - " # Save our metrics to Readme metadata\n", - " metadata_save(readme_path, metadata)\n", - "\n", - "\n", - "def _add_logdir(local_path: Path, logdir: Path):\n", - " \"\"\"Adds a logdir to the repository.\n", - " :param local_path: repository directory\n", - " :param logdir: logdir directory\n", - " \"\"\"\n", - " if logdir.exists() and logdir.is_dir():\n", - " # Add the logdir to the repository under new dir called logs\n", - " repo_logdir = local_path / \"logs\"\n", - " \n", - " # Delete current logs if they exist\n", - " if repo_logdir.exists():\n", - " shutil.rmtree(repo_logdir)\n", - "\n", - " # Copy logdir into repo logdir\n", - " shutil.copytree(logdir, repo_logdir)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "TqX8z8_rooD6" - }, - "source": [ - "- Finally, we call this function at the end of the PPO training" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "I8V1vNiTo2hL" - }, - "outputs": [], - "source": [ - "# Create the evaluation environment\n", - "eval_env = gym.make(args.env_id)\n", - "\n", - "package_to_hub(repo_id = args.repo_id,\n", - " model = agent, # 
The model we want to save\n", - "              hyperparameters = args,\n", - "              eval_env = gym.make(args.env_id),\n", - "              logs= f\"runs/{run_name}\",\n", - "              )" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "muCCzed4o5TC" - }, - "source": [ - "- Here's what the final ppo.py file looks like" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "LviRdtXgo7kF" - }, - "outputs": [], - "source": [ - "# docs and experiment results can be found at https://docs.cleanrl.dev/rl-algorithms/ppo/#ppopy\n", - "\n", - "import argparse\n", - "import os\n", - "import random\n", - "import time\n", - "from distutils.util import strtobool\n", - "\n", - "import gym\n", - "import numpy as np\n", - "import torch\n", - "import torch.nn as nn\n", - "import torch.optim as optim\n", - "from torch.distributions.categorical import Categorical\n", - "from torch.utils.tensorboard import SummaryWriter\n", - "\n", - "from huggingface_hub import HfApi, upload_folder\n", - "from huggingface_hub.repocard import metadata_eval_result, metadata_save\n", - "\n", - "from pathlib import Path\n", - "import datetime\n", - "import tempfile\n", - "import json\n", - "import shutil\n", - "import imageio\n", - "\n", - "from wasabi import Printer\n", - "msg = Printer()\n", - "\n", - "def parse_args():\n", - "    # fmt: off\n", - "    parser = argparse.ArgumentParser()\n", - "    parser.add_argument(\"--exp-name\", type=str, default=os.path.basename(__file__).rstrip(\".py\"),\n", - "        help=\"the name of this experiment\")\n", - "    parser.add_argument(\"--seed\", type=int, default=1,\n", - "        help=\"seed of the experiment\")\n", - "    parser.add_argument(\"--torch-deterministic\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n", - "        help=\"if toggled, `torch.backends.cudnn.deterministic=False`\")\n", - "    parser.add_argument(\"--cuda\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n", - "        help=\"if toggled, cuda will be enabled by 
default\")\n", - " parser.add_argument(\"--track\", type=lambda x: bool(strtobool(x)), default=False, nargs=\"?\", const=True,\n", - " help=\"if toggled, this experiment will be tracked with Weights and Biases\")\n", - " parser.add_argument(\"--wandb-project-name\", type=str, default=\"cleanRL\",\n", - " help=\"the wandb's project name\")\n", - " parser.add_argument(\"--wandb-entity\", type=str, default=None,\n", - " help=\"the entity (team) of wandb's project\")\n", - " parser.add_argument(\"--capture-video\", type=lambda x: bool(strtobool(x)), default=False, nargs=\"?\", const=True,\n", - " help=\"weather to capture videos of the agent performances (check out `videos` folder)\")\n", - "\n", - " # Algorithm specific arguments\n", - " parser.add_argument(\"--env-id\", type=str, default=\"CartPole-v1\",\n", - " help=\"the id of the environment\")\n", - " parser.add_argument(\"--total-timesteps\", type=int, default=50000,\n", - " help=\"total timesteps of the experiments\")\n", - " parser.add_argument(\"--learning-rate\", type=float, default=2.5e-4,\n", - " help=\"the learning rate of the optimizer\")\n", - " parser.add_argument(\"--num-envs\", type=int, default=4,\n", - " help=\"the number of parallel game environments\")\n", - " parser.add_argument(\"--num-steps\", type=int, default=128,\n", - " help=\"the number of steps to run in each environment per policy rollout\")\n", - " parser.add_argument(\"--anneal-lr\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n", - " help=\"Toggle learning rate annealing for policy and value networks\")\n", - " parser.add_argument(\"--gae\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n", - " help=\"Use GAE for advantage computation\")\n", - " parser.add_argument(\"--gamma\", type=float, default=0.99,\n", - " help=\"the discount factor gamma\")\n", - " parser.add_argument(\"--gae-lambda\", type=float, default=0.95,\n", - " help=\"the lambda for the general advantage 
estimation\")\n", - " parser.add_argument(\"--num-minibatches\", type=int, default=4,\n", - " help=\"the number of mini-batches\")\n", - " parser.add_argument(\"--update-epochs\", type=int, default=4,\n", - " help=\"the K epochs to update the policy\")\n", - " parser.add_argument(\"--norm-adv\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n", - " help=\"Toggles advantages normalization\")\n", - " parser.add_argument(\"--clip-coef\", type=float, default=0.2,\n", - " help=\"the surrogate clipping coefficient\")\n", - " parser.add_argument(\"--clip-vloss\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n", - " help=\"Toggles whether or not to use a clipped loss for the value function, as per the paper.\")\n", - " parser.add_argument(\"--ent-coef\", type=float, default=0.01,\n", - " help=\"coefficient of the entropy\")\n", - " parser.add_argument(\"--vf-coef\", type=float, default=0.5,\n", - " help=\"coefficient of the value function\")\n", - " parser.add_argument(\"--max-grad-norm\", type=float, default=0.5,\n", - " help=\"the maximum norm for the gradient clipping\")\n", - " parser.add_argument(\"--target-kl\", type=float, default=None,\n", - " help=\"the target KL divergence threshold\")\n", - " \n", - " # Adding HuggingFace argument\n", - " parser.add_argument(\"--repo-id\", type=str, default=\"ThomasSimonini/ppo-CartPole-v1\", help=\"id of the model repository from the Hugging Face Hub {username/repo_name}\")\n", - "\n", - " args = parser.parse_args()\n", - " args.batch_size = int(args.num_envs * args.num_steps)\n", - " args.minibatch_size = int(args.batch_size // args.num_minibatches)\n", - " # fmt: on\n", - " return args\n", - "\n", - "def package_to_hub(repo_id, \n", - " model,\n", - " hyperparameters,\n", - " eval_env,\n", - " video_fps=30,\n", - " commit_message=\"Push agent to the Hub\",\n", - " token= None,\n", - " logs=None\n", - " ):\n", - " \"\"\"\n", - " Evaluate, Generate a video and Upload a 
model to Hugging Face Hub.\n", - " This method does the complete pipeline:\n", - " - It evaluates the model\n", - " - It generates the model card\n", - " - It generates a replay video of the agent\n", - " - It pushes everything to the hub\n", - " :param repo_id: id of the model repository from the Hugging Face Hub\n", - " :param model: trained model\n", - " :param eval_env: environment used to evaluate the agent\n", - " :param fps: number of fps for rendering the video\n", - " :param commit_message: commit message\n", - " :param logs: directory on local machine of tensorboard logs you'd like to upload\n", - " \"\"\"\n", - " msg.info(\n", - " \"This function will save, evaluate, generate a video of your agent, \"\n", - " \"create a model card and push everything to the hub. \"\n", - " \"It might take up to 1min. \\n \"\n", - " \"This is a work in progress: if you encounter a bug, please open an issue.\"\n", - " )\n", - " # Step 1: Clone or create the repo\n", - " repo_url = HfApi().create_repo(\n", - " repo_id=repo_id,\n", - " token=token,\n", - " private=False,\n", - " exist_ok=True,\n", - " )\n", - " \n", - " with tempfile.TemporaryDirectory() as tmpdirname:\n", - " tmpdirname = Path(tmpdirname)\n", - "\n", - " # Step 2: Save the model\n", - " torch.save(model.state_dict(), tmpdirname / \"model.pt\")\n", - " \n", - " # Step 3: Evaluate the model and build JSON\n", - " mean_reward, std_reward = _evaluate_agent(eval_env, \n", - " 10, \n", - " model)\n", - "\n", - " # First get datetime\n", - " eval_datetime = datetime.datetime.now()\n", - " eval_form_datetime = eval_datetime.isoformat()\n", - "\n", - " evaluate_data = {\n", - " \"env_id\": hyperparameters.env_id, \n", - " \"mean_reward\": mean_reward,\n", - " \"std_reward\": std_reward,\n", - " \"n_evaluation_episodes\": 10,\n", - " \"eval_datetime\": eval_form_datetime,\n", - " }\n", - " \n", - " # Write a JSON file\n", - " with open(tmpdirname / \"results.json\", \"w\") as outfile:\n", - " json.dump(evaluate_data, 
outfile)\n", - "\n", - " # Step 4: Generate a video\n", - " video_path = tmpdirname / \"replay.mp4\"\n", - " record_video(eval_env, model, video_path, video_fps)\n", - " \n", - " # Step 5: Generate the model card\n", - " generated_model_card, metadata = _generate_model_card(\"PPO\", hyperparameters.env_id, mean_reward, std_reward, hyperparameters)\n", - " _save_model_card(tmpdirname, generated_model_card, metadata)\n", - "\n", - " # Step 6: Add logs if needed\n", - " if logs:\n", - " _add_logdir(tmpdirname, Path(logs))\n", - " \n", - " msg.info(f\"Pushing repo {repo_id} to the Hugging Face Hub\")\n", - " \n", - " repo_url = upload_folder(\n", - " repo_id=repo_id,\n", - " folder_path=tmpdirname,\n", - " path_in_repo=\"\",\n", - " commit_message=commit_message,\n", - " token=token,\n", - " )\n", - "\n", - " msg.info(f\"Your model is pushed to the Hub. You can view your model here: {repo_url}\")\n", - " return repo_url\n", - "\n", - "def _evaluate_agent(env, n_eval_episodes, policy):\n", - " \"\"\"\n", - " Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward.\n", - " :param env: The evaluation environment\n", - " :param n_eval_episodes: Number of episode to evaluate the agent\n", - " :param policy: The agent\n", - " \"\"\"\n", - " episode_rewards = []\n", - " for episode in range(n_eval_episodes):\n", - " state = env.reset()\n", - " step = 0\n", - " done = False\n", - " total_rewards_ep = 0\n", - " \n", - " while done is False:\n", - " state = torch.Tensor(state).to(device)\n", - " action, _, _, _ = policy.get_action_and_value(state)\n", - " new_state, reward, done, info = env.step(action.cpu().numpy())\n", - " total_rewards_ep += reward \n", - " if done:\n", - " break\n", - " state = new_state\n", - " episode_rewards.append(total_rewards_ep)\n", - " mean_reward = np.mean(episode_rewards)\n", - " std_reward = np.std(episode_rewards)\n", - "\n", - " return mean_reward, std_reward\n", - "\n", - "\n", - "def record_video(env, 
policy, out_directory, fps=30):\n", - " images = [] \n", - " done = False\n", - " state = env.reset()\n", - " img = env.render(mode='rgb_array')\n", - " images.append(img)\n", - " while not done:\n", - " state = torch.Tensor(state).to(device)\n", - " # Take the action (index) that have the maximum expected future reward given that state\n", - " action, _, _, _ = policy.get_action_and_value(state)\n", - " state, reward, done, info = env.step(action.cpu().numpy()) # We directly put next_state = state for recording logic\n", - " img = env.render(mode='rgb_array')\n", - " images.append(img)\n", - " imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)\n", - "\n", - "\n", - "def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):\n", - " \"\"\"\n", - " Generate the model card for the Hub\n", - " :param model_name: name of the model\n", - " :env_id: name of the environment\n", - " :mean_reward: mean reward of the agent\n", - " :std_reward: standard deviation of the mean reward of the agent\n", - " :hyperparameters: training arguments\n", - " \"\"\"\n", - " # Step 1: Select the tags\n", - " metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)\n", - "\n", - " # Transform the hyperparams namespace to string\n", - " converted_dict = vars(hyperparameters)\n", - " converted_str = str(converted_dict)\n", - " converted_str = converted_str.split(\", \")\n", - " converted_str = '\\n'.join(converted_str)\n", - " \n", - " # Step 2: Generate the model card\n", - " model_card = f\"\"\"\n", - " # PPO Agent Playing {env_id}\n", - "\n", - " This is a trained model of a PPO agent playing {env_id}.\n", - " \n", - " # Hyperparameters\n", - " ```python\n", - " {converted_str}\n", - " ```\n", - " \"\"\"\n", - " return model_card, metadata\n", - "\n", - "def generate_metadata(model_name, env_id, mean_reward, std_reward):\n", - " \"\"\"\n", - " Define the tags for the model card\n", - " :param model_name: 
name of the model\n", - " :param env_id: name of the environment\n", - " :mean_reward: mean reward of the agent\n", - " :std_reward: standard deviation of the mean reward of the agent\n", - " \"\"\"\n", - " metadata = {}\n", - " metadata[\"tags\"] = [\n", - " env_id,\n", - " \"ppo\",\n", - " \"deep-reinforcement-learning\",\n", - " \"reinforcement-learning\",\n", - " \"custom-implementation\",\n", - " \"deep-rl-course\"\n", - " ]\n", - "\n", - " # Add metrics\n", - " eval = metadata_eval_result(\n", - " model_pretty_name=model_name,\n", - " task_pretty_name=\"reinforcement-learning\",\n", - " task_id=\"reinforcement-learning\",\n", - " metrics_pretty_name=\"mean_reward\",\n", - " metrics_id=\"mean_reward\",\n", - " metrics_value=f\"{mean_reward:.2f} +/- {std_reward:.2f}\",\n", - " dataset_pretty_name=env_id,\n", - " dataset_id=env_id,\n", - " )\n", - "\n", - " # Merges both dictionaries\n", - " metadata = {**metadata, **eval}\n", - "\n", - " return metadata\n", - "\n", - "def _save_model_card(local_path, generated_model_card, metadata):\n", - " \"\"\"Saves a model card for the repository.\n", - " :param local_path: repository directory\n", - " :param generated_model_card: model card generated by _generate_model_card()\n", - " :param metadata: metadata\n", - " \"\"\"\n", - " readme_path = local_path / \"README.md\"\n", - " readme = \"\"\n", - " if readme_path.exists():\n", - " with readme_path.open(\"r\", encoding=\"utf8\") as f:\n", - " readme = f.read()\n", - " else:\n", - " readme = generated_model_card\n", - "\n", - " with readme_path.open(\"w\", encoding=\"utf-8\") as f:\n", - " f.write(readme)\n", - "\n", - " # Save our metrics to Readme metadata\n", - " metadata_save(readme_path, metadata)\n", - "\n", - "def _add_logdir(local_path: Path, logdir: Path):\n", - " \"\"\"Adds a logdir to the repository.\n", - " :param local_path: repository directory\n", - " :param logdir: logdir directory\n", - " \"\"\"\n", - " if logdir.exists() and logdir.is_dir():\n", - " # 
Add the logdir to the repository under new dir called logs\n", - " repo_logdir = local_path / \"logs\"\n", - " \n", - " # Delete current logs if they exist\n", - " if repo_logdir.exists():\n", - " shutil.rmtree(repo_logdir)\n", - "\n", - " # Copy logdir into repo logdir\n", - " shutil.copytree(logdir, repo_logdir)\n", - "\n", - "def make_env(env_id, seed, idx, capture_video, run_name):\n", - " def thunk():\n", - " env = gym.make(env_id)\n", - " env = gym.wrappers.RecordEpisodeStatistics(env)\n", - " if capture_video:\n", - " if idx == 0:\n", - " env = gym.wrappers.RecordVideo(env, f\"videos/{run_name}\")\n", - " env.seed(seed)\n", - " env.action_space.seed(seed)\n", - " env.observation_space.seed(seed)\n", - " return env\n", - "\n", - " return thunk\n", - "\n", - "\n", - "def layer_init(layer, std=np.sqrt(2), bias_const=0.0):\n", - " torch.nn.init.orthogonal_(layer.weight, std)\n", - " torch.nn.init.constant_(layer.bias, bias_const)\n", - " return layer\n", - "\n", - "\n", - "class Agent(nn.Module):\n", - " def __init__(self, envs):\n", - " super().__init__()\n", - " self.critic = nn.Sequential(\n", - " layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),\n", - " nn.Tanh(),\n", - " layer_init(nn.Linear(64, 64)),\n", - " nn.Tanh(),\n", - " layer_init(nn.Linear(64, 1), std=1.0),\n", - " )\n", - " self.actor = nn.Sequential(\n", - " layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),\n", - " nn.Tanh(),\n", - " layer_init(nn.Linear(64, 64)),\n", - " nn.Tanh(),\n", - " layer_init(nn.Linear(64, envs.single_action_space.n), std=0.01),\n", - " )\n", - "\n", - " def get_value(self, x):\n", - " return self.critic(x)\n", - "\n", - " def get_action_and_value(self, x, action=None):\n", - " logits = self.actor(x)\n", - " probs = Categorical(logits=logits)\n", - " if action is None:\n", - " action = probs.sample()\n", - " return action, probs.log_prob(action), probs.entropy(), self.critic(x)\n", - "\n", - "\n", - "if 
__name__ == \"__main__\":\n", - " args = parse_args()\n", - " run_name = f\"{args.env_id}__{args.exp_name}__{args.seed}__{int(time.time())}\"\n", - " if args.track:\n", - " import wandb\n", - "\n", - " wandb.init(\n", - " project=args.wandb_project_name,\n", - " entity=args.wandb_entity,\n", - " sync_tensorboard=True,\n", - " config=vars(args),\n", - " name=run_name,\n", - " monitor_gym=True,\n", - " save_code=True,\n", - " )\n", - " writer = SummaryWriter(f\"runs/{run_name}\")\n", - " writer.add_text(\n", - " \"hyperparameters\",\n", - " \"|param|value|\\n|-|-|\\n%s\" % (\"\\n\".join([f\"|{key}|{value}|\" for key, value in vars(args).items()])),\n", - " )\n", - "\n", - " # TRY NOT TO MODIFY: seeding\n", - " random.seed(args.seed)\n", - " np.random.seed(args.seed)\n", - " torch.manual_seed(args.seed)\n", - " torch.backends.cudnn.deterministic = args.torch_deterministic\n", - "\n", - " device = torch.device(\"cuda\" if torch.cuda.is_available() and args.cuda else \"cpu\")\n", - "\n", - " # env setup\n", - " envs = gym.vector.SyncVectorEnv(\n", - " [make_env(args.env_id, args.seed + i, i, args.capture_video, run_name) for i in range(args.num_envs)]\n", - " )\n", - " assert isinstance(envs.single_action_space, gym.spaces.Discrete), \"only discrete action space is supported\"\n", - "\n", - " agent = Agent(envs).to(device)\n", - " optimizer = optim.Adam(agent.parameters(), lr=args.learning_rate, eps=1e-5)\n", - "\n", - " # ALGO Logic: Storage setup\n", - " obs = torch.zeros((args.num_steps, args.num_envs) + envs.single_observation_space.shape).to(device)\n", - " actions = torch.zeros((args.num_steps, args.num_envs) + envs.single_action_space.shape).to(device)\n", - " logprobs = torch.zeros((args.num_steps, args.num_envs)).to(device)\n", - " rewards = torch.zeros((args.num_steps, args.num_envs)).to(device)\n", - " dones = torch.zeros((args.num_steps, args.num_envs)).to(device)\n", - " values = torch.zeros((args.num_steps, args.num_envs)).to(device)\n", - "\n", - " # TRY 
NOT TO MODIFY: start the game\n", - " global_step = 0\n", - " start_time = time.time()\n", - " next_obs = torch.Tensor(envs.reset()).to(device)\n", - " next_done = torch.zeros(args.num_envs).to(device)\n", - " num_updates = args.total_timesteps // args.batch_size\n", - "\n", - " for update in range(1, num_updates + 1):\n", - " # Annealing the rate if instructed to do so.\n", - " if args.anneal_lr:\n", - " frac = 1.0 - (update - 1.0) / num_updates\n", - " lrnow = frac * args.learning_rate\n", - " optimizer.param_groups[0][\"lr\"] = lrnow\n", - "\n", - " for step in range(0, args.num_steps):\n", - " global_step += 1 * args.num_envs\n", - " obs[step] = next_obs\n", - " dones[step] = next_done\n", - "\n", - " # ALGO LOGIC: action logic\n", - " with torch.no_grad():\n", - " action, logprob, _, value = agent.get_action_and_value(next_obs)\n", - " values[step] = value.flatten()\n", - " actions[step] = action\n", - " logprobs[step] = logprob\n", - "\n", - " # TRY NOT TO MODIFY: execute the game and log data.\n", - " next_obs, reward, done, info = envs.step(action.cpu().numpy())\n", - " rewards[step] = torch.tensor(reward).to(device).view(-1)\n", - " next_obs, next_done = torch.Tensor(next_obs).to(device), torch.Tensor(done).to(device)\n", - "\n", - " for item in info:\n", - " if \"episode\" in item.keys():\n", - " print(f\"global_step={global_step}, episodic_return={item['episode']['r']}\")\n", - " writer.add_scalar(\"charts/episodic_return\", item[\"episode\"][\"r\"], global_step)\n", - " writer.add_scalar(\"charts/episodic_length\", item[\"episode\"][\"l\"], global_step)\n", - " break\n", - "\n", - " # bootstrap value if not done\n", - " with torch.no_grad():\n", - " next_value = agent.get_value(next_obs).reshape(1, -1)\n", - " if args.gae:\n", - " advantages = torch.zeros_like(rewards).to(device)\n", - " lastgaelam = 0\n", - " for t in reversed(range(args.num_steps)):\n", - " if t == args.num_steps - 1:\n", - " nextnonterminal = 1.0 - next_done\n", - " nextvalues = 
next_value\n", - " else:\n", - " nextnonterminal = 1.0 - dones[t + 1]\n", - " nextvalues = values[t + 1]\n", - " delta = rewards[t] + args.gamma * nextvalues * nextnonterminal - values[t]\n", - " advantages[t] = lastgaelam = delta + args.gamma * args.gae_lambda * nextnonterminal * lastgaelam\n", - " returns = advantages + values\n", - " else:\n", - " returns = torch.zeros_like(rewards).to(device)\n", - " for t in reversed(range(args.num_steps)):\n", - " if t == args.num_steps - 1:\n", - " nextnonterminal = 1.0 - next_done\n", - " next_return = next_value\n", - " else:\n", - " nextnonterminal = 1.0 - dones[t + 1]\n", - " next_return = returns[t + 1]\n", - " returns[t] = rewards[t] + args.gamma * nextnonterminal * next_return\n", - " advantages = returns - values\n", - "\n", - " # flatten the batch\n", - " b_obs = obs.reshape((-1,) + envs.single_observation_space.shape)\n", - " b_logprobs = logprobs.reshape(-1)\n", - " b_actions = actions.reshape((-1,) + envs.single_action_space.shape)\n", - " b_advantages = advantages.reshape(-1)\n", - " b_returns = returns.reshape(-1)\n", - " b_values = values.reshape(-1)\n", - "\n", - " # Optimizing the policy and value network\n", - " b_inds = np.arange(args.batch_size)\n", - " clipfracs = []\n", - " for epoch in range(args.update_epochs):\n", - " np.random.shuffle(b_inds)\n", - " for start in range(0, args.batch_size, args.minibatch_size):\n", - " end = start + args.minibatch_size\n", - " mb_inds = b_inds[start:end]\n", - "\n", - " _, newlogprob, entropy, newvalue = agent.get_action_and_value(b_obs[mb_inds], b_actions.long()[mb_inds])\n", - " logratio = newlogprob - b_logprobs[mb_inds]\n", - " ratio = logratio.exp()\n", - "\n", - " with torch.no_grad():\n", - " # calculate approx_kl http://joschu.net/blog/kl-approx.html\n", - " old_approx_kl = (-logratio).mean()\n", - " approx_kl = ((ratio - 1) - logratio).mean()\n", - " clipfracs += [((ratio - 1.0).abs() > args.clip_coef).float().mean().item()]\n", - "\n", - " mb_advantages = 
b_advantages[mb_inds]\n", - " if args.norm_adv:\n", - " mb_advantages = (mb_advantages - mb_advantages.mean()) / (mb_advantages.std() + 1e-8)\n", - "\n", - " # Policy loss\n", - " pg_loss1 = -mb_advantages * ratio\n", - " pg_loss2 = -mb_advantages * torch.clamp(ratio, 1 - args.clip_coef, 1 + args.clip_coef)\n", - " pg_loss = torch.max(pg_loss1, pg_loss2).mean()\n", - "\n", - " # Value loss\n", - " newvalue = newvalue.view(-1)\n", - " if args.clip_vloss:\n", - " v_loss_unclipped = (newvalue - b_returns[mb_inds]) ** 2\n", - " v_clipped = b_values[mb_inds] + torch.clamp(\n", - " newvalue - b_values[mb_inds],\n", - " -args.clip_coef,\n", - " args.clip_coef,\n", - " )\n", - " v_loss_clipped = (v_clipped - b_returns[mb_inds]) ** 2\n", - " v_loss_max = torch.max(v_loss_unclipped, v_loss_clipped)\n", - " v_loss = 0.5 * v_loss_max.mean()\n", - " else:\n", - " v_loss = 0.5 * ((newvalue - b_returns[mb_inds]) ** 2).mean()\n", - "\n", - " entropy_loss = entropy.mean()\n", - " loss = pg_loss - args.ent_coef * entropy_loss + v_loss * args.vf_coef\n", - "\n", - " optimizer.zero_grad()\n", - " loss.backward()\n", - " nn.utils.clip_grad_norm_(agent.parameters(), args.max_grad_norm)\n", - " optimizer.step()\n", - "\n", - " if args.target_kl is not None:\n", - " if approx_kl > args.target_kl:\n", - " break\n", - "\n", - " y_pred, y_true = b_values.cpu().numpy(), b_returns.cpu().numpy()\n", - " var_y = np.var(y_true)\n", - " explained_var = np.nan if var_y == 0 else 1 - np.var(y_true - y_pred) / var_y\n", - "\n", - " # TRY NOT TO MODIFY: record rewards for plotting purposes\n", - " writer.add_scalar(\"charts/learning_rate\", optimizer.param_groups[0][\"lr\"], global_step)\n", - " writer.add_scalar(\"losses/value_loss\", v_loss.item(), global_step)\n", - " writer.add_scalar(\"losses/policy_loss\", pg_loss.item(), global_step)\n", - " writer.add_scalar(\"losses/entropy\", entropy_loss.item(), global_step)\n", - " writer.add_scalar(\"losses/old_approx_kl\", old_approx_kl.item(), 
global_step)\n", - " writer.add_scalar(\"losses/approx_kl\", approx_kl.item(), global_step)\n", - " writer.add_scalar(\"losses/clipfrac\", np.mean(clipfracs), global_step)\n", - " writer.add_scalar(\"losses/explained_variance\", explained_var, global_step)\n", - " print(\"SPS:\", int(global_step / (time.time() - start_time)))\n", - " writer.add_scalar(\"charts/SPS\", int(global_step / (time.time() - start_time)), global_step)\n", - "\n", - " envs.close()\n", - " writer.close()\n", - "\n", - " # Create the evaluation environment\n", - " eval_env = gym.make(args.env_id)\n", - "\n", - " package_to_hub(repo_id = args.repo_id,\n", - " model = agent, # The model we want to save\n", - " hyperparameters = args,\n", - " eval_env = gym.make(args.env_id),\n", - " logs= f\"runs/{run_name}\",\n", - " )\n", - " " - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "JquRrWytA6eo" - }, - "source": [ - "To be able to share your model with the community there are three more steps to follow:\n", - "\n", - "1๏ธโƒฃ (If it's not already done) create an account to HF โžก https://huggingface.co/join\n", - "\n", - "2๏ธโƒฃ Sign in and then, you need to store your authentication token from the Hugging Face website.\n", - "- Create a new token (https://huggingface.co/settings/tokens) **with write role**\n", - "\n", - "\"Create\n", - "\n", - "- Copy the token \n", - "- Run the cell below and paste the token" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "GZiFBBlzxzxY" - }, - "outputs": [], - "source": [ - "from huggingface_hub import notebook_login\n", - "notebook_login()\n", - "!git config --global credential.helper store" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "_tsf2uv0g_4p" - }, - "source": [ - "If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "jRqkGvk7pFQ6" - }, - 
"source": [ - "## Let's start the training ๐Ÿ”ฅ\n", - "- โš ๏ธ โš ๏ธ โš ๏ธ Don't use **the same repo id with the one you used for the Unit 1** \n", - "- Now that you've coded from scratch PPO and added the Hugging Face Integration, we're ready to start the training ๐Ÿ”ฅ" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "0tmEArP8ug2l" - }, - "source": [ - "- First, you need to copy all your code to a file you create called `ppo.py`" - ] - }, - { - "cell_type": "markdown", - "source": [ - "\"PPO\"/" - ], - "metadata": { - "id": "Sq0My0LOjPYR" - } - }, - { - "cell_type": "markdown", - "source": [ - "\"PPO\"/" - ], - "metadata": { - "id": "A8C-Q5ZyjUe3" - } - }, - { - "cell_type": "markdown", - "metadata": { - "id": "VrS80GmMu_j5" - }, - "source": [ - "- Now we just need to run this python script using `python .py` with the additional parameters we defined with `argparse`\n", - "\n", - "- You should modify more hyperparameters otherwise the training will not be super stable." - ] - }, - { - "cell_type": "code", - "source": [ - "!python ppo.py --env-id=\"LunarLander-v2\" --repo-id=\"YOUR_REPO_ID\" --total-timesteps=50000" - ], - "metadata": { - "id": "KXLih6mKseBs" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "eVsVJ5AdqLE7" - }, - "source": [ - "## Some additional challenges ๐Ÿ†\n", - "The best way to learn **is to try things by your own**! 
Why not trying another environment?\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "nYdl758GqLXT" - }, - "source": [ - "See you on Unit 8, part 2 where we going to train agents to play Doom ๐Ÿ”ฅ\n", - "## Keep learning, stay awesome ๐Ÿค—" - ] - } - ], - "metadata": { - "colab": { - "private_outputs": true, - "provenance": [], - "history_visible": true, - "include_colab_link": true - }, - "gpuClass": "standard", - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - }, - "language_info": { - "name": "python" - }, - "accelerator": "GPU" - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/units/en/unit3/hands-on.mdx b/units/en/unit3/hands-on.mdx index c3c156a..eb1c9fc 100644 --- a/units/en/unit3/hands-on.mdx +++ b/units/en/unit3/hands-on.mdx @@ -33,46 +33,50 @@ And you can check your progress here ๐Ÿ‘‰ https://huggingface.co/spaces/ThomasSi [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit3/unit3.ipynb) - # Unit 3: Deep Q-Learning with Atari Games ๐Ÿ‘พ using RL Baselines3 Zoo Unit 3 Thumbnail -In this notebook, **you'll train a Deep Q-Learning agent** playing Space Invaders using [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), a training framework based on [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) that provides scripts for training, evaluating agents, tuning parameters, plotting results and recording videos. +In this hands-on, **you'll train a Deep Q-Learning agent** playing Space Invaders using [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), a training framework based on [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results and recording videos. 
We're using the [RL-Baselines-3 Zoo integration, a vanilla version of Deep Q-Learning](https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) with no extensions such as Double-DQN, Dueling-DQN, and Prioritized Experience Replay. -โฌ‡๏ธ Here is an example of what **you will achieve** โฌ‡๏ธ - -```python -%%html - -``` - ### ๐ŸŽฎ Environments: -- SpacesInvadersNoFrameskip-v4 +- [SpacesInvadersNoFrameskip-v4](https://gymnasium.farama.org/environments/atari/space_invaders/) + +You can see the difference between Space Invaders versions here ๐Ÿ‘‰ https://gymnasium.farama.org/environments/atari/space_invaders/#variants ### ๐Ÿ“š RL-Library: - [RL-Baselines3-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo) -## Objectives ๐Ÿ† - -At the end of the notebook, you will: +## Objectives of this hands-on ๐Ÿ† +At the end of the hands-on, you will: - Be able to understand deeper **how RL Baselines3 Zoo works**. - Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score ๐Ÿ”ฅ. - ## Prerequisites ๐Ÿ—๏ธ -Before diving into the notebook, you need to: + +Before diving into the hands-on, you need to: ๐Ÿ”ฒ ๐Ÿ“š **[Study Deep Q-Learning by reading Unit 3](https://huggingface.co/deep-rl-course/unit3/introduction)** ๐Ÿค— -We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues). +We're constantly trying to improve our tutorials, so **if you find some issues in this hands-on**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues). # Let's train a Deep Q-Learning agent playing Atari' Space Invaders ๐Ÿ‘พ and upload it to the Hub. + +We strongly recommend students **to use Google Colab for the hands-on exercises instead of running them on their personal computers**. 
+ +By using Google Colab, **you can focus on learning and experimenting without worrying about the technical aspects of setting up your environments**. + +To validate this hands-on for the certification process, you need to push your trained model to the Hub and **get a result of >= 200**. + +To find your result, go to the leaderboard and find your model; **the result = mean_reward - std of reward**. + +For more information about the certification process, check this section ๐Ÿ‘‰ https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process + ## Set the GPU ๐Ÿ’ช - To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type` @@ -83,11 +87,37 @@ We're constantly trying to improve our tutorials, so **if you find some issues i GPU Step 2 +# Install RL-Baselines3 Zoo and its dependencies ๐Ÿ“š + +If you see `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.` **this is normal and it's not a critical error**: there's a version conflict, but the packages we need are installed. + +```python +# For now we install this update of RL-Baselines3 Zoo +pip install git+https://github.com/DLR-RM/rl-baselines3-zoo@update/hf +``` + +IF AND ONLY IF THE VERSION ABOVE DOES NOT EXIST ANYMORE. UNCOMMENT AND INSTALL THE ONE BELOW + +```python +#pip install rl_zoo3==2.0.0a9 +``` + +```bash +apt-get install swig cmake ffmpeg +``` + +To be able to use Atari games in Gymnasium, we need to install the `atari` package, and `accept-rom-license` to download the ROM files (game files). + +```python +!pip install gymnasium[atari] +!pip install gymnasium[accept-rom-license] +``` + ## Create a virtual display ๐Ÿ”ฝ -During the hands-on, we'll need to generate a replay video.
To do so, if you train on a headless machine, **you need a virtual screen to be able to render the environment** (and thus record the frames). -The following cell will install the librairies and create and run a virtual screen ๐Ÿ–ฅ +Hence the following cell will install the libraries and create and run a virtual screen ๐Ÿ–ฅ ```bash apt install python-opengl @@ -96,14 +126,6 @@ apt install xvfb pip3 install pyvirtualdisplay ``` -```bash -apt-get install swig cmake freeglut3-dev -``` - -```bash -pip install pyglet==1.5.1 -``` - ```python # Virtual display from pyvirtualdisplay import Display @@ -112,94 +134,97 @@ virtual_display = Display(visible=0, size=(1400, 900)) virtual_display.start() ``` -## Clone RL-Baselines3 Zoo Repo ๐Ÿ“š -You could directly install from the Python package (`pip install rl_zoo3`), but since we want **the full installation with extra environments and dependencies**, we're going to clone the `RL-Baselines3-Zoo` repository and install from source. - -```bash -git clone https://github.com/DLR-RM/rl-baselines3-zoo -``` - -## Install dependencies ๐Ÿ”ฝ -We can now install the dependencies RL-Baselines3 Zoo needs (this can take 5min โฒ) - -```bash -cd /content/rl-baselines3-zoo/ -``` - -```bash -pip install setuptools==65.5.0 -pip install -r requirements.txt -# Since colab uses Python 3.9 we need to add this installation -pip install gym[atari,accept-rom-license]==0.21.0 -``` - ## Train our Deep Q-Learning Agent to Play Space Invaders ๐Ÿ‘พ To train an agent with RL-Baselines3-Zoo, we just need to do two things: -1. We define the hyperparameters in `/content/rl-baselines3-zoo/hyperparams/dqn.yml` -DQN Hyperparameters +1. Create a hyperparameter config file named `dqn.yml` that will contain our training hyperparameters.
+This is a template example: + +``` +SpaceInvadersNoFrameskip-v4: + env_wrapper: + - stable_baselines3.common.atari_wrappers.AtariWrapper + frame_stack: 4 + policy: 'CnnPolicy' + n_timesteps: !!float 1e7 + buffer_size: 100000 + learning_rate: !!float 1e-4 + batch_size: 32 + learning_starts: 100000 + target_update_interval: 1000 + train_freq: 4 + gradient_steps: 1 + exploration_fraction: 0.1 + exploration_final_eps: 0.01 + # If True, you need to deactivate handle_timeout_termination + # in the replay_buffer_kwargs + optimize_memory_usage: False +``` Here we see that: -- We use the `Atari Wrapper` that does the pre-processing (Frame reduction, grayscale, stack four frames), -- We use the `CnnPolicy`, since we use Convolutional layers to process the frames. -- We train the model for 10 million `n_timesteps`. -- Memory (Experience Replay) size is 100000, i.e. the number of experience steps you saved to train again your agent with. +- We use the `Atari Wrapper` that preprocesses the input (frame reduction, grayscale, stack 4 frames) +- We use `CnnPolicy`, since we use Convolutional layers to process the frames +- We train it for 10 million `n_timesteps` +- Memory (Experience Replay) size is 100000, i.e. the number of experience steps kept in memory to train your agent with. -๐Ÿ’ก My advice is to **reduce the training timesteps to 1M,** which will take about 90 minutes on a P100. `!nvidia-smi` will tell you what GPU you're using. At 10 million steps, this will take about 9 hours, which could likely result in Colab timing out. I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`. +๐Ÿ’ก My advice is to **reduce the training timesteps to 1M,** which will take about 90 minutes on a P100. `!nvidia-smi` will tell you what GPU you're using. At 10 million steps, this will take about 9 hours. I recommend running this on your local computer (or somewhere else). Just click on: `File>Download`.
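The `exploration_fraction` and `exploration_final_eps` entries above control DQN's epsilon-greedy schedule: epsilon is annealed linearly down to `exploration_final_eps` over the first `exploration_fraction * n_timesteps` steps, then held constant. A minimal sketch of that schedule (an illustrative re-implementation assuming the default initial epsilon of 1.0, not code from RL-Baselines3-Zoo):

```python
# Illustrative sketch of DQN's linear epsilon-greedy schedule
# (assumption: initial epsilon is 1.0, Stable-Baselines3's default).
def epsilon(step, total_timesteps=10_000_000,
            exploration_fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Anneal epsilon linearly over exploration_fraction * total_timesteps
    steps, then keep it at final_eps for the rest of training."""
    end_step = exploration_fraction * total_timesteps
    if step >= end_step:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * (step / end_step)

print(epsilon(0))          # -> 1.0 (fully random actions at the start)
print(epsilon(500_000))    # -> ~0.505 (halfway through the annealing phase)
print(epsilon(2_000_000))  # -> 0.01 (annealing ended after 1M steps)
```

With `n_timesteps: !!float 1e7` and `exploration_fraction: 0.1`, exploration therefore stops decaying after the first million steps.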
In terms of hyperparameters optimization, my advice is to focus on these 3 hyperparameters: - `learning_rate` - `buffer_size (Experience Memory size)` - `batch_size` -As a good practice, you need to **check the documentation to understand what each hyperparameter does**: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html#parameters +As a good practice, you need to **check the documentation to understand what each hyperparameter does**: https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html#parameters -2. We run `train.py` and save the models on `logs` folder ๐Ÿ“ +2. We start the training and save the models in the `logs` folder ๐Ÿ“ + +- Define the algorithm after `--algo`, where to save the model after `-f`, and where the hyperparameter config is after `-c`. ```bash -python train.py --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ +python -m rl_zoo3.train --algo ________ --env SpaceInvadersNoFrameskip-v4 -f _________ -c _________ ``` #### Solution ```bash -python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ +python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -c dqn.yml ``` ## Let's evaluate our agent ๐Ÿ‘€ + - RL-Baselines3-Zoo provides `enjoy.py`, a python script to evaluate our agent. In most RL libraries, we call the evaluation script `enjoy.py`.
- Let's evaluate it for 5000 timesteps ๐Ÿ”ฅ ```bash -python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/ +python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps _________ --folder logs/ ``` #### Solution ```bash -python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/ +python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 --folder logs/ ``` ## Publish our trained model on the Hub ๐Ÿš€ -Now that we saw we got good results after the training, we can publish our trained model to the Hub with one line of code. +Now that we saw we got good results after the training, we can publish our trained model on the Hub ๐Ÿค— with one line of code. Space Invaders model -By using `rl_zoo3.push_to_hub.py`, **you evaluate, record a replay, generate a model card of your agent, and push it to the Hub**. +By using `rl_zoo3.push_to_hub`, **you evaluate, record a replay, generate a model card of your agent and push it to the Hub**.
This way: -- You can **showcase your work** ๐Ÿ”ฅ +- You can **showcase your work** ๐Ÿ”ฅ - You can **visualize your agent playing** ๐Ÿ‘€ -- You can **share an agent with the community that others can use** ๐Ÿ’พ +- You can **share an agent with the community that others can use** ๐Ÿ’พ - You can **access a leaderboard ๐Ÿ† to see how well your agent is performing compared to your classmates** ๐Ÿ‘‰ https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard -To be able to share your model with the community, there are three more steps to follow: +To be able to share your model with the community, there are three more steps to follow: -1๏ธโƒฃ (If it's not already done) create an account in HF โžก https://huggingface.co/join +1๏ธโƒฃ (If it's not already done) create an account on HF โžก https://huggingface.co/join 2๏ธโƒฃ Sign in and then, you need to store your authentication token from the Hugging Face website. - Create a new token (https://huggingface.co/settings/tokens) **with write role** @@ -209,20 +234,23 @@ To be able to share your model with the community, there are three more steps to - Copy the token - Run the cell below and past the token -```python +```bash from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub. notebook_login() -git config --global credential.helper store +!git config --global credential.helper store ``` -If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` +If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` -3๏ธโƒฃ We're now ready to push our trained agent to the Hub ๐Ÿ”ฅ +3๏ธโƒฃ We're now ready to push our trained agent to the ๐Ÿค— Hub ๐Ÿ”ฅ -Let's run the `push_to_hub.py` file to upload our trained agent to the Hub.
There are two important parameters: +Let's run the `push_to_hub.py` file to upload our trained agent to the Hub. -* `--repo-name `: The name of the repo -* `-orga`: Your Hugging Face username +`--repo-name `: The name of the repo + +`-orga`: Your Hugging Face username + +`-f`: Where the trained model folder is (in our case `logs`) Select Id @@ -236,6 +264,8 @@ python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -- python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 --repo-name dqn-SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/ ``` Congrats ๐Ÿฅณ you've just trained and uploaded your first Deep Q-Learning agent using RL-Baselines-3 Zoo. The script above should have displayed a link to a model repository such as https://huggingface.co/ThomasSimonini/dqn-SpaceInvadersNoFrameskip-v4. When you go to this link, you can: - See a **video preview of your agent** at the right. @@ -249,7 +279,7 @@ Under the hood, the Hub uses git-based repositories (don't worry if you don't kn ## Load a powerful trained model ๐Ÿ”ฅ -The Stable-Baselines3 team uploaded **more than 150 trained Deep Reinforcement Learning agents on the Hub**. You can download them and use them to see how they perform! +The Stable-Baselines3 team uploaded **more than 150 trained Deep Reinforcement Learning agents on the Hub**. You can find them here: ๐Ÿ‘‰ https://huggingface.co/sb3 Some examples: Let's load an agent playing Beam Rider: https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4 -```python - -``` - 1. We download the model using `rl_zoo3.load_from_hub`, and place it in a new folder that we can call `rl_trained` ```bash python -m rl_zoo3.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga s @@ -275,19 +301,19 @@ 2.
Let's evaluate if for 5000 timesteps ```bash -python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ +python -m rl_zoo3.enjoy --algo dqn --env BeamRiderNoFrameskip-v4 -n 5000 -f rl_trained/ --no-render ``` -Why not try training your own **Deep Q-Learning Agent playing BeamRiderNoFrameskip-v4? ๐Ÿ†.** +Why not try training your own **Deep Q-Learning Agent playing BeamRiderNoFrameskip-v4? ๐Ÿ†.** -If you want to try, check out https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4#hyperparameters. There, **in the model card, you'll find the hyperparameters of the trained agent.** +If you want to try, check https://huggingface.co/sb3/dqn-BeamRiderNoFrameskip-v4#hyperparameters: **in the model card, you'll find the hyperparameters of the trained agent.** -Finding hyperparameters in general can be a daunting task. Fortunately, we'll see in the next bonus Unit how we can **use Optuna for optimizing the Hyperparameters ๐Ÿ”ฅ.** +But finding hyperparameters can be a daunting task. Fortunately, we'll see in the next Unit how we can **use Optuna for optimizing the Hyperparameters ๐Ÿ”ฅ.** ## Some additional challenges ๐Ÿ† -The best way to learn **is to try things on your own**! +The best way to learn **is to try things on your own**! In the [Leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?
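As a reminder, the leaderboard score is `mean_reward - std of reward` over the evaluation episodes, so a strong but unstable agent is penalized. A minimal sketch of that computation (the episode rewards below are made-up numbers, not real results):

```python
from statistics import mean, pstdev

# Hypothetical evaluation episode rewards (illustrative values only)
episode_rewards = [520.0, 610.0, 455.0, 700.0, 580.0]

mean_reward = mean(episode_rewards)
std_reward = pstdev(episode_rewards)  # population standard deviation
result = mean_reward - std_reward     # the score used for ranking

print(f"{mean_reward:.2f} +/- {std_reward:.2f} -> result = {result:.2f}")
```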
@@ -297,18 +323,25 @@ Here's a list of environments you can try to train your agent with: - EnduroNoFrameskip-v4 - PongNoFrameskip-v4 -Also, **if you want to learn to implement Deep Q-Learning by yourself**, you definitely should look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py +Also, **if you want to learn to implement Deep Q-Learning by yourself**, you definitely should look at the CleanRL implementation: https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py Environments ________________________________________________________________________ Congrats on finishing this chapter! -If youโ€™re still feel confused with all these elements...it's totally normal! **This was the same for me and for all people who study RL.** +If you still feel confused by all these elements...it's totally normal! **This was the same for me and for all people who studied RL.** -Take time to really **grasp the material before continuing and try the additional challenges**. Itโ€™s important to master these elements and have a solid foundations. +Take time to really **grasp the material before continuing and try the additional challenges**. It's important to master these elements and have solid foundations. -In the next unit, **weโ€™re going to learn about [Optuna](https://optuna.org/)**. One of the most critical tasks in Deep Reinforcement Learning is to find a good set of training hyperparameters. And Optuna is a library that helps you to automate the search. +In the next unit, **we're going to learn about [Optuna](https://optuna.org/)**. One of the most critical tasks in Deep Reinforcement Learning is to find a good set of training hyperparameters. And Optuna is a library that helps you to automate the search. + + +### This is a course built with you ๐Ÿ‘ท๐Ÿฟโ€โ™€๏ธ + +Finally, we want to improve and update the course iteratively with your feedback.
If you have some, please fill this form ๐Ÿ‘‰ https://forms.gle/3HgA7bEHwAmmLfwh9 + +We're constantly trying to improve our tutorials, so **if you find some issues in this hands-on**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues). See you on Bonus unit 2! ๐Ÿ”ฅ