{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "unit7.ipynb",
"provenance": [],
"collapsed_sections": [],
"private_outputs": true,
"authorship_tag": "ABX9TyP0xA9xdlY2GM5JM+jt/BRQ",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU",
"gpuClass": "standard"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit7/unit7.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# Unit 7: Advantage Actor Critic (A2C) using Robotics Simulations with PyBullet 🤖\n",
"In this short notebook, you'll learn to use A2C with PyBullet and train an agent to walk: more precisely, a spider (they say Ant, but come on... it's a spider 😆) 🕸️\n",
"\n",
"❓ If you have questions, please post them on the #study-group discord channel 👉 https://discord.gg/aYka4Yhff9\n",
"\n",
"🎮 Environments: \n",
"- `AntBulletEnv-v0` 🕸️\n",
"\n",
"⬇️ Here is an example of what **you will achieve in just a few minutes.** ⬇️"
],
"metadata": {
"id": "-PTReiOw-RAN"
}
},
{
"cell_type": "code",
"source": [
"%%html\n",
"<video controls autoplay><source src=\"https://huggingface.co/ThomasSimonini/ppo-AntBulletEnv-v0/resolve/main/replay.mp4\" type=\"video/mp4\"></video>"
],
"metadata": {
"id": "SvCMOt-vNJ91"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"💡 We advise you to use Google Colab since some environments work only with Ubuntu. The free version of Google Colab is perfect for this tutorial. Let's get started 🚀"
],
"metadata": {
"id": "XhKgm80b_GNc"
}
},
{
"cell_type": "markdown",
"source": [
"## This notebook is from the Deep Reinforcement Learning Class\n",
"\n",
""
],
"metadata": {
"id": "ukt3w2H81D5-"
}
},
{
"cell_type": "markdown",
"source": [
"In this free course, you will:\n",
"\n",
"- 📖 Study Deep Reinforcement Learning in **theory and practice**.\n",
"- 🧑💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, and RLlib.\n",
"- 🤖 Train **agents in unique environments**.\n",
"\n",
"And more! Check 📚 the syllabus 👉 https://github.com/huggingface/deep-rl-class\n",
"\n",
"The best way to keep in touch is to join our discord server to exchange with the community and with us 👉🏻 https://discord.gg/aYka4Yhff9"
],
"metadata": {
"id": "MzUiibM-1Gp-"
}
},
{
"cell_type": "markdown",
"source": [
"## Prerequisites 🏗️\n",
"Before diving into the notebook, you need to:\n",
"\n",
"🔲 📚 [Read the Unit 7 README](https://github.com/huggingface/deep-rl-class/blob/main/unit7/README.md), which contains all the information.\n",
"\n",
"🔲 📚 **Study Advantage Actor Critic (A2C)** by reading the chapter 👉 https://huggingface.co/blog/deep-rl-a2c"
],
"metadata": {
"id": "OK2MWG8n1M6d"
}
},
{
"cell_type": "markdown",
"source": [
"### Step 0: Set the GPU 💪\n",
"- To **speed up the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`\n",
"\n"
],
"metadata": {
"id": "cIgQBndQ1WTf"
}
},
{
"cell_type": "markdown",
"source": [
"- `Hardware Accelerator > GPU`"
],
"metadata": {
"id": "g9o-rtbB1Wrb"
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"id": "i9-uO83e1aRI"
}
},
{
"cell_type": "markdown",
"source": [
"### Step 1: Install dependencies 🔽\n",
"The first step is to install the dependencies; we’ll install multiple ones:\n",
"\n",
"- `pybullet`: Contains the `AntBullet` environment 🚶\n",
"- `stable-baselines3[extra]`: The deep reinforcement learning library.\n",
"- `huggingface_sb3`: Additional code for Stable-Baselines3 to load models from and upload models to the Hugging Face 🤗 Hub.\n",
"- `huggingface_hub`: Library allowing anyone to work with the Hub repositories."
],
"metadata": {
"id": "e1obkbdJ_KnG"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2yZRi_0bQGPM"
},
"outputs": [],
"source": [
"!pip install pybullet\n",
"!pip install stable-baselines3[extra]\n",
"!pip install huggingface_sb3\n",
"!pip install huggingface_hub"
]
},
{
"cell_type": "markdown",
"source": [
"### Step 2: Import the packages 📦"
],
"metadata": {
"id": "QTep3PQQABLr"
}
},
{
"cell_type": "code",
"source": [
"import gym\n",
"import pybullet_envs\n",
"\n",
"import os\n",
"\n",
"from huggingface_sb3 import load_from_hub, package_to_hub\n",
"\n",
"from stable_baselines3 import A2C\n",
"from stable_baselines3.common.evaluation import evaluate_policy\n",
"from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize\n",
"from stable_baselines3.common.env_util import make_vec_env\n",
"\n",
"from huggingface_hub import notebook_login\n",
"\n",
"import torch\n",
"from torch import nn"
],
"metadata": {
"id": "HpiB8VdnQ7Bk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Step 3: Create the AntBulletEnv-v0 🕸\n",
"#### The environment 🎮\n",
"In this environment, the agent needs to learn to use its different joints correctly in order to walk."
],
"metadata": {
"id": "frVXOrnlBerQ"
}
},
{
"cell_type": "code",
"source": [
"env_id = \"AntBulletEnv-v0\"\n",
"# Create the env\n",
"env = gym.make(env_id)\n",
"\n",
"# Get the state space and action space sizes\n",
"s_size = env.observation_space.shape[0]\n",
"a_size = env.action_space.shape[0]"
],
"metadata": {
"id": "JpU-JCDQYYax"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(\"_____OBSERVATION SPACE_____ \\n\")\n",
"print(\"The State Space is: \", s_size)\n",
"print(\"Sample observation\", env.observation_space.sample()) # Get a random observation"
],
"metadata": {
"id": "2ZfvcCqEYgrg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"print(\"\\n _____ACTION SPACE_____ \\n\")\n",
"print(\"The Action Space is: \", a_size)\n",
"print(\"Action Space Sample\", env.action_space.sample()) # Take a random action"
],
"metadata": {
"id": "Tc89eLTYYkK2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"We need to [normalize input features](https://stable-baselines3.readthedocs.io/en/master/guide/rl_tips.html). For that, a wrapper exists that will compute a running average and standard deviation of the input features.\n",
"\n",
"The same wrapper can also normalize rewards by adding `norm_reward = True` (we keep reward normalization disabled here)."
],
"metadata": {
"id": "1ZyX6qf3Zva9"
}
},
{
"cell_type": "code",
"source": [
"env = make_vec_env(env_id, n_envs=4)\n",
"\n",
"# Add this wrapper to normalize the observations (reward normalization stays off)\n",
"env = VecNormalize(env, norm_obs=True, norm_reward=False, clip_obs=10.)"
],
"metadata": {
"id": "1RsDtHHAQ9Ie"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Step 4: Create the A2C Model 🤖\n",
"\n",
"In this case, because we have a vector as input, we'll use an MLP (multi-layer perceptron) as the policy.\n",
"\n",
"To find good hyperparameters, I checked the [official trained agents by the Stable-Baselines3 team](https://huggingface.co/sb3)."
],
"metadata": {
"id": "4JmEVU6z1ZA-"
}
},
{
"cell_type": "code",
"source": [
"model = A2C(policy = \"MlpPolicy\",\n",
"            env = env,\n",
"            gae_lambda = 0.9,\n",
"            gamma = 0.99,\n",
"            learning_rate = 0.00096,\n",
"            max_grad_norm = 0.5,\n",
"            n_steps = 8,\n",
"            vf_coef = 0.4,\n",
"            ent_coef = 0.0,\n",
"            tensorboard_log = \"./tensorboard\",\n",
"            policy_kwargs=dict(\n",
"                log_std_init=-2, ortho_init=False),\n",
"            normalize_advantage=False,\n",
"            use_rms_prop=True,\n",
"            use_sde=True,\n",
"            verbose=1)"
],
"metadata": {
"id": "vR3T4qFt164I"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Step 5: Train the A2C agent 🏃\n",
"- Let's train our agent for 2,000,000 timesteps. Don't forget to use the GPU on Colab; training will take approximately 25-40 minutes."
],
"metadata": {
"id": "opyK3mpJ1-m9"
}
},
{
"cell_type": "code",
"source": [
"model.learn(2_000_000)"
],
"metadata": {
"id": "4TuGHZD7RF1G"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Save the model and the VecNormalize statistics\n",
"model.save(\"a2c-AntBulletEnv-v0\")\n",
"env.save(\"vec_normalize.pkl\")"
],
"metadata": {
"id": "MfYtjj19cKFr"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Step 6: Evaluate the agent 📈\n",
"- Now that our agent is trained, we need to **check its performance**.\n",
"- Stable-Baselines3 provides a method to do that: `evaluate_policy`.\n",
"- In this case, we got a mean reward of `2371.90 +/- 16.50`."
],
"metadata": {
"id": "01M9GCd32Ig-"
}
},
{
"cell_type": "code",
"source": [
"from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize\n",
"\n",
"# Load the saved statistics\n",
"eval_env = DummyVecEnv([lambda: gym.make(\"AntBulletEnv-v0\")])\n",
"eval_env = VecNormalize.load(\"vec_normalize.pkl\", eval_env)\n",
"\n",
"# do not update the normalization statistics at test time\n",
"eval_env.training = False\n",
"# reward normalization is not needed at test time\n",
"eval_env.norm_reward = False\n",
"\n",
"# Load the agent\n",
"model = A2C.load(\"a2c-AntBulletEnv-v0\")\n",
"\n",
"mean_reward, std_reward = evaluate_policy(model, eval_env)\n",
"\n",
"print(f\"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}\")"
],
"metadata": {
"id": "liirTVoDkHq3"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Step 7: Publish our trained model on the Hub 🔥\n",
"Now that we've seen good results after training, we can publish our trained model on the Hub 🤗 with one line of code.\n",
"\n",
"Here's an example of a Model Card:"
],
"metadata": {
"id": "44L9LVQaavR8"
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"id": "Ul-eUa-xazBm"
}
},
{
"cell_type": "markdown",
"source": [
"Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent."
],
"metadata": {
"id": "oJ3YqEgwbd4Y"
}
},
{
"cell_type": "markdown",
"source": [
"By using `package_to_hub`, **you evaluate, record a replay, generate a model card of your agent, and push it to the Hub**.\n",
"\n",
"This way:\n",
"- You can **showcase your work** 🔥\n",
"- You can **visualize your agent playing** 👀\n",
"- You can **share with the community an agent that others can use** 💾\n",
"- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard"
],
"metadata": {
"id": "MkMk99m8bgaQ"
}
},
{
"cell_type": "markdown",
"source": [
"To be able to share your model with the community, there are three more steps to follow:\n",
"\n",
"1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join\n",
"\n",
"2️⃣ Sign in, then store your authentication token from the Hugging Face website.\n",
"- Create a new token (https://huggingface.co/settings/tokens) **with the write role**"
],
"metadata": {
"id": "osyjFCM3bhQv"
}
},
{
"cell_type": "markdown",
"source": [
""
],
"metadata": {
"id": "gXtpU42vbjTa"
}
},
{
"cell_type": "code",
"source": [
"from huggingface_hub import notebook_login\n",
"notebook_login()"
],
"metadata": {
"id": "zHIVtwpnbmU6"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`\n",
"\n",
"3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥 using the `package_to_hub()` function"
],
"metadata": {
"id": "BTdZMDfjbkrC"
}
},
{
"cell_type": "code",
"source": [
"package_to_hub(\n",
"    model=model,\n",
"    model_name=f\"a2c-{env_id}\",\n",
"    model_architecture=\"A2C\",\n",
"    env_id=env_id,\n",
"    eval_env=eval_env,\n",
"    repo_id=f\"ThomasSimonini/a2c-{env_id}\",  # Replace ThomasSimonini with your Hugging Face username\n",
"    commit_message=\"Initial commit\",\n",
")"
],
"metadata": {
"id": "ueuzWVCUTkfS"
},
"execution_count": null,
"outputs": []
},
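{
"cell_type": "markdown",
"source": [
"Once the agent is on the Hub, anyone can download it again with `load_from_hub` (already imported above). Here is a minimal sketch, assuming the repository and zip filename follow the `a2c-AntBulletEnv-v0` naming used in this notebook; adjust `repo_id` and `filename` to match your own upload."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: load the pushed agent back from the Hub.\n",
"# Assumption: the repo id and the zip filename match the names used above.\n",
"checkpoint = load_from_hub(\n",
"    repo_id=f\"ThomasSimonini/a2c-{env_id}\",  # replace with your own repo id\n",
"    filename=f\"a2c-{env_id}.zip\",\n",
")\n",
"loaded_model = A2C.load(checkpoint)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},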
{
"cell_type": "markdown",
"source": [
"## Some additional challenges 🏆\n",
"The best way to learn **is to try things on your own**! Why not try `HalfCheetahBulletEnv-v0`? (A rough sketch is provided in the cell below.)\n",
"\n",
"In the [Leaderboard](https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?\n",
"\n",
"Here are some ideas to get there:\n",
"* Train for more steps\n",
"* Try different hyperparameters by looking at what your classmates have done 👉 https://huggingface.co/models?other=AntBulletEnv-v0\n",
"* **Push your newly trained model** to the Hub 🔥\n"
],
"metadata": {
"id": "G3xy3Nf3c2O1"
}
},
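{
"cell_type": "markdown",
"source": [
"Here is a minimal sketch for the `HalfCheetahBulletEnv-v0` challenge. It simply reuses the A2C recipe from this notebook; the hyperparameters are copied from the Ant setup as an assumption, not tuned values for HalfCheetah, so treat them as a starting point."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: train A2C on HalfCheetahBulletEnv-v0, reusing the recipe from above.\n",
"# The hyperparameters are the Ant ones and will likely need tuning.\n",
"challenge_env_id = \"HalfCheetahBulletEnv-v0\"\n",
"\n",
"challenge_env = make_vec_env(challenge_env_id, n_envs=4)\n",
"challenge_env = VecNormalize(challenge_env, norm_obs=True, norm_reward=False, clip_obs=10.)\n",
"\n",
"challenge_model = A2C(policy=\"MlpPolicy\",\n",
"                      env=challenge_env,\n",
"                      gae_lambda=0.9,\n",
"                      gamma=0.99,\n",
"                      learning_rate=0.00096,\n",
"                      max_grad_norm=0.5,\n",
"                      n_steps=8,\n",
"                      vf_coef=0.4,\n",
"                      ent_coef=0.0,\n",
"                      policy_kwargs=dict(log_std_init=-2, ortho_init=False),\n",
"                      use_sde=True,\n",
"                      verbose=1)\n",
"\n",
"challenge_model.learn(2_000_000)\n",
"\n",
"# Save the model and the normalization statistics, then push them as before\n",
"challenge_model.save(f\"a2c-{challenge_env_id}\")\n",
"challenge_env.save(\"vec_normalize_halfcheetah.pkl\")"
],
"metadata": {},
"execution_count": null,
"outputs": []
},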
{
"cell_type": "markdown",
"source": [
"See you in Unit 8! 🔥\n",
"## Keep learning, stay awesome 🤗"
],
"metadata": {
"id": "usatLaZ8dM4P"
}
}
]
}