{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"colab_type": "text",
|
||
"id": "view-in-github"
|
||
},
|
||
"source": [
|
||
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "CjRWziAVU2lZ"
|
||
},
|
||
"source": [
|
||
"# Unit 5: Code your first Deep Reinforcement Learning Algorithm with PyTorch: Reinforce. And test its robustness 💪\n",
|
||
"In this notebook, you'll code your first Deep Reinforcement Learning algorithm from scratch: Reinforce (also called Monte Carlo Policy Gradient).\n",
|
||
"\n",
|
||
"Reinforce is a *Policy-Based Method*: a Deep Reinforcement Learning algorithm that tries **to optimize the policy directly without using an action-value function**.\n",
|
||
"More precisely, Reinforce is a *Policy-Gradient Method*, a subclass of *Policy-Based Methods* that aims **to optimize the policy directly by estimating the weights of the optimal policy using Gradient Ascent**.\n",
|
||
"\n",
|
||
"To test its robustness, we're going to train it in 3 different simple environments:\n",
"- CartPole-v1\n",
"- PixelCopter\n",
"- Pong\n",
|
||
"\n",
"❓ If you have questions, please post them in the #study-group-unit1 Discord channel 👉 https://discord.gg/aYka4Yhff9\n",
|
||
"\n",
|
||
"🎮 Environments: \n",
|
||
"- [CartPole-v1](https://www.gymlibrary.dev/environments/classic_control/cart_pole/)\n",
|
||
"- [PixelCopter](https://pygame-learning-environment.readthedocs.io/en/latest/user/games/pixelcopter.html)\n",
|
||
"- [Pong](https://pygame-learning-environment.readthedocs.io/en/latest/user/games/pong.html)\n",
|
||
"\n",
|
||
"⬇️ Here is an example of what **you will achieve in just a few minutes.** ⬇️"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "fzZQhLnOa_uw"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "L_WSo0VUV99t"
|
||
},
|
||
"source": [
|
||
"## Objectives of this notebook 🏆\n",
|
||
"At the end of the notebook, you will:\n",
|
||
"- Be able to **code from scratch a Reinforce algorithm using PyTorch.**\n",
|
||
"- Be able to **test the robustness of your agent using simple environments.**\n",
|
||
"- Be able to **push your trained agent to the Hub** with a nice video replay and an evaluation score 🔥."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "lEPrZg2eWa4R"
|
||
},
|
||
"source": [
|
||
"## This notebook is from Deep Reinforcement Learning Class\n",
|
||
"\n",
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "l4Q9cfIMWfDp"
|
||
},
|
||
"source": [
|
||
"In this free course, you will:\n",
|
||
"\n",
|
||
"- 📖 Study Deep Reinforcement Learning in **theory and practice**.\n",
|
||
"- 🧑💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, and RLlib.\n",
|
||
"- 🤖 Train **agents in unique environments** \n",
|
||
"\n",
"And more! Check 📚 the syllabus 👉 https://github.com/huggingface/deep-rl-class\n",
|
||
"\n",
|
||
"The best way to keep in touch is to join our discord server to exchange with the community and with us 👉🏻 https://discord.gg/aYka4Yhff9"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "mjY-eq3eWh9O"
|
||
},
|
||
"source": [
|
||
"## Prerequisites 🏗️\n",
|
||
"Before diving into the notebook, you need to:\n",
|
||
"\n",
"🔲 📚 [Read the Unit 4 Readme](https://github.com/huggingface/deep-rl-class/blob/main/unit4/README.md), which contains all the information.\n",
|
||
"\n",
|
||
"🔲 📚 **Study Policy Gradients** by reading the chapter 👉 https://huggingface.co/blog/deep-rl-pg "
|
||
]
|
||
},
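{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick reminder of the key result from that chapter (a sketch only; see the chapter for the full derivation), Reinforce performs gradient ascent on the expected return $J(\\theta)$ using the Monte Carlo estimate\n",
"\n",
"$$\\nabla_\\theta J(\\theta) \\approx \\sum_{t} \\nabla_\\theta \\log \\pi_\\theta(a_t|s_t)\\, R(\\tau)$$\n",
"\n",
"where $R(\\tau)$ is the discounted return of the sampled episode $\\tau$ and $\\pi_\\theta$ is the policy we are training."
]
},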
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "In3-qimhWtS3"
|
||
},
|
||
"source": [
|
||
"### Step 0: Set the GPU 💪\n",
"- To **speed up the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "lI-VAt97WtyH"
|
||
},
|
||
"source": [
|
||
"- `Hardware Accelerator > GPU`\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "kL5xHyW-WzJu"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "qvd7_qx_W2fg"
|
||
},
|
||
"source": [
"During the notebook, we'll need to generate a replay video. To do so with Colab, **we need a virtual screen to be able to render the environment** (and thus record the frames).\n",
|
||
"\n",
"Hence, the following cell will install the virtual screen libraries and create and run a virtual screen 🖥"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "9MmelgpcW6nr"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"!apt install python-opengl\n",
|
||
"!apt install ffmpeg\n",
|
||
"!apt install xvfb\n",
|
||
"!pip3 install pyvirtualdisplay\n",
|
||
"\n",
|
||
"# Virtual display\n",
|
||
"from pyvirtualdisplay import Display\n",
|
||
"\n",
|
||
"virtual_display = Display(visible=0, size=(500, 500))\n",
|
||
"virtual_display.start()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "tjrLfPFIW8XK"
|
||
},
|
||
"source": [
|
||
"### Step 1: Install dependencies 🔽\n",
"The first step is to install the dependencies. We’ll install multiple ones:\n",
|
||
"\n",
|
||
"- `gym`\n",
|
||
"- `gym-games`: Extra gym environments made with PyGame.\n",
|
||
"- `huggingface_hub`: 🤗 works as a central place where anyone can share and explore models and datasets. It has versioning, metrics, visualizations and other features that will allow you to easily collaborate with others.\n",
|
||
"\n",
|
||
"You can see here all the Reinforce models available 👉 https://huggingface.co/models?other=reinforce\n",
|
||
"\n",
|
||
"And you can find all the Deep Reinforcement Learning models here 👉 https://huggingface.co/models?pipeline_tag=reinforcement-learning\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "kgxMH5wMXME8"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"!pip install gym\n",
|
||
"!pip install git+https://github.com/ntasfi/PyGame-Learning-Environment.git\n",
|
||
"!pip install git+https://github.com/qlan3/gym-games.git\n",
|
||
"!pip install huggingface_hub\n",
|
||
"\n",
|
||
"!pip install pyyaml==6.0 # avoid key error metadata\n",
|
||
"\n",
|
||
"!pip install pyglet # Virtual Screen"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "AAHAq6RZW3rn"
|
||
},
|
||
"source": [
|
||
"### Step 2: Import the packages 📦\n",
|
||
"In addition to the installed libraries, we also use:\n",
|
||
"\n",
|
||
"- `imageio`: To generate a replay video\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "V8oadoJSWp7C"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"import numpy as np\n",
|
||
"from collections import deque\n",
|
||
"import matplotlib.pyplot as plt\n",
|
||
"%matplotlib inline\n",
|
||
"\n",
|
||
"import torch\n",
|
||
"import torch.nn as nn\n",
|
||
"import torch.nn.functional as F\n",
|
||
"import torch.optim as optim\n",
|
||
"from torch.distributions import Categorical\n",
|
||
"\n",
|
||
"import gym\n",
|
||
"import gym_pygame\n",
|
||
"\n",
|
||
"from huggingface_hub import notebook_login # To log to our Hugging Face account to be able to upload models to the Hub.\n",
|
||
"\n",
|
||
"import imageio"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "hn2Emlm9bXmc"
|
||
},
|
||
"source": [
|
||
"- Let's check if we have a GPU\n",
"- If it's the case, you should see `cuda:0` printed below"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "kaJu5FeZxXGY"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "U5TNYa14aRav"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"print(device)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "PBPecCtBL_pZ"
|
||
},
|
||
"source": [
|
||
"We're now ready to implement our Reinforce algorithm 🔥"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "8KEyKYo2ZSC-"
|
||
},
|
||
"source": [
|
||
"## First agent: Playing CartPole-v1 🤖"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "haLArKURMyuF"
|
||
},
|
||
"source": [
|
||
"### Step 3: Create the CartPole environment and understand how it works\n",
|
||
"#### [The environment 🎮](https://www.gymlibrary.dev/environments/classic_control/cart_pole/)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "vVwcV9LjMzQk"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "AH_TaLKFXo_8"
|
||
},
|
||
"source": [
|
||
"### Why do we use a simple environment like CartPole-v1?\n",
"As explained in [Reinforcement Learning Tips and Tricks](https://stable-baselines3.readthedocs.io/en/master/guide/rl_tips.html), when you implement your agent from scratch, you need **to be sure that it works correctly and to find bugs with easy environments before going deeper**, since finding bugs will be much easier in simple environments.\n",
|
||
"\n",
|
||
"\n",
|
||
"> Try to have some “sign of life” on toy problems\n",
|
||
"\n",
|
||
"\n",
|
||
"> Validate the implementation by making it run on harder and harder envs (you can compare results against the RL zoo). You usually need to run hyperparameter optimization for that step.\n",
|
||
"___\n",
|
||
"#### The CartPole-v1 environment\n",
|
||
"\n",
|
||
"> A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pendulum is placed upright on the cart and the goal is to balance the pole by applying forces in the left and right direction on the cart.\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
"So, we start with CartPole-v1. The goal is to push the cart left or right **so that the pole stays in equilibrium.**\n",
|
||
"\n",
|
||
"The episode ends if:\n",
|
||
"- The pole Angle is greater than ±12°\n",
|
||
"- Cart Position is greater than ±2.4\n",
|
||
"- Episode length is greater than 500\n",
|
||
"\n",
"We get a reward 💰 of +1 for every timestep the pole stays in equilibrium."
|
||
]
|
||
},
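{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before coding the agent, here is a minimal sketch (illustrative only, not part of the exercise) of how a CartPole episode unfolds with random actions, just to get familiar with the `reset`/`step` API we use below. `demo_env` is a throwaway environment, separate from the `env` we create next:\n",
"\n",
"```python\n",
"import gym\n",
"\n",
"demo_env = gym.make(\"CartPole-v1\")\n",
"state = demo_env.reset()                        # initial observation: 4 floats\n",
"total_reward, done = 0, False\n",
"while not done:\n",
"    action = demo_env.action_space.sample()     # random action: 0 (push left) or 1 (push right)\n",
"    state, reward, done, _ = demo_env.step(action)\n",
"    total_reward += reward                      # +1 for every timestep the pole stays up\n",
"print(\"Return of a random policy:\", total_reward)\n",
"```"
]
},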
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "POOOk15_K6KA"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"env_id = \"CartPole-v1\"\n",
|
||
"# Create the env\n",
|
||
"env = gym.make(env_id)\n",
|
||
"\n",
|
||
"# Create the evaluation env\n",
|
||
"eval_env = gym.make(env_id)\n",
|
||
"\n",
|
||
"# Get the state space and action space\n",
|
||
"s_size = env.observation_space.shape[0]\n",
|
||
"a_size = env.action_space.n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "FMLFrjiBNLYJ"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"print(\"_____OBSERVATION SPACE_____ \\n\")\n",
|
||
"print(\"The State Space is: \", s_size)\n",
|
||
"print(\"Sample observation\", env.observation_space.sample()) # Get a random observation"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Lu6t4sRNNWkN"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"print(\"\\n _____ACTION SPACE_____ \\n\")\n",
|
||
"print(\"The Action Space is: \", a_size)\n",
|
||
"print(\"Action Space Sample\", env.action_space.sample()) # Take a random action"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "7SJMJj3WaFOz"
|
||
},
|
||
"source": [
|
||
"### Step 4: Let's build the Reinforce Architecture\n",
"Our implementation is based on two reference implementations:\n",
|
||
"- [PyTorch official Reinforcement Learning example](https://github.com/pytorch/examples/blob/main/reinforcement_learning/reinforce.py)\n",
|
||
"- [Udacity Reinforce](https://github.com/udacity/deep-reinforcement-learning/blob/master/reinforce/REINFORCE.ipynb)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "_qjopt-_dEjU"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "49kogtxBODX8"
|
||
},
|
||
"source": [
|
||
"So we want:\n",
|
||
"- Two fully connected layers (fc1 and fc2).\n",
"- Use ReLU as the activation function of fc1\n",
"- Use Softmax to output a probability distribution over actions"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "w2LHcHhVZvPZ"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"class Policy(nn.Module):\n",
|
||
" def __init__(self, s_size, a_size, h_size):\n",
|
||
" super(Policy, self).__init__()\n",
|
||
" # Create two fully connected layers\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" # Define the forward pass\n",
|
||
" # state goes to fc1 then we apply ReLU activation function\n",
|
||
"\n",
|
||
" # fc1 outputs goes to fc2\n",
|
||
"\n",
|
||
" # We output the softmax\n",
|
||
" \n",
|
||
" def act(self, state):\n",
|
||
" \"\"\"\n",
|
||
" Given a state, take action\n",
|
||
" \"\"\"\n",
|
||
" state = torch.from_numpy(state).float().unsqueeze(0).to(device)\n",
|
||
" probs = self.forward(state).cpu()\n",
|
||
" m = Categorical(probs)\n",
|
||
" action = np.argmax(m)\n",
|
||
" return action.item(), m.log_prob(action)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "rOMrdwSYOWSC"
|
||
},
|
||
"source": [
|
||
"#### Solution"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "jGdhRSVrOV4K"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"class Policy(nn.Module):\n",
|
||
" def __init__(self, s_size, a_size, h_size):\n",
|
||
" super(Policy, self).__init__()\n",
|
||
" self.fc1 = nn.Linear(s_size, h_size)\n",
|
||
" self.fc2 = nn.Linear(h_size, a_size)\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" x = F.relu(self.fc1(x))\n",
|
||
" x = self.fc2(x)\n",
|
||
" return F.softmax(x, dim=1)\n",
|
||
" \n",
|
||
" def act(self, state):\n",
|
||
" state = torch.from_numpy(state).float().unsqueeze(0).to(device)\n",
|
||
" probs = self.forward(state).cpu()\n",
|
||
" m = Categorical(probs)\n",
|
||
" action = np.argmax(m)\n",
|
||
" return action.item(), m.log_prob(action)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "ZTGWL4g2eM5B"
|
||
},
|
||
"source": [
"I made a mistake. Can you guess where?\n",
|
||
"\n",
|
||
"- To find out let's make a forward pass:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "lwnqGBCNePor"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"debug_policy = Policy(s_size, a_size, 64).to(device)\n",
|
||
"debug_policy.act(env.reset())"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "14UYkoxCPaor"
|
||
},
|
||
"source": [
|
||
"- Here we see that the error says `ValueError: The value argument to log_prob must be a Tensor`\n",
|
||
"\n",
|
||
"- It means that `action` in `m.log_prob(action)` must be a Tensor **but it's not.**\n",
|
||
"\n",
|
||
"- Do you know why? Check the act function and try to see why it does not work. \n",
|
||
"\n",
"Advice 💡: Something is wrong in this implementation. Remember that in the act function **we want to sample an action from the probability distribution over actions**.\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "gfGJNZBUP7Vn"
|
||
},
|
||
"source": [
|
||
"#### Solution\n",
"- Since **we want to sample an action from the probability distribution over actions**, we can't use `action = np.argmax(m)`, since it would always output the action that has the highest probability.\n",
|
||
"\n",
"- We need to replace it with `action = m.sample()`, which will sample an action from the probability distribution P(.|s)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Ho_UHf49N9i4"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"class Policy(nn.Module):\n",
|
||
" def __init__(self, s_size, a_size, h_size):\n",
|
||
" super(Policy, self).__init__()\n",
|
||
" self.fc1 = nn.Linear(s_size, h_size)\n",
|
||
" self.fc2 = nn.Linear(h_size, a_size)\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" x = F.relu(self.fc1(x))\n",
|
||
" x = self.fc2(x)\n",
|
||
" return F.softmax(x, dim=1)\n",
|
||
" \n",
|
||
" def act(self, state):\n",
|
||
" state = torch.from_numpy(state).float().unsqueeze(0).to(device)\n",
|
||
" probs = self.forward(state).cpu()\n",
|
||
" m = Categorical(probs)\n",
|
||
" action = m.sample()\n",
|
||
" return action.item(), m.log_prob(action)"
|
||
]
|
||
},
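{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the difference concretely, here is a small illustrative sketch of how `Categorical` sampling and `log_prob` behave, compared to a greedy argmax (the probability values are just an example):\n",
"\n",
"```python\n",
"import torch\n",
"from torch.distributions import Categorical\n",
"\n",
"probs = torch.tensor([[0.25, 0.75]])   # probability distribution over 2 actions (batch of 1)\n",
"m = Categorical(probs)\n",
"\n",
"action = m.sample()                    # stochastic: a tensor, e.g. tensor([1]) about 75% of the time\n",
"log_prob = m.log_prob(action)          # log probability of the sampled action, e.g. log(0.75)\n",
"\n",
"greedy = torch.argmax(probs, dim=1)    # always picks action 1 here: no exploration\n",
"print(action.item(), log_prob.item(), greedy.item())\n",
"```"
]
},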
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "rgJWQFU_eUYw"
|
||
},
|
||
"source": [
"By using CartPole, it was easier to debug, since **we know that the bug comes from our implementation and not from our simple environment**."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "4MXoqetzfIoW"
|
||
},
|
||
"source": [
|
||
"### Step 5: Build the Reinforce Training Algorithm\n",
"- Contrary to the pseudocode, we update our policy after every episode, **not with a batch of episodes**.\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "BBCqZMvJR57d"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "O554nUGPpcoq"
|
||
},
|
||
"source": [
"Why do we minimize the loss when we talked about Gradient Ascent, not Gradient Descent?\n",
|
||
"\n",
"- We want to maximize our utility function $J(\\theta)$, but in PyTorch, as in TensorFlow, it's better to **minimize an objective function.**\n",
"  - So let's say we want to reinforce action 3 at a certain timestep. Before training, this action's probability P is 0.25.\n",
"  - So we want to modify $\\theta$ such that $\\pi_\\theta(a_3|s; \\theta) > 0.25$\n",
"  - Because all probabilities must sum to 1, maximizing $\\pi_\\theta(a_3|s; \\theta)$ will **minimize the other actions' probabilities.**\n",
"  - So we should tell PyTorch **to minimize $1 - \\pi_\\theta(a_3|s; \\theta)$.**\n",
"  - This loss function approaches 0 as $\\pi_\\theta(a_3|s; \\theta)$ nears 1.\n",
"  - So we are encouraging the gradient to maximize $\\pi_\\theta(a_3|s; \\theta)$"
|
||
]
|
||
},
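{
"cell_type": "markdown",
"metadata": {},
"source": [
"Concretely, for a single episode the loss we minimize in the code below is\n",
"\n",
"$$\\mathcal{L}(\\theta) = -\\sum_{t} \\log \\pi_\\theta(a_t|s_t)\\, R(\\tau)$$\n",
"\n",
"so doing gradient descent on $\\mathcal{L}(\\theta)$ is the same as doing gradient ascent on $\\sum_{t} \\log \\pi_\\theta(a_t|s_t)\\, R(\\tau)$, which pushes up the probability of the actions taken in episodes with a high return."
]
},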
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "iOdv8Q9NfLK7"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"def reinforce(policy, optimizer, n_training_episodes, max_t, gamma, print_every):\n",
|
||
" # Help us to calculate the score during the training\n",
|
||
" scores_deque = deque(maxlen=100)\n",
|
||
" scores = []\n",
|
||
" # Line 3 of pseudocode\n",
|
||
" for i_episode in range(1, n_training_episodes+1):\n",
|
||
" saved_log_probs = []\n",
|
||
" rewards = []\n",
|
||
" state = # TODO: reset the environment\n",
|
||
" # Line 4 of pseudocode\n",
|
||
" for t in range(max_t):\n",
|
||
" action, log_prob = # TODO get the action\n",
|
||
" saved_log_probs.append(log_prob)\n",
|
||
" state, reward, done, _ = # TODO: take an env step\n",
|
||
" rewards.append(reward)\n",
|
||
" if done:\n",
|
||
" break \n",
|
||
" scores_deque.append(sum(rewards))\n",
|
||
" scores.append(sum(rewards))\n",
|
||
" \n",
|
||
" # Line 6 of pseudocode: calculate the return\n",
"        ## Here, we calculate the discounts, for instance [0.99^0, 0.99^1, 0.99^2, ..., 0.99^len(rewards)]\n",
|
||
" discounts = [gamma**i for i in range(len(rewards)+1)]\n",
|
||
" ## We calculate the return by sum(gamma[t] * reward[t]) \n",
|
||
" R = sum([a*b for a,b in zip( , )]) # TODO: what do we need to put in zip() remember that we calculate gamma[t] * reward[t]\n",
|
||
" \n",
|
||
" # Line 7:\n",
|
||
" policy_loss = []\n",
|
||
" for log_prob in saved_log_probs:\n",
|
||
" policy_loss.append(-log_prob * R)\n",
|
||
" policy_loss = torch.cat(policy_loss).sum()\n",
|
||
" \n",
"        # Line 8: PyTorch prefers to minimize, hence the minus sign in the loss above\n",
|
||
" optimizer.zero_grad()\n",
|
||
" policy_loss.backward()\n",
|
||
" optimizer.step()\n",
|
||
" \n",
|
||
" if i_episode % print_every == 0:\n",
|
||
" print('Episode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))\n",
|
||
" \n",
|
||
" return scores"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "YB0Cxrw1StrP"
|
||
},
|
||
"source": [
|
||
"#### Solution"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "NCNvyElRStWG"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"def reinforce(policy, optimizer, n_training_episodes, max_t, gamma, print_every):\n",
|
||
" # Help us to calculate the score during the training\n",
|
||
" scores_deque = deque(maxlen=100)\n",
|
||
" scores = []\n",
|
||
" # Line 3 of pseudocode\n",
|
||
" for i_episode in range(1, n_training_episodes+1):\n",
|
||
" saved_log_probs = []\n",
|
||
" rewards = []\n",
|
||
" state = env.reset()\n",
|
||
" # Line 4 of pseudocode\n",
|
||
" for t in range(max_t):\n",
|
||
" action, log_prob = policy.act(state)\n",
|
||
" saved_log_probs.append(log_prob)\n",
|
||
" state, reward, done, _ = env.step(action)\n",
|
||
" rewards.append(reward)\n",
|
||
" if done:\n",
|
||
" break \n",
|
||
" scores_deque.append(sum(rewards))\n",
|
||
" scores.append(sum(rewards))\n",
|
||
" \n",
|
||
" # Line 6 of pseudocode: calculate the return\n",
"        ## Here, we calculate the discounts, for instance [0.99^0, 0.99^1, 0.99^2, ..., 0.99^len(rewards)]\n",
|
||
" discounts = [gamma**i for i in range(len(rewards)+1)]\n",
|
||
" ## We calculate the return by sum(gamma[t] * reward[t]) \n",
|
||
" R = sum([a*b for a,b in zip(discounts, rewards)])\n",
|
||
" \n",
|
||
" # Line 7:\n",
|
||
" policy_loss = []\n",
|
||
" for log_prob in saved_log_probs:\n",
|
||
" policy_loss.append(-log_prob * R)\n",
|
||
" policy_loss = torch.cat(policy_loss).sum()\n",
|
||
" \n",
|
||
" # Line 8:\n",
|
||
" optimizer.zero_grad()\n",
|
||
" policy_loss.backward()\n",
|
||
" optimizer.step()\n",
|
||
" \n",
|
||
" if i_episode % print_every == 0:\n",
|
||
" print('Episode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))\n",
|
||
" \n",
|
||
" return scores"
|
||
]
|
||
},
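{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the return calculation in the function above, here is a small standalone sketch, independent of the training loop:\n",
"\n",
"```python\n",
"# Illustrative example: a 3-step episode with gamma = 0.99\n",
"gamma = 0.99\n",
"rewards = [1.0, 1.0, 1.0]\n",
"\n",
"discounts = [gamma**i for i in range(len(rewards) + 1)]   # [1.0, 0.99, 0.9801, 0.970299]\n",
"R = sum(a * b for a, b in zip(discounts, rewards))        # zip stops at the shorter list\n",
"print(R)   # 1.0 + 0.99*1.0 + 0.9801*1.0 ≈ 2.9701\n",
"```"
]
},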
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "RIWhQyJjfpEt"
|
||
},
|
||
"source": [
|
||
"### Train it\n",
|
||
"- We're now ready to train our agent.\n",
|
||
"- But first, we define a variable containing all the training hyperparameters."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "utRe1NgtVBYF"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"cartpole_hyperparameters = {\n",
|
||
" \"h_size\": 16,\n",
|
||
" \"n_training_episodes\": 1000,\n",
|
||
" \"n_evaluation_episodes\": 10,\n",
|
||
" \"max_t\": 1000,\n",
|
||
" \"gamma\": 1.0,\n",
|
||
" \"lr\": 1e-2,\n",
|
||
" \"env_id\": env_id,\n",
|
||
" \"state_space\": s_size,\n",
|
||
" \"action_space\": a_size,\n",
|
||
"}"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "D3lWyVXBVfl6"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Create policy and place it to the device\n",
|
||
"cartpole_policy = Policy(cartpole_hyperparameters[\"state_space\"], cartpole_hyperparameters[\"action_space\"], cartpole_hyperparameters[\"h_size\"]).to(device)\n",
|
||
"cartpole_optimizer = optim.Adam(cartpole_policy.parameters(), lr=cartpole_hyperparameters[\"lr\"])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "uGf-hQCnfouB"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"scores = reinforce(cartpole_policy,\n",
|
||
" cartpole_optimizer,\n",
|
||
" cartpole_hyperparameters[\"n_training_episodes\"], \n",
|
||
" cartpole_hyperparameters[\"max_t\"],\n",
|
||
" cartpole_hyperparameters[\"gamma\"], \n",
|
||
" 100)"
|
||
]
|
||
},
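{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can plot the training curve with matplotlib (already imported above). A quick sketch, assuming the training cell above has been run and `scores` is the list it returned:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"window = 50\n",
"rolling = [np.mean(scores[max(0, i - window):i + 1]) for i in range(len(scores))]\n",
"\n",
"plt.plot(scores, alpha=0.3, label=\"episode score\")\n",
"plt.plot(rolling, label=\"rolling mean (50 episodes)\")\n",
"plt.xlabel(\"Episode\")\n",
"plt.ylabel(\"Score\")\n",
"plt.legend()\n",
"plt.show()\n",
"```"
]
},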
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "Qajj2kXqhB3g"
|
||
},
|
||
"source": [
|
||
"### Step 6: Define evaluation method 📝"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "3FamHmxyhBEU"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"def evaluate_agent(env, max_steps, n_eval_episodes, policy):\n",
|
||
" \"\"\"\n",
"  Evaluate the agent for ``n_eval_episodes`` episodes and return the average reward and the standard deviation of the reward.\n",
|
||
" :param env: The evaluation environment\n",
"  :param n_eval_episodes: Number of episodes to evaluate the agent\n",
|
||
" :param policy: The Reinforce agent\n",
|
||
" \"\"\"\n",
|
||
" episode_rewards = []\n",
|
||
" for episode in range(n_eval_episodes):\n",
|
||
" state = env.reset()\n",
|
||
" step = 0\n",
|
||
" done = False\n",
|
||
" total_rewards_ep = 0\n",
|
||
" \n",
|
||
" for step in range(max_steps):\n",
|
||
" action, _ = policy.act(state)\n",
|
||
" new_state, reward, done, info = env.step(action)\n",
|
||
" total_rewards_ep += reward\n",
|
||
" \n",
|
||
" if done:\n",
|
||
" break\n",
|
||
" state = new_state\n",
|
||
" episode_rewards.append(total_rewards_ep)\n",
|
||
" mean_reward = np.mean(episode_rewards)\n",
|
||
" std_reward = np.std(episode_rewards)\n",
|
||
"\n",
|
||
" return mean_reward, std_reward"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "xdH2QCrLTrlT"
|
||
},
|
||
"source": [
|
||
"### Step 7: Evaluate our agent 📈"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "ohGSXDyHh0xx"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"evaluate_agent(eval_env, \n",
|
||
" cartpole_hyperparameters[\"max_t\"], \n",
|
||
" cartpole_hyperparameters[\"n_evaluation_episodes\"],\n",
|
||
" cartpole_policy)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "7CoeLkQ7TpO8"
|
||
},
|
||
"source": [
|
||
"### Step 8: Publish our trained model on the Hub 🔥\n",
"Now that we've seen that we get good results after training, we can publish our trained model on the Hub 🤗 with one line of code.\n",
|
||
"\n",
|
||
"Here's an example of a Model Card:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "0QMzSwgZTz37"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "oSlbwgzxT0Aq"
|
||
},
|
||
"source": [
|
||
"Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "Jmhs1k-cftIq"
|
||
},
|
||
"source": [
|
||
"### Push to the Hub\n",
|
||
"#### Do not modify this code"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "lX1XKF1lf3I5"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"%%capture\n",
|
||
"from huggingface_hub import HfApi, HfFolder, Repository\n",
|
||
"from huggingface_hub.repocard import metadata_eval_result, metadata_save\n",
|
||
"\n",
|
||
"from pathlib import Path\n",
|
||
"import datetime\n",
|
||
"import json\n",
|
||
"\n",
|
||
"import imageio"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Lo4JH45if81z"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"def record_video(env, policy, out_directory, fps=30):\n",
|
||
" images = [] \n",
|
||
" done = False\n",
|
||
" state = env.reset()\n",
|
||
" img = env.render(mode='rgb_array')\n",
|
||
" images.append(img)\n",
|
||
" while not done:\n",
"    # Sample an action from the policy given the current state\n",
|
||
" action, _ = policy.act(state)\n",
|
||
" state, reward, done, info = env.step(action) # We directly put next_state = state for recording logic\n",
|
||
" img = env.render(mode='rgb_array')\n",
|
||
" images.append(img)\n",
|
||
" imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "D1ywQFrkf3t8"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"import os\n",
|
||
"def package_to_hub(repo_id, \n",
|
||
" model,\n",
|
||
" hyperparameters,\n",
|
||
" eval_env,\n",
|
||
" video_fps=30,\n",
|
||
" local_repo_path=\"hub\",\n",
|
||
" commit_message=\"Push Reinforce agent to the Hub\",\n",
|
||
" token= None\n",
|
||
" ):\n",
|
||
" _, repo_name = repo_id.split(\"/\")\n",
|
||
" \n",
|
||
" # Step 1: Clone or create the repo\n",
|
||
" # Create the repo (or clone its content if it's nonempty)\n",
|
||
" api = HfApi()\n",
|
||
" \n",
|
||
" repo_url = api.create_repo(\n",
|
||
" repo_id=repo_id,\n",
|
||
" token=token,\n",
|
||
" private=False,\n",
|
||
" exist_ok=True,)\n",
|
||
" \n",
|
||
" # Git pull\n",
|
||
" repo_local_path = Path(local_repo_path) / repo_name\n",
|
||
" repo = Repository(repo_local_path, clone_from=repo_url, use_auth_token=True)\n",
|
||
" repo.git_pull()\n",
|
||
" \n",
|
||
" repo.lfs_track([\"*.mp4\"])\n",
|
||
"\n",
"  # Step 2: Save the model\n",
|
||
" torch.save(model, os.path.join(repo_local_path,\"model.pt\"))\n",
|
||
"\n",
"  # Step 3: Save the hyperparameters to JSON\n",
|
||
" with open(Path(repo_local_path) / \"hyperparameters.json\", \"w\") as outfile:\n",
|
||
" json.dump(hyperparameters, outfile)\n",
|
||
" \n",
"  # Step 4: Evaluate the model and build JSON\n",
|
||
" mean_reward, std_reward = evaluate_agent(eval_env, \n",
|
||
" hyperparameters[\"max_t\"],\n",
|
||
" hyperparameters[\"n_evaluation_episodes\"], \n",
|
||
" model)\n",
|
||
"\n",
|
||
" # First get datetime\n",
|
||
" eval_datetime = datetime.datetime.now()\n",
|
||
" eval_form_datetime = eval_datetime.isoformat()\n",
|
||
"\n",
|
||
" evaluate_data = {\n",
|
||
" \"env_id\": hyperparameters[\"env_id\"], \n",
|
||
" \"mean_reward\": mean_reward,\n",
|
||
" \"n_evaluation_episodes\": hyperparameters[\"n_evaluation_episodes\"],\n",
|
||
" \"eval_datetime\": eval_form_datetime,\n",
|
||
" }\n",
|
||
" # Write a JSON file\n",
|
||
" with open(Path(repo_local_path) / \"results.json\", \"w\") as outfile:\n",
|
||
" json.dump(evaluate_data, outfile)\n",
|
||
"\n",
"  # Step 5: Create the model card\n",
|
||
" # Env id\n",
|
||
" env_name = hyperparameters[\"env_id\"]\n",
|
||
" \n",
|
||
" metadata = {}\n",
|
||
" metadata[\"tags\"] = [\n",
|
||
" env_name,\n",
|
||
" \"reinforce\",\n",
|
||
" \"reinforcement-learning\",\n",
|
||
" \"custom-implementation\",\n",
|
||
" \"deep-rl-class\"\n",
|
||
" ]\n",
|
||
"\n",
|
||
" # Add metrics\n",
|
||
" eval = metadata_eval_result(\n",
|
||
" model_pretty_name=repo_name,\n",
|
||
" task_pretty_name=\"reinforcement-learning\",\n",
|
||
" task_id=\"reinforcement-learning\",\n",
|
||
" metrics_pretty_name=\"mean_reward\",\n",
|
||
" metrics_id=\"mean_reward\",\n",
|
||
" metrics_value=f\"{mean_reward:.2f} +/- {std_reward:.2f}\",\n",
|
||
" dataset_pretty_name=env_name,\n",
|
||
" dataset_id=env_name,\n",
|
||
" )\n",
|
||
"\n",
|
||
" # Merges both dictionaries\n",
|
||
" metadata = {**metadata, **eval}\n",
|
||
"\n",
|
||
" model_card = f\"\"\"\n",
|
||
" # **Reinforce** Agent playing **{env_id}**\n",
|
||
" This is a trained model of a **Reinforce** agent playing **{env_id}** .\n",
|
||
" To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5\n",
|
||
" \"\"\"\n",
|
||
"\n",
|
||
" readme_path = repo_local_path / \"README.md\"\n",
|
||
" readme = \"\"\n",
|
||
" if readme_path.exists():\n",
|
||
" with readme_path.open(\"r\", encoding=\"utf8\") as f:\n",
|
||
" readme = f.read()\n",
|
||
" else:\n",
|
||
" readme = model_card\n",
|
||
"\n",
|
||
" with readme_path.open(\"w\", encoding=\"utf-8\") as f:\n",
|
||
" f.write(readme)\n",
|
||
"\n",
|
||
" # Save our metrics to Readme metadata\n",
|
||
" metadata_save(readme_path, metadata)\n",
|
||
"\n",
"  # Step 6: Record a video\n",
|
||
" video_path = repo_local_path / \"replay.mp4\"\n",
|
||
" record_video(env, model, video_path, video_fps)\n",
|
||
" \n",
"  # Step 7: Push everything to the Hub\n",
|
||
" print(f\"Pushing repo {repo_name} to the Hugging Face Hub\")\n",
|
||
" repo.push_to_hub(commit_message=commit_message)\n",
|
||
"\n",
|
||
" print(f\"Your model is pushed to the hub. You can view your model here: {repo_url}\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "w17w8CxzoURM"
|
||
},
|
||
"source": [
"### Using `package_to_hub`\n",
|
||
"By using `package_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.\n",
|
||
"\n",
|
||
"This way:\n",
"- You can **showcase your work** 🔥\n",
|
||
"- You can **visualize your agent playing** 👀\n",
|
||
"- You can **share with the community an agent that others can use** 💾\n",
|
||
"- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "aS7LKTyJoUx_"
|
||
},
|
||
"source": [
"To be able to share your model with the community, there are three more steps to follow:\n",
|
||
"\n",
"1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join\n",
|
||
"\n",
"2️⃣ Sign in, then store your authentication token from the Hugging Face website.\n",
"- Create a new token (https://huggingface.co/settings/tokens) **with the write role**"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "4W86z1KKoW36"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "CxKotVK2oZ3C"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"from huggingface_hub import notebook_login\n",
|
||
"notebook_login()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "_TjHokOPobkg"
|
||
},
|
||
"source": [
|
||
"If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "F-D-zhbRoeOm"
|
||
},
|
||
"source": [
"3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥 using the `package_to_hub()` function"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "UNwkTS65Uq3Q"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"repo_id = \"\" #TODO Define your repo id {username/Reinforce-{model-id}}\n",
|
||
"package_to_hub(repo_id,\n",
|
||
" cartpole_policy, # The model we want to save\n",
|
||
" cartpole_hyperparameters, # Hyperparameters\n",
|
||
" eval_env, # Evaluation environment\n",
|
||
" video_fps=30,\n",
|
||
" local_repo_path=\"hub\",\n",
|
||
" commit_message=\"Push Reinforce agent to the Hub\",\n",
|
||
" )"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "jrnuKH1gYZSz"
|
||
},
|
||
"source": [
"Now that we've tested the robustness of our implementation, let's try more complex environments such as:\n",
|
||
"- Pixelcopter\n",
|
||
"- Pong\n",
|
||
"\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "NNWvlyvzalXr"
|
||
},
|
||
"source": [
|
||
"## Second agent: PixelCopter 🚁\n",
|
||
"\n",
|
||
"### Step 1: Study the PixelCopter environment 👀\n",
|
||
"- [The Environment documentation](https://pygame-learning-environment.readthedocs.io/en/latest/user/games/pixelcopter.html)\n",
|
||
"\n",
|
||
"The observation space (7) 👀:\n",
|
||
"- player y position\n",
|
||
"- player velocity\n",
|
||
"- player distance to floor\n",
|
||
"- player distance to ceiling\n",
|
||
"- next block x distance to player\n",
"- next block's top y location\n",
"- next block's bottom y location\n",
|
||
"\n",
"The action space (2) 🎮:\n",
|
||
"- Up\n",
|
||
"- Down\n",
|
||
"\n",
|
||
"The reward function 💰: \n",
"- For each vertical block it passes through, it gains a positive reward of +1. Each time a terminal state is reached, it receives a negative reward of -1."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "JBSc8mlfyin3"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"env_id = \"Pixelcopter-PLE-v0\"\n",
|
||
"env = gym.make(env_id)\n",
|
||
"eval_env = gym.make(env_id)\n",
|
||
"s_size = env.observation_space.shape[0]\n",
|
||
"a_size = env.action_space.n"
|
||
]
|
||
},
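{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we did for CartPole, you can quickly check that the spaces match the description above (a small sketch using the `env` created in the previous cell; the exact printout depends on the gym-games version):\n",
"\n",
"```python\n",
"print(\"Observation space:\", env.observation_space)   # expected: a Box with 7 dimensions\n",
"print(\"Action space:\", env.action_space)             # expected: Discrete(2)\n",
"print(\"Sample observation:\", env.observation_space.sample())\n",
"```"
]
},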
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "SM1QiGCSbBkM"
|
||
},
|
||
"source": [
|
||
"### Step 2: Define the hyperparameters ⚙️\n",
"- Because this environment is more complex, we need to change the hyperparameters.\n",
"- In particular, we increase the hidden size: we need more neurons."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "y0uujOR_ypB6"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"pixelcopter_hyperparameters = {\n",
|
||
" \"h_size\": 64,\n",
|
||
" \"n_training_episodes\": 50000,\n",
|
||
" \"n_evaluation_episodes\": 10,\n",
|
||
" \"max_t\": 10000,\n",
|
||
" \"gamma\": 0.99,\n",
|
||
" \"lr\": 1e-4,\n",
|
||
" \"env_id\": env_id,\n",
|
||
" \"state_space\": s_size,\n",
|
||
" \"action_space\": a_size,\n",
|
||
"}"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "I1eBkCiX2X_S"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"class Policy(nn.Module):\n",
|
||
" def __init__(self, s_size, a_size, h_size):\n",
|
||
" super(Policy, self).__init__()\n",
|
||
" self.fc1 = nn.Linear(s_size, h_size)\n",
|
||
" self.fc2 = nn.Linear(h_size, a_size)\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" x = F.relu(self.fc1(x))\n",
|
||
" x = self.fc2(x)\n",
|
||
" return F.softmax(x, dim=1)\n",
|
||
" \n",
|
||
" def act(self, state):\n",
|
||
" state = torch.from_numpy(state).float().unsqueeze(0).to(device)\n",
|
||
" probs = self.forward(state).cpu()\n",
|
||
" m = Categorical(probs)\n",
|
||
" action = m.sample()\n",
|
||
" return action.item(), m.log_prob(action)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "7mM2P_ckysFE"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Create policy and place it to the device\n",
|
||
"# torch.manual_seed(50)\n",
|
||
"pixelcopter_policy = Policy(pixelcopter_hyperparameters[\"state_space\"], pixelcopter_hyperparameters[\"action_space\"], pixelcopter_hyperparameters[\"h_size\"]).to(device)\n",
|
||
"pixelcopter_optimizer = optim.Adam(pixelcopter_policy.parameters(), lr=pixelcopter_hyperparameters[\"lr\"])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "v1HEqP-fy-Rf"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"scores = reinforce(pixelcopter_policy,\n",
|
||
" pixelcopter_optimizer,\n",
|
||
" pixelcopter_hyperparameters[\"n_training_episodes\"], \n",
|
||
" pixelcopter_hyperparameters[\"max_t\"],\n",
|
||
" pixelcopter_hyperparameters[\"gamma\"], \n",
|
||
" 1000)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "PEl5qZHI1Mnv"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"repo_id = \"\" #TODO Define your repo id {username/Reinforce-{model-id}}\n",
|
||
"package_to_hub(repo_id, \n",
|
||
" pixelcopter_policy,\n",
|
||
" pixelcopter_hyperparameters,\n",
|
||
" eval_env,\n",
|
||
" video_fps=30,\n",
|
||
" local_repo_path=\"hub\",\n",
|
||
" commit_message=\"Push Reinforce agent to the Hub\",\n",
|
||
" )"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "tPQTluwSUdq8"
|
||
},
|
||
"source": [
|
||
"## Third agent: Pong 🎾\n",
|
||
"\n",
|
||
"### Step 1: Study the Pong environment 👀\n",
|
||
"- [The Environment documentation](https://pygame-learning-environment.readthedocs.io/en/latest/user/games/pong.html)\n",
|
||
"\n",
|
||
"The observation space (7) 👀:\n",
|
||
"- player y position\n",
"- player's velocity\n",
|
||
"- cpu y position\n",
|
||
"- ball x position\n",
|
||
"- ball y position\n",
|
||
"- ball x velocity\n",
|
||
"- ball y velocity\n",
|
||
"\n",
"The action space (3) 🎮:\n",
|
||
"- Paddle Up \n",
|
||
"- Paddle Down \n",
|
||
"- No movement\n",
|
||
"\n",
|
||
"The reward function 💰: \n",
"- The agent receives a positive reward of +1 for each ball placed behind the opponent's paddle, while it loses a point (-1) if the ball goes behind its own paddle."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "iqD7TAfzwcQ3"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"env_id = \"Pong-PLE-v0\"\n",
|
||
"env = gym.make(env_id)\n",
|
||
"eval_env = gym.make(env_id)\n",
|
||
"s_size = env.observation_space.shape[0]\n",
|
||
"a_size = env.action_space.n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "AR8ZALoWZkcp"
|
||
},
|
||
"source": [
|
||
"### Step 2: Define the hyperparameters ⚙️ and improve the Policy\n",
"- Because this environment is more complex, we need to change the hyperparameters and create a deeper neural network.\n",
"- In particular, we increase the hidden size: we need more neurons."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "5Hdp7_OubOGA"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"class Policy(nn.Module):\n",
|
||
" def __init__(self, s_size, a_size, h_size):\n",
|
||
" super(Policy, self).__init__()\n",
|
||
" self.fc1 = nn.Linear(s_size, h_size)\n",
|
||
" self.fc2 = nn.Linear(h_size, h_size*2)\n",
|
||
" self.fc3 = nn.Linear(h_size*2, a_size)\n",
|
||
"\n",
|
||
" def forward(self, x):\n",
|
||
" x = F.relu(self.fc1(x))\n",
|
||
" x = F.relu(self.fc2(x))\n",
|
||
" x = self.fc3(x)\n",
|
||
" return F.softmax(x, dim=1)\n",
|
||
" \n",
|
||
" def act(self, state):\n",
|
||
" state = torch.from_numpy(state).float().unsqueeze(0).to(device)\n",
|
||
" probs = self.forward(state).cpu()\n",
|
||
" m = Categorical(probs)\n",
|
||
" action = m.sample()\n",
|
||
" return action.item(), m.log_prob(action)\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "lYFe9l4Jw9r_"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"pong_hyperparameters = {\n",
|
||
" \"h_size\": 64,\n",
|
||
" \"n_training_episodes\": 20000,\n",
|
||
" \"n_evaluation_episodes\": 10,\n",
|
||
" \"max_t\": 5000,\n",
|
||
" \"gamma\": 0.99,\n",
|
||
" \"lr\": 1e-2,\n",
|
||
" \"env_id\": env_id,\n",
|
||
" \"state_space\": s_size,\n",
|
||
" \"action_space\": a_size,\n",
|
||
"}"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "NuLpSVDMaMny"
|
||
},
|
||
"source": [
|
||
"### Step 3: Train the agent 🏃"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Bv2BMsB-xQFJ"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Create policy and place it to the device\n",
|
||
"# torch.manual_seed(50)\n",
|
||
"pong_policy = Policy(pong_hyperparameters[\"state_space\"], pong_hyperparameters[\"action_space\"], pong_hyperparameters[\"h_size\"]).to(device)\n",
|
||
"pong_optimizer = optim.Adam(pong_policy.parameters(), lr=pong_hyperparameters[\"lr\"])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Y_HiGtL6xe8s"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"scores = reinforce(pong_policy,\n",
|
||
" pong_optimizer,\n",
|
||
" pong_hyperparameters[\"n_training_episodes\"], \n",
|
||
" pong_hyperparameters[\"max_t\"],\n",
|
||
" pong_hyperparameters[\"gamma\"], \n",
|
||
" 1000)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "W0v-C0JLaLKi"
|
||
},
|
||
"source": [
|
||
"### Step 4: Publish our trained model on the Hub 🔥"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "yA6UtSS0aV3u"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"repo_id = \"\" #TODO Define your repo id {username/Reinforce-{model-id}}\n",
|
||
"package_to_hub(repo_id,\n",
|
||
" pong_policy, # The model we want to save\n",
|
||
" pong_hyperparameters, # Hyperparameters\n",
|
||
" eval_env, # Evaluation environment\n",
|
||
" video_fps=30,\n",
|
||
" local_repo_path=\"hub\",\n",
|
||
" commit_message=\"Push Reinforce agent to the Hub\",\n",
|
||
" )"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "7VDcJ29FcOyb"
|
||
},
|
||
"source": [
|
||
"## Some additional challenges 🏆\n",
"The best way to learn **is to try things on your own**! As you saw, the current agent is not doing great. As a first suggestion, you can train for more steps, but also try to find better hyperparameters.\n",
|
||
"\n",
|
||
"In the [Leaderboard](https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?\n",
|
||
"\n",
"Here are some ideas to get there:\n",
|
||
"* Train more steps\n",
|
||
"* Try different hyperparameters by looking at what your classmates have done 👉 https://huggingface.co/models?other=reinforce\n",
|
||
"* **Push your new trained model** on the Hub 🔥\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "x62pP0PHdA-y"
|
||
},
|
||
"source": [
|
||
"________________________________________________________________________\n",
|
||
"Congrats on finishing this chapter! That was the biggest one, **and there was a lot of information.**\n",
|
||
"\n",
"If you still feel confused by all these elements... it's totally normal! **It was the same for me and for everyone who studied RL.**\n",
|
||
"\n",
"Take time to really **grasp the material before continuing and try the additional challenges**. It’s important to master these elements and have a solid foundation.\n",
|
||
"\n",
"Naturally, during the course, we’re going to use these terms again and explain them in more depth, but **it’s better to have a good understanding of them now before diving into the next chapters.**\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "NbJVOX3QdCLg"
|
||
},
|
||
"source": [
|
||
"### This is a course built with you 👷🏿♀️\n",
|
||
"\n",
|
||
"We want to improve and update the course iteratively with your feedback. If you have some, please open an issue on the Github Repo: [https://github.com/huggingface/deep-rl-class/issues](https://github.com/huggingface/deep-rl-class/issues)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "-fI1AGPSdGzo"
|
||
},
|
||
"source": [
|
||
"See you on Unit 6! 🔥\n",
|
||
"## Keep learning, stay awesome!"
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"accelerator": "GPU",
|
||
"colab": {
|
||
"authorship_tag": "ABX9TyPoxzAc/iqqWPY8sm+LcHBH",
|
||
"collapsed_sections": [],
|
||
"include_colab_link": true,
|
||
"name": "unit5",
|
||
"private_outputs": true,
|
||
"provenance": []
|
||
},
|
||
"gpuClass": "standard",
|
||
"kernelspec": {
|
||
"display_name": "Python 3",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"name": "python"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 0
|
||
}
|