mirror of
https://github.com/huggingface/deep-rl-class.git
synced 2026-02-13 07:05:04 +08:00
{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"colab_type": "text",
|
||
"id": "view-in-github"
|
||
},
|
||
"source": [
|
||
"<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit8/unit8.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "-cf5-oDPjwf8"
|
||
},
|
||
"source": [
|
||
"# Unit 8: Proximal Policy Gradient (PPO) with PyTorch 🤖\n",
|
||
"\n",
|
||
"In this unit, you'll learn to **code your PPO agent from scratch with PyTorch**.\n",
|
||
"\n",
|
||
"To test its robustness, we're going to train it in 2 different classical environments:\n",
|
||
"\n",
|
||
"- [Cartpole-v1](https://www.gymlibrary.dev/environments/classic_control/cart_pole/?highlight=cartpole)\n",
|
||
"- [LunarLander-v2 🚀](https://www.gymlibrary.dev/environments/box2d/lunar_lander/)\n",
|
||
"\n",
|
||
"We finish this foundations part of the course by **understanding how PPO works deeply.** In the first Unit, you learned to train a PPO agent on LunarLander-v2. And now, with Unit 8, you can code it from scratch. **How incredible is that 🤩.**\n",
|
||
"\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "2Fl6Rxt0lc0O"
|
||
},
|
||
"source": [
|
||
"⬇️ Here is an example of what you will achieve. ⬇️"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "PYv9Br6slgR-"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"%%html\n",
|
||
"<video controls autoplay><source src=\"https://huggingface.co/sb3/ppo-CartPole-v1/resolve/main/replay.mp4\" type=\"video/mp4\"></video>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "DbKfCj5ilgqT"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"%%html\n",
|
||
"<video controls autoplay><source src=\"https://huggingface.co/sb3/ppo-LunarLander-v2/resolve/main/replay.mp4\" type=\"video/mp4\"></video>"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "YcOFdWpnlxNf"
|
||
},
|
||
"source": [
|
||
"💡 We advise you to use Google Colab since some environments work only with Ubuntu. The free version of Google Colab is perfect for this tutorial. Let's get started 🚀"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "gkLf2KjRl8cm"
|
||
},
|
||
"source": [
|
||
"## This notebook is from Deep Reinforcement Learning Class\n",
|
||
"\n",
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "kRZVF-uOl8_n"
|
||
},
|
||
"source": [
|
||
"In this free course, you will:\n",
|
||
"\n",
|
||
"- 📖 Study Deep Reinforcement Learning in **theory and practice**.\n",
|
||
"- 🧑💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, and RLlib.\n",
|
||
"- 🤖 Train **agents in unique environments** \n",
|
||
"\n",
|
||
"And more check 📚 the syllabus 👉 https://github.com/huggingface/deep-rl-class\n",
|
||
"\n",
|
||
"The best way to keep in touch is to join our discord server to exchange with the community and with us 👉🏻 https://discord.gg/aYka4Yhff9"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "AMuTi9xgl_tS"
|
||
},
|
||
"source": [
|
||
"## Prerequisites 🏗️\n",
|
||
"Before diving into the notebook, you need to:\n",
|
||
"\n",
|
||
"🔲 📚 [Read the Unit 8 Readme](https://github.com/huggingface/deep-rl-class/blob/main/unit8/README.md) contains all the information.\n",
|
||
"\n",
|
||
"🔲 📚 **Study Proximal Policy Optimization (PPO)** by reading the chapter 👉 https://huggingface.co/blog/deep-rl-ppo"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "dQKcncVkmFtE"
|
||
},
|
||
"source": [
|
||
"### Step 0: Set the GPU 💪\n",
|
||
"- To **faster the agent's training, we'll use a GPU** to do that go to `Runtime > Change Runtime type`\n",
|
||
"\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "7Ov87REzmHkU"
|
||
},
|
||
"source": [
|
||
"- `Hardware Accelerator > GPU`"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "tjcQVcC-mK3p"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "ncIgfNf3mOtc"
|
||
},
|
||
"source": [
|
||
"### Step 1: Install dependencies 🔽 and virtual screen 💻\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "ixxlRGmhM6gY"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"!pip install imageio-ffmpeg\n",
|
||
"!pip install huggingface_hub\n",
|
||
"!pip install gym[box2d]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "sQ2_tjvxM6mP"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"!apt install python-opengl\n",
|
||
"!apt install ffmpeg\n",
|
||
"!apt install xvfb\n",
|
||
"!pip3 install pyvirtualdisplay\n",
|
||
"\n",
|
||
"# Virtual display\n",
|
||
"from pyvirtualdisplay import Display\n",
|
||
"\n",
|
||
"virtual_display = Display(visible=0, size=(500, 500))\n",
|
||
"virtual_display.start()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "oDkUufewmq6v"
|
||
},
|
||
"source": [
|
||
"### Step 2: Let's code PPO from scratch with Costa Huang tutorial\n",
|
||
"- For the core implementation of PPO we're going to use the excellent [Costa Huang](https://costa.sh/) tutorial.\n",
|
||
"- In addition to the tutorial, to go deeper you can read the 13 core implementation details: https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/\n",
|
||
"\n",
|
||
"👉 The video tutorial: https://youtu.be/MEt6rrxH8W4"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "aNgEL1_uvhaq"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"from IPython.display import HTML\n",
|
||
"\n",
|
||
"HTML('<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/MEt6rrxH8W4\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>')"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "f34ILn7AvTbt"
|
||
},
|
||
"source": [
|
||
"- The best is to code first on the cell below, this way, if you kill the machine **you don't loose the implementation**."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "_bE708C6mhE7"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"### Your code here:"
|
||
]
|
||
},
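{
"cell_type": "markdown",
"metadata": {},
"source": [
"- To help you get started, here is a minimal, hedged sketch of the clipped surrogate objective at the heart of PPO. The tensor names (`logprobs_new`, `logprobs_old`, `advantages`) and the example values are illustrative assumptions, not the official solution: the full implementation is the one you build by following the tutorial above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"def ppo_clipped_policy_loss(logprobs_new, logprobs_old, advantages, clip_coef=0.2):\n",
"    # Probability ratio between the current policy and the policy that collected the data\n",
"    ratio = (logprobs_new - logprobs_old).exp()\n",
"    # Unclipped and clipped surrogate objectives (negated because we minimize the loss)\n",
"    pg_loss1 = -advantages * ratio\n",
"    pg_loss2 = -advantages * torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef)\n",
"    # PPO takes the pessimistic (element-wise maximum) of the two losses\n",
"    return torch.max(pg_loss1, pg_loss2).mean()\n",
"\n",
"# Tiny check with dummy tensors\n",
"print(ppo_clipped_policy_loss(torch.randn(8), torch.randn(8), torch.randn(8)))"
]
},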
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "mk-a9CmNuS2W"
|
||
},
|
||
"source": [
|
||
"### Step 3: Add the Hugging Face Integration 🤗\n",
|
||
"- In order to push our model to the Hub, we need to define a function `package_to_hub`"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "TPi1Nme-oGWd"
|
||
},
|
||
"source": [
|
||
"- Add dependencies we need to push our model to the Hub"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Sj8bz-AmoNVj"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"from huggingface_hub import HfApi, upload_folder\n",
|
||
"from huggingface_hub.repocard import metadata_eval_result, metadata_save\n",
|
||
"\n",
|
||
"from pathlib import Path\n",
|
||
"import datetime\n",
|
||
"import tempfile\n",
|
||
"import json\n",
|
||
"import shutil\n",
|
||
"import imageio\n",
|
||
"\n",
|
||
"from wasabi import Printer\n",
|
||
"msg = Printer()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "5rDr8-lWn0zi"
|
||
},
|
||
"source": [
|
||
"- Add new argument in `parse_args()` function to define the repo-id where we want to push the model."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "iHQiqQEFn0QH"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Adding HuggingFace argument\n",
|
||
"parser.add_argument(\"--repo-id\", type=str, default=\"ThomasSimonini/ppo-CartPole-v1\", help=\"id of the model repository from the Hugging Face Hub {username/repo_name}\")"
|
||
]
|
||
},
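{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Below is a quick, self-contained sketch (illustrative only: it parses an empty argument list instead of the real command line) showing how the new flag behaves once added to an `argparse` parser."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import argparse\n",
"\n",
"parser = argparse.ArgumentParser()\n",
"parser.add_argument(\"--repo-id\", type=str, default=\"ThomasSimonini/ppo-CartPole-v1\",\n",
"    help=\"id of the model repository from the Hugging Face Hub {username/repo_name}\")\n",
"\n",
"# Parse an empty list so the default is used (the real script parses sys.argv instead)\n",
"args = parser.parse_args([])\n",
"print(args.repo_id)  # -> ThomasSimonini/ppo-CartPole-v1"
]
},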
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "blLZMiBAoUVT"
|
||
},
|
||
"source": [
|
||
"- Next, we add the methods needed to push the model to the Hub\n",
|
||
"\n",
|
||
"- These methods will:\n",
|
||
" - `_evalutate_agent()`: evaluate the agent.\n",
|
||
" - `_generate_model_card()`: generate the model card of your agent.\n",
|
||
" - `_record_video()`: record a video of your agent."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "WlLcz4L9odXs"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"def package_to_hub(repo_id, \n",
|
||
" model,\n",
|
||
" hyperparameters,\n",
|
||
" eval_env,\n",
|
||
" video_fps=30,\n",
|
||
" commit_message=\"Push Reinforce agent to the Hub\",\n",
|
||
" token= None,\n",
|
||
" logs=None\n",
|
||
" ):\n",
|
||
" \"\"\"\n",
|
||
" Evaluate, Generate a video and Upload a model to Hugging Face Hub.\n",
|
||
" This method does the complete pipeline:\n",
|
||
" - It evaluates the model\n",
|
||
" - It generates the model card\n",
|
||
" - It generates a replay video of the agent\n",
|
||
" - It pushes everything to the hub\n",
|
||
" :param repo_id: id of the model repository from the Hugging Face Hub\n",
|
||
" :param model: trained model\n",
|
||
" :param eval_env: environment used to evaluate the agent\n",
|
||
" :param fps: number of fps for rendering the video\n",
|
||
" :param commit_message: commit message\n",
|
||
" :param logs: directory on local machine of tensorboard logs you'd like to upload\n",
|
||
" \"\"\"\n",
|
||
" msg.info(\n",
|
||
" \"This function will save, evaluate, generate a video of your agent, \"\n",
|
||
" \"create a model card and push everything to the hub. \"\n",
|
||
" \"It might take up to 1min. \\n \"\n",
|
||
" \"This is a work in progress: if you encounter a bug, please open an issue.\"\n",
|
||
" )\n",
|
||
" # Step 1: Clone or create the repo\n",
|
||
" repo_url = HfApi().create_repo(\n",
|
||
" repo_id=repo_id,\n",
|
||
" token=token,\n",
|
||
" private=False,\n",
|
||
" exist_ok=True,\n",
|
||
" )\n",
|
||
" \n",
|
||
" with tempfile.TemporaryDirectory() as tmpdirname:\n",
|
||
" tmpdirname = Path(tmpdirname)\n",
|
||
"\n",
|
||
" # Step 2: Save the model\n",
|
||
" torch.save(model.state_dict(), tmpdirname / \"model.pt\")\n",
|
||
" \n",
|
||
" # Step 3: Evaluate the model and build JSON\n",
|
||
" mean_reward, std_reward = _evaluate_agent(eval_env, \n",
|
||
" 10, \n",
|
||
" model)\n",
|
||
"\n",
|
||
" # First get datetime\n",
|
||
" eval_datetime = datetime.datetime.now()\n",
|
||
" eval_form_datetime = eval_datetime.isoformat()\n",
|
||
"\n",
|
||
" evaluate_data = {\n",
|
||
" \"env_id\": hyperparameters.env_id, \n",
|
||
" \"mean_reward\": mean_reward,\n",
|
||
" \"std_reward\": std_reward,\n",
|
||
" \"n_evaluation_episodes\": 10,\n",
|
||
" \"eval_datetime\": eval_form_datetime,\n",
|
||
" }\n",
|
||
" \n",
|
||
" # Write a JSON file\n",
|
||
" with open(tmpdirname / \"results.json\", \"w\") as outfile:\n",
|
||
" json.dump(evaluate_data, outfile)\n",
|
||
"\n",
|
||
" # Step 4: Generate a video\n",
|
||
" video_path = tmpdirname / \"replay.mp4\"\n",
|
||
" record_video(eval_env, model, video_path, video_fps)\n",
|
||
" \n",
|
||
" # Step 5: Generate the model card\n",
|
||
" generated_model_card, metadata = _generate_model_card(\"PPO\", hyperparameters.env_id, mean_reward, std_reward, hyperparameters)\n",
|
||
" _save_model_card(tmpdirname, generated_model_card, metadata)\n",
|
||
"\n",
|
||
" # Step 6: Add logs if needed\n",
|
||
" if logs:\n",
|
||
" _add_logdir(tmpdirname, Path(logs))\n",
|
||
" \n",
|
||
" msg.info(f\"Pushing repo {repo_id} to the Hugging Face Hub\")\n",
|
||
" \n",
|
||
" repo_url = upload_folder(\n",
|
||
" repo_id=repo_id,\n",
|
||
" folder_path=tmpdirname,\n",
|
||
" path_in_repo=\"\",\n",
|
||
" commit_message=commit_message,\n",
|
||
" token=token,\n",
|
||
" )\n",
|
||
"\n",
|
||
" msg.info(f\"Your model is pushed to the Hub. You can view your model here: {repo_url}\")\n",
|
||
" return repo_url\n",
|
||
"\n",
|
||
"def _evaluate_agent(env, n_eval_episodes, policy):\n",
|
||
" \"\"\"\n",
|
||
" Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward.\n",
|
||
" :param env: The evaluation environment\n",
|
||
" :param n_eval_episodes: Number of episode to evaluate the agent\n",
|
||
" :param policy: The Reinforce agent\n",
|
||
" \"\"\"\n",
|
||
" episode_rewards = []\n",
|
||
" for episode in range(n_eval_episodes):\n",
|
||
" state = env.reset()\n",
|
||
" step = 0\n",
|
||
" done = False\n",
|
||
" total_rewards_ep = 0\n",
|
||
" \n",
|
||
" while done is False:\n",
|
||
" state = torch.Tensor(state)\n",
|
||
" action, _, _, _ = policy.get_action_and_value(state)\n",
|
||
" new_state, reward, done, info = env.step(action.numpy())\n",
|
||
" total_rewards_ep += reward \n",
|
||
" if done:\n",
|
||
" break\n",
|
||
" state = new_state\n",
|
||
" episode_rewards.append(total_rewards_ep)\n",
|
||
" mean_reward = np.mean(episode_rewards)\n",
|
||
" std_reward = np.std(episode_rewards)\n",
|
||
"\n",
|
||
" return mean_reward, std_reward\n",
|
||
"\n",
|
||
"def record_video(env, policy, out_directory, fps=30):\n",
|
||
" images = [] \n",
|
||
" done = False\n",
|
||
" state = env.reset()\n",
|
||
" img = env.render(mode='rgb_array')\n",
|
||
" images.append(img)\n",
|
||
" while not done:\n",
|
||
" state = torch.Tensor(state)\n",
|
||
" # Take the action (index) that have the maximum expected future reward given that state\n",
|
||
" action, _, _, _ = policy.get_action_and_value(state)\n",
|
||
" state, reward, done, info = env.step(action.numpy()) # We directly put next_state = state for recording logic\n",
|
||
" img = env.render(mode='rgb_array')\n",
|
||
" images.append(img)\n",
|
||
" imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)\n",
|
||
"\n",
|
||
"def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):\n",
|
||
" \"\"\"\n",
|
||
" Generate the model card for the Hub\n",
|
||
" :param model_name: name of the model\n",
|
||
" :env_id: name of the environment\n",
|
||
" :mean_reward: mean reward of the agent\n",
|
||
" :std_reward: standard deviation of the mean reward of the agent\n",
|
||
" :hyperparameters: training arguments\n",
|
||
" \"\"\"\n",
|
||
" # Step 1: Select the tags\n",
|
||
" metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)\n",
|
||
"\n",
|
||
" # Transform the hyperparams namespace to string\n",
|
||
" converted_dict = vars(hyperparameters)\n",
|
||
" converted_str = str(converted_dict)\n",
|
||
" converted_str = converted_str.split(\", \")\n",
|
||
" converted_str = '\\n'.join(converted_str)\n",
|
||
" \n",
|
||
" # Step 2: Generate the model card\n",
|
||
" model_card = f\"\"\"\n",
|
||
" # PPO Agent Playing {env_id}\n",
|
||
"\n",
|
||
" This is a trained model of a PPO agent playing {env_id}.\n",
|
||
" To learn to code your own PPO agent and train it Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8\n",
|
||
" \n",
|
||
" # Hyperparameters\n",
|
||
" ```python\n",
|
||
" {converted_str}\n",
|
||
" ```\n",
|
||
" \"\"\"\n",
|
||
" return model_card, metadata\n",
|
||
"\n",
|
||
"def generate_metadata(model_name, env_id, mean_reward, std_reward):\n",
|
||
" \"\"\"\n",
|
||
" Define the tags for the model card\n",
|
||
" :param model_name: name of the model\n",
|
||
" :param env_id: name of the environment\n",
|
||
" :mean_reward: mean reward of the agent\n",
|
||
" :std_reward: standard deviation of the mean reward of the agent\n",
|
||
" \"\"\"\n",
|
||
" metadata = {}\n",
|
||
" metadata[\"tags\"] = [\n",
|
||
" env_id,\n",
|
||
" \"ppo\",\n",
|
||
" \"deep-reinforcement-learning\",\n",
|
||
" \"reinforcement-learning\",\n",
|
||
" \"custom-implementation\",\n",
|
||
" \"deep-rl-class\"\n",
|
||
" ]\n",
|
||
"\n",
|
||
" # Add metrics\n",
|
||
" eval = metadata_eval_result(\n",
|
||
" model_pretty_name=model_name,\n",
|
||
" task_pretty_name=\"reinforcement-learning\",\n",
|
||
" task_id=\"reinforcement-learning\",\n",
|
||
" metrics_pretty_name=\"mean_reward\",\n",
|
||
" metrics_id=\"mean_reward\",\n",
|
||
" metrics_value=f\"{mean_reward:.2f} +/- {std_reward:.2f}\",\n",
|
||
" dataset_pretty_name=env_id,\n",
|
||
" dataset_id=env_id,\n",
|
||
" )\n",
|
||
"\n",
|
||
" # Merges both dictionaries\n",
|
||
" metadata = {**metadata, **eval}\n",
|
||
"\n",
|
||
" return metadata\n",
|
||
"\n",
|
||
"def _save_model_card(local_path, generated_model_card, metadata):\n",
|
||
" \"\"\"Saves a model card for the repository.\n",
|
||
" :param local_path: repository directory\n",
|
||
" :param generated_model_card: model card generated by _generate_model_card()\n",
|
||
" :param metadata: metadata\n",
|
||
" \"\"\"\n",
|
||
" readme_path = local_path / \"README.md\"\n",
|
||
" readme = \"\"\n",
|
||
" if readme_path.exists():\n",
|
||
" with readme_path.open(\"r\", encoding=\"utf8\") as f:\n",
|
||
" readme = f.read()\n",
|
||
" else:\n",
|
||
" readme = generated_model_card\n",
|
||
"\n",
|
||
" with readme_path.open(\"w\", encoding=\"utf-8\") as f:\n",
|
||
" f.write(readme)\n",
|
||
"\n",
|
||
" # Save our metrics to Readme metadata\n",
|
||
" metadata_save(readme_path, metadata)\n",
|
||
"\n",
|
||
"def _add_logdir(local_path: Path, logdir: Path):\n",
|
||
" \"\"\"Adds a logdir to the repository.\n",
|
||
" :param local_path: repository directory\n",
|
||
" :param logdir: logdir directory\n",
|
||
" \"\"\"\n",
|
||
" if logdir.exists() and logdir.is_dir():\n",
|
||
" # Add the logdir to the repository under new dir called logs\n",
|
||
" repo_logdir = local_path / \"logs\"\n",
|
||
" \n",
|
||
" # Delete current logs if they exist\n",
|
||
" if repo_logdir.exists():\n",
|
||
" shutil.rmtree(repo_logdir)\n",
|
||
"\n",
|
||
" # Copy logdir into repo logdir\n",
|
||
" shutil.copytree(logdir, repo_logdir)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "TqX8z8_rooD6"
|
||
},
|
||
"source": [
|
||
"- Finally, we call this function at the end of the PPO training"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "I8V1vNiTo2hL"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Create the evaluation environment\n",
|
||
"eval_env = gym.make(args.env_id)\n",
|
||
"\n",
|
||
"package_to_hub(repo_id = args.repo_id,\n",
|
||
" model = agent, # The model we want to save\n",
|
||
" hyperparameters = args,\n",
|
||
" eval_env = gym.make(args.env_id),\n",
|
||
" logs= f\"runs/{run_name}\",\n",
|
||
" )"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "muCCzed4o5TC"
|
||
},
|
||
"source": [
|
||
"- Here's what look the ppo.py final file"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "LviRdtXgo7kF"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"# docs and experiment results can be found at https://docs.cleanrl.dev/rl-algorithms/ppo/#ppopy\n",
|
||
"\n",
|
||
"import argparse\n",
|
||
"import os\n",
|
||
"import random\n",
|
||
"import time\n",
|
||
"from distutils.util import strtobool\n",
|
||
"\n",
|
||
"import gym\n",
|
||
"import numpy as np\n",
|
||
"import torch\n",
|
||
"import torch.nn as nn\n",
|
||
"import torch.optim as optim\n",
|
||
"from torch.distributions.categorical import Categorical\n",
|
||
"from torch.utils.tensorboard import SummaryWriter\n",
|
||
"\n",
|
||
"from huggingface_hub import HfApi, upload_folder\n",
|
||
"from huggingface_hub.repocard import metadata_eval_result, metadata_save\n",
|
||
"\n",
|
||
"from pathlib import Path\n",
|
||
"import datetime\n",
|
||
"import tempfile\n",
|
||
"import json\n",
|
||
"import shutil\n",
|
||
"import imageio\n",
|
||
"\n",
|
||
"from wasabi import Printer\n",
|
||
"msg = Printer()\n",
|
||
"\n",
|
||
"def parse_args():\n",
|
||
" # fmt: off\n",
|
||
" parser = argparse.ArgumentParser()\n",
|
||
" parser.add_argument(\"--exp-name\", type=str, default=os.path.basename(__file__).rstrip(\".py\"),\n",
|
||
" help=\"the name of this experiment\")\n",
|
||
" parser.add_argument(\"--seed\", type=int, default=1,\n",
|
||
" help=\"seed of the experiment\")\n",
|
||
" parser.add_argument(\"--torch-deterministic\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n",
|
||
" help=\"if toggled, `torch.backends.cudnn.deterministic=False`\")\n",
|
||
" parser.add_argument(\"--cuda\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n",
|
||
" help=\"if toggled, cuda will be enabled by default\")\n",
|
||
" parser.add_argument(\"--track\", type=lambda x: bool(strtobool(x)), default=False, nargs=\"?\", const=True,\n",
|
||
" help=\"if toggled, this experiment will be tracked with Weights and Biases\")\n",
|
||
" parser.add_argument(\"--wandb-project-name\", type=str, default=\"cleanRL\",\n",
|
||
" help=\"the wandb's project name\")\n",
|
||
" parser.add_argument(\"--wandb-entity\", type=str, default=None,\n",
|
||
" help=\"the entity (team) of wandb's project\")\n",
|
||
" parser.add_argument(\"--capture-video\", type=lambda x: bool(strtobool(x)), default=False, nargs=\"?\", const=True,\n",
|
||
" help=\"weather to capture videos of the agent performances (check out `videos` folder)\")\n",
|
||
"\n",
|
||
" # Algorithm specific arguments\n",
|
||
" parser.add_argument(\"--env-id\", type=str, default=\"CartPole-v1\",\n",
|
||
" help=\"the id of the environment\")\n",
|
||
" parser.add_argument(\"--total-timesteps\", type=int, default=50000,\n",
|
||
" help=\"total timesteps of the experiments\")\n",
|
||
" parser.add_argument(\"--learning-rate\", type=float, default=2.5e-4,\n",
|
||
" help=\"the learning rate of the optimizer\")\n",
|
||
" parser.add_argument(\"--num-envs\", type=int, default=4,\n",
|
||
" help=\"the number of parallel game environments\")\n",
|
||
" parser.add_argument(\"--num-steps\", type=int, default=128,\n",
|
||
" help=\"the number of steps to run in each environment per policy rollout\")\n",
|
||
" parser.add_argument(\"--anneal-lr\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n",
|
||
" help=\"Toggle learning rate annealing for policy and value networks\")\n",
|
||
" parser.add_argument(\"--gae\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n",
|
||
" help=\"Use GAE for advantage computation\")\n",
|
||
" parser.add_argument(\"--gamma\", type=float, default=0.99,\n",
|
||
" help=\"the discount factor gamma\")\n",
|
||
" parser.add_argument(\"--gae-lambda\", type=float, default=0.95,\n",
|
||
" help=\"the lambda for the general advantage estimation\")\n",
|
||
" parser.add_argument(\"--num-minibatches\", type=int, default=4,\n",
|
||
" help=\"the number of mini-batches\")\n",
|
||
" parser.add_argument(\"--update-epochs\", type=int, default=4,\n",
|
||
" help=\"the K epochs to update the policy\")\n",
|
||
" parser.add_argument(\"--norm-adv\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n",
|
||
" help=\"Toggles advantages normalization\")\n",
|
||
" parser.add_argument(\"--clip-coef\", type=float, default=0.2,\n",
|
||
" help=\"the surrogate clipping coefficient\")\n",
|
||
" parser.add_argument(\"--clip-vloss\", type=lambda x: bool(strtobool(x)), default=True, nargs=\"?\", const=True,\n",
|
||
" help=\"Toggles whether or not to use a clipped loss for the value function, as per the paper.\")\n",
|
||
" parser.add_argument(\"--ent-coef\", type=float, default=0.01,\n",
|
||
" help=\"coefficient of the entropy\")\n",
|
||
" parser.add_argument(\"--vf-coef\", type=float, default=0.5,\n",
|
||
" help=\"coefficient of the value function\")\n",
|
||
" parser.add_argument(\"--max-grad-norm\", type=float, default=0.5,\n",
|
||
" help=\"the maximum norm for the gradient clipping\")\n",
|
||
" parser.add_argument(\"--target-kl\", type=float, default=None,\n",
|
||
" help=\"the target KL divergence threshold\")\n",
|
||
" \n",
|
||
" # Adding HuggingFace argument\n",
|
||
" parser.add_argument(\"--repo-id\", type=str, default=\"ThomasSimonini/ppo-CartPole-v1\", help=\"id of the model repository from the Hugging Face Hub {username/repo_name}\")\n",
|
||
"\n",
|
||
" args = parser.parse_args()\n",
|
||
" args.batch_size = int(args.num_envs * args.num_steps)\n",
|
||
" args.minibatch_size = int(args.batch_size // args.num_minibatches)\n",
|
||
" # fmt: on\n",
|
||
" return args\n",
|
||
"\n",
|
||
"def package_to_hub(repo_id, \n",
|
||
" model,\n",
|
||
" hyperparameters,\n",
|
||
" eval_env,\n",
|
||
" video_fps=30,\n",
|
||
" commit_message=\"Push Reinforce agent to the Hub\",\n",
|
||
" token= None,\n",
|
||
" logs=None\n",
|
||
" ):\n",
|
||
" \"\"\"\n",
|
||
" Evaluate, Generate a video and Upload a model to Hugging Face Hub.\n",
|
||
" This method does the complete pipeline:\n",
|
||
" - It evaluates the model\n",
|
||
" - It generates the model card\n",
|
||
" - It generates a replay video of the agent\n",
|
||
" - It pushes everything to the hub\n",
|
||
" :param repo_id: id of the model repository from the Hugging Face Hub\n",
|
||
" :param model: trained model\n",
|
||
" :param eval_env: environment used to evaluate the agent\n",
|
||
" :param fps: number of fps for rendering the video\n",
|
||
" :param commit_message: commit message\n",
|
||
" :param logs: directory on local machine of tensorboard logs you'd like to upload\n",
|
||
" \"\"\"\n",
|
||
" msg.info(\n",
|
||
" \"This function will save, evaluate, generate a video of your agent, \"\n",
|
||
" \"create a model card and push everything to the hub. \"\n",
|
||
" \"It might take up to 1min. \\n \"\n",
|
||
" \"This is a work in progress: if you encounter a bug, please open an issue.\"\n",
|
||
" )\n",
|
||
" # Step 1: Clone or create the repo\n",
|
||
" repo_url = HfApi().create_repo(\n",
|
||
" repo_id=repo_id,\n",
|
||
" token=token,\n",
|
||
" private=False,\n",
|
||
" exist_ok=True,\n",
|
||
" )\n",
|
||
" \n",
|
||
" with tempfile.TemporaryDirectory() as tmpdirname:\n",
|
||
" tmpdirname = Path(tmpdirname)\n",
|
||
"\n",
|
||
" # Step 2: Save the model\n",
|
||
" torch.save(model.state_dict(), tmpdirname / \"model.pt\")\n",
|
||
" \n",
|
||
" # Step 3: Evaluate the model and build JSON\n",
|
||
" mean_reward, std_reward = _evaluate_agent(eval_env, \n",
|
||
" 10, \n",
|
||
" model)\n",
|
||
"\n",
|
||
" # First get datetime\n",
|
||
" eval_datetime = datetime.datetime.now()\n",
|
||
" eval_form_datetime = eval_datetime.isoformat()\n",
|
||
"\n",
|
||
" evaluate_data = {\n",
|
||
" \"env_id\": hyperparameters.env_id, \n",
|
||
" \"mean_reward\": mean_reward,\n",
|
||
" \"std_reward\": std_reward,\n",
|
||
" \"n_evaluation_episodes\": 10,\n",
|
||
" \"eval_datetime\": eval_form_datetime,\n",
|
||
" }\n",
|
||
" \n",
|
||
" # Write a JSON file\n",
|
||
" with open(tmpdirname / \"results.json\", \"w\") as outfile:\n",
|
||
" json.dump(evaluate_data, outfile)\n",
|
||
"\n",
|
||
" # Step 4: Generate a video\n",
|
||
" video_path = tmpdirname / \"replay.mp4\"\n",
|
||
" record_video(eval_env, model, video_path, video_fps)\n",
|
||
" \n",
|
||
" # Step 5: Generate the model card\n",
|
||
" generated_model_card, metadata = _generate_model_card(\"PPO\", hyperparameters.env_id, mean_reward, std_reward, hyperparameters)\n",
|
||
" _save_model_card(tmpdirname, generated_model_card, metadata)\n",
|
||
"\n",
|
||
" # Step 6: Add logs if needed\n",
|
||
" if logs:\n",
|
||
" _add_logdir(tmpdirname, Path(logs))\n",
|
||
" \n",
|
||
" msg.info(f\"Pushing repo {repo_id} to the Hugging Face Hub\")\n",
|
||
" \n",
|
||
" repo_url = upload_folder(\n",
|
||
" repo_id=repo_id,\n",
|
||
" folder_path=tmpdirname,\n",
|
||
" path_in_repo=\"\",\n",
|
||
" commit_message=commit_message,\n",
|
||
" token=token,\n",
|
||
" )\n",
|
||
"\n",
|
||
" msg.info(f\"Your model is pushed to the Hub. You can view your model here: {repo_url}\")\n",
|
||
" return repo_url\n",
|
||
"\n",
|
||
"def _evaluate_agent(env, n_eval_episodes, policy):\n",
|
||
" \"\"\"\n",
|
||
" Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward.\n",
|
||
" :param env: The evaluation environment\n",
|
||
" :param n_eval_episodes: Number of episode to evaluate the agent\n",
|
||
" :param policy: The Reinforce agent\n",
|
||
" \"\"\"\n",
|
||
" episode_rewards = []\n",
|
||
" for episode in range(n_eval_episodes):\n",
|
||
" state = env.reset()\n",
|
||
" step = 0\n",
|
||
" done = False\n",
|
||
" total_rewards_ep = 0\n",
|
||
" \n",
|
||
" while done is False:\n",
|
||
" state = torch.Tensor(state)\n",
|
||
" action, _, _, _ = policy.get_action_and_value(state)\n",
|
||
" new_state, reward, done, info = env.step(action.numpy())\n",
|
||
" total_rewards_ep += reward \n",
|
||
" if done:\n",
|
||
" break\n",
|
||
" state = new_state\n",
|
||
" episode_rewards.append(total_rewards_ep)\n",
|
||
" mean_reward = np.mean(episode_rewards)\n",
|
||
" std_reward = np.std(episode_rewards)\n",
|
||
"\n",
|
||
" return mean_reward, std_reward\n",
|
||
"\n",
|
||
"def record_video(env, policy, out_directory, fps=30):\n",
|
||
" images = [] \n",
|
||
" done = False\n",
|
||
" state = env.reset()\n",
|
||
" img = env.render(mode='rgb_array')\n",
|
||
" images.append(img)\n",
|
||
" while not done:\n",
|
||
" state = torch.Tensor(state)\n",
|
||
" # Take the action (index) that have the maximum expected future reward given that state\n",
|
||
" action, _, _, _ = policy.get_action_and_value(state)\n",
|
||
" state, reward, done, info = env.step(action.numpy()) # We directly put next_state = state for recording logic\n",
|
||
" img = env.render(mode='rgb_array')\n",
|
||
" images.append(img)\n",
|
||
" imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)\n",
|
||
"\n",
|
||
"def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):\n",
|
||
" \"\"\"\n",
|
||
" Generate the model card for the Hub\n",
|
||
" :param model_name: name of the model\n",
|
||
" :env_id: name of the environment\n",
|
||
" :mean_reward: mean reward of the agent\n",
|
||
" :std_reward: standard deviation of the mean reward of the agent\n",
|
||
" :hyperparameters: training arguments\n",
|
||
" \"\"\"\n",
|
||
" # Step 1: Select the tags\n",
|
||
" metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)\n",
|
||
"\n",
|
||
" # Transform the hyperparams namespace to string\n",
|
||
" converted_dict = vars(hyperparameters)\n",
|
||
" converted_str = str(converted_dict)\n",
|
||
" converted_str = converted_str.split(\", \")\n",
|
||
" converted_str = '\\n'.join(converted_str)\n",
|
||
" \n",
|
||
" # Step 2: Generate the model card\n",
|
||
" model_card = f\"\"\"\n",
|
||
" # PPO Agent Playing {env_id}\n",
|
||
"\n",
|
||
" This is a trained model of a PPO agent playing {env_id}.\n",
|
||
" To learn to code your own PPO agent and train it Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8\n",
|
||
" \n",
|
||
" # Hyperparameters\n",
|
||
" ```python\n",
|
||
" {converted_str}\n",
|
||
" ```\n",
|
||
" \"\"\"\n",
|
||
" return model_card, metadata\n",
|
||
"\n",
|
||
"def generate_metadata(model_name, env_id, mean_reward, std_reward):\n",
|
||
" \"\"\"\n",
|
||
" Define the tags for the model card\n",
|
||
" :param model_name: name of the model\n",
|
||
" :param env_id: name of the environment\n",
|
||
" :mean_reward: mean reward of the agent\n",
|
||
" :std_reward: standard deviation of the mean reward of the agent\n",
|
||
" \"\"\"\n",
|
||
" metadata = {}\n",
|
||
" metadata[\"tags\"] = [\n",
|
||
" env_id,\n",
|
||
" \"ppo\",\n",
|
||
" \"deep-reinforcement-learning\",\n",
|
||
" \"reinforcement-learning\",\n",
|
||
" \"custom-implementation\",\n",
|
||
" \"deep-rl-class\"\n",
|
||
" ]\n",
|
||
"\n",
|
||
" # Add metrics\n",
|
||
" eval = metadata_eval_result(\n",
|
||
" model_pretty_name=model_name,\n",
|
||
" task_pretty_name=\"reinforcement-learning\",\n",
|
||
" task_id=\"reinforcement-learning\",\n",
|
||
" metrics_pretty_name=\"mean_reward\",\n",
|
||
" metrics_id=\"mean_reward\",\n",
|
||
" metrics_value=f\"{mean_reward:.2f} +/- {std_reward:.2f}\",\n",
|
||
" dataset_pretty_name=env_id,\n",
|
||
" dataset_id=env_id,\n",
|
||
" )\n",
|
||
"\n",
|
||
" # Merges both dictionaries\n",
|
||
" metadata = {**metadata, **eval}\n",
|
||
"\n",
|
||
" return metadata\n",
|
||
"\n",
|
||
"def _save_model_card(local_path, generated_model_card, metadata):\n",
|
||
" \"\"\"Saves a model card for the repository.\n",
|
||
" :param local_path: repository directory\n",
|
||
" :param generated_model_card: model card generated by _generate_model_card()\n",
|
||
" :param metadata: metadata\n",
|
||
" \"\"\"\n",
|
||
" readme_path = local_path / \"README.md\"\n",
|
||
" readme = \"\"\n",
|
||
" if readme_path.exists():\n",
|
||
" with readme_path.open(\"r\", encoding=\"utf8\") as f:\n",
|
||
" readme = f.read()\n",
|
||
" else:\n",
|
||
" readme = generated_model_card\n",
|
||
"\n",
|
||
" with readme_path.open(\"w\", encoding=\"utf-8\") as f:\n",
|
||
" f.write(readme)\n",
|
||
"\n",
|
||
" # Save our metrics to Readme metadata\n",
|
||
" metadata_save(readme_path, metadata)\n",
|
||
"\n",
|
||
"def _add_logdir(local_path: Path, logdir: Path):\n",
|
||
" \"\"\"Adds a logdir to the repository.\n",
|
||
" :param local_path: repository directory\n",
|
||
" :param logdir: logdir directory\n",
|
||
" \"\"\"\n",
|
||
" if logdir.exists() and logdir.is_dir():\n",
|
||
" # Add the logdir to the repository under new dir called logs\n",
|
||
" repo_logdir = local_path / \"logs\"\n",
|
||
" \n",
|
||
" # Delete current logs if they exist\n",
|
||
" if repo_logdir.exists():\n",
|
||
" shutil.rmtree(repo_logdir)\n",
|
||
"\n",
|
||
" # Copy logdir into repo logdir\n",
|
||
" shutil.copytree(logdir, repo_logdir)\n",
|
||
"\n",
|
||
"def make_env(env_id, seed, idx, capture_video, run_name):\n",
|
||
" def thunk():\n",
|
||
" env = gym.make(env_id)\n",
|
||
" env = gym.wrappers.RecordEpisodeStatistics(env)\n",
|
||
" if capture_video:\n",
|
||
" if idx == 0:\n",
|
||
" env = gym.wrappers.RecordVideo(env, f\"videos/{run_name}\")\n",
|
||
" env.seed(seed)\n",
|
||
" env.action_space.seed(seed)\n",
|
||
" env.observation_space.seed(seed)\n",
|
||
" return env\n",
|
||
"\n",
|
||
" return thunk\n",
|
||
"\n",
|
||
"\n",
|
||
"def layer_init(layer, std=np.sqrt(2), bias_const=0.0):\n",
|
||
" torch.nn.init.orthogonal_(layer.weight, std)\n",
|
||
" torch.nn.init.constant_(layer.bias, bias_const)\n",
|
||
" return layer\n",
|
||
"\n",
|
||
"\n",
|
||
"class Agent(nn.Module):\n",
|
||
" def __init__(self, envs):\n",
|
||
" super().__init__()\n",
|
||
" self.critic = nn.Sequential(\n",
|
||
" layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),\n",
|
||
" nn.Tanh(),\n",
|
||
" layer_init(nn.Linear(64, 64)),\n",
|
||
" nn.Tanh(),\n",
|
||
" layer_init(nn.Linear(64, 1), std=1.0),\n",
|
||
" )\n",
|
||
" self.actor = nn.Sequential(\n",
|
||
" layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),\n",
|
||
" nn.Tanh(),\n",
|
||
" layer_init(nn.Linear(64, 64)),\n",
|
||
" nn.Tanh(),\n",
|
||
" layer_init(nn.Linear(64, envs.single_action_space.n), std=0.01),\n",
|
||
" )\n",
|
||
"\n",
|
||
" def get_value(self, x):\n",
|
||
" return self.critic(x)\n",
|
||
"\n",
|
||
" def get_action_and_value(self, x, action=None):\n",
|
||
" logits = self.actor(x)\n",
|
||
" probs = Categorical(logits=logits)\n",
|
||
" if action is None:\n",
|
||
" action = probs.sample()\n",
|
||
" return action, probs.log_prob(action), probs.entropy(), self.critic(x)\n",
|
||
"\n",
|
||
"\n",
|
||
"if __name__ == \"__main__\":\n",
|
||
" args = parse_args()\n",
|
||
" run_name = f\"{args.env_id}__{args.exp_name}__{args.seed}__{int(time.time())}\"\n",
|
||
" if args.track:\n",
|
||
" import wandb\n",
|
||
"\n",
|
||
" wandb.init(\n",
|
||
" project=args.wandb_project_name,\n",
|
||
" entity=args.wandb_entity,\n",
|
||
" sync_tensorboard=True,\n",
|
||
" config=vars(args),\n",
|
||
" name=run_name,\n",
|
||
" monitor_gym=True,\n",
|
||
" save_code=True,\n",
|
||
" )\n",
|
||
" writer = SummaryWriter(f\"runs/{run_name}\")\n",
|
||
" writer.add_text(\n",
|
||
" \"hyperparameters\",\n",
|
||
" \"|param|value|\\n|-|-|\\n%s\" % (\"\\n\".join([f\"|{key}|{value}|\" for key, value in vars(args).items()])),\n",
|
||
" )\n",
|
||
"\n",
|
||
" # TRY NOT TO MODIFY: seeding\n",
|
||
" random.seed(args.seed)\n",
|
||
" np.random.seed(args.seed)\n",
|
||
" torch.manual_seed(args.seed)\n",
|
||
" torch.backends.cudnn.deterministic = args.torch_deterministic\n",
|
||
"\n",
|
||
" device = torch.device(\"cuda\" if torch.cuda.is_available() and args.cuda else \"cpu\")\n",
|
||
"\n",
|
||
" # env setup\n",
|
||
" envs = gym.vector.SyncVectorEnv(\n",
|
||
" [make_env(args.env_id, args.seed + i, i, args.capture_video, run_name) for i in range(args.num_envs)]\n",
|
||
" )\n",
|
||
" assert isinstance(envs.single_action_space, gym.spaces.Discrete), \"only discrete action space is supported\"\n",
|
||
"\n",
|
||
" agent = Agent(envs).to(device)\n",
|
||
" optimizer = optim.Adam(agent.parameters(), lr=args.learning_rate, eps=1e-5)\n",
|
||
"\n",
|
||
" # ALGO Logic: Storage setup\n",
|
||
" obs = torch.zeros((args.num_steps, args.num_envs) + envs.single_observation_space.shape).to(device)\n",
|
||
" actions = torch.zeros((args.num_steps, args.num_envs) + envs.single_action_space.shape).to(device)\n",
|
||
" logprobs = torch.zeros((args.num_steps, args.num_envs)).to(device)\n",
|
||
" rewards = torch.zeros((args.num_steps, args.num_envs)).to(device)\n",
|
||
" dones = torch.zeros((args.num_steps, args.num_envs)).to(device)\n",
|
||
" values = torch.zeros((args.num_steps, args.num_envs)).to(device)\n",
|
||
"\n",
|
||
" # TRY NOT TO MODIFY: start the game\n",
|
||
" global_step = 0\n",
|
||
" start_time = time.time()\n",
|
||
" next_obs = torch.Tensor(envs.reset()).to(device)\n",
|
||
" next_done = torch.zeros(args.num_envs).to(device)\n",
|
||
" num_updates = args.total_timesteps // args.batch_size\n",
|
||
"\n",
|
||
" for update in range(1, num_updates + 1):\n",
|
||
" # Annealing the rate if instructed to do so.\n",
|
||
" if args.anneal_lr:\n",
|
||
" frac = 1.0 - (update - 1.0) / num_updates\n",
|
||
" lrnow = frac * args.learning_rate\n",
|
||
" optimizer.param_groups[0][\"lr\"] = lrnow\n",
|
||
"\n",
|
||
" for step in range(0, args.num_steps):\n",
|
||
" global_step += 1 * args.num_envs\n",
|
||
" obs[step] = next_obs\n",
|
||
" dones[step] = next_done\n",
|
||
"\n",
|
||
" # ALGO LOGIC: action logic\n",
|
||
" with torch.no_grad():\n",
|
||
" action, logprob, _, value = agent.get_action_and_value(next_obs)\n",
|
||
" values[step] = value.flatten()\n",
|
||
" actions[step] = action\n",
|
||
" logprobs[step] = logprob\n",
|
||
"\n",
|
||
" # TRY NOT TO MODIFY: execute the game and log data.\n",
|
||
" next_obs, reward, done, info = envs.step(action.cpu().numpy())\n",
|
||
" rewards[step] = torch.tensor(reward).to(device).view(-1)\n",
|
||
" next_obs, next_done = torch.Tensor(next_obs).to(device), torch.Tensor(done).to(device)\n",
|
||
"\n",
|
||
" for item in info:\n",
|
||
" if \"episode\" in item.keys():\n",
|
||
" print(f\"global_step={global_step}, episodic_return={item['episode']['r']}\")\n",
|
||
" writer.add_scalar(\"charts/episodic_return\", item[\"episode\"][\"r\"], global_step)\n",
|
||
" writer.add_scalar(\"charts/episodic_length\", item[\"episode\"][\"l\"], global_step)\n",
|
||
" break\n",
|
||
"\n",
|
||
" # bootstrap value if not done\n",
|
||
" with torch.no_grad():\n",
|
||
" next_value = agent.get_value(next_obs).reshape(1, -1)\n",
|
||
" if args.gae:\n",
|
||
" advantages = torch.zeros_like(rewards).to(device)\n",
|
||
" lastgaelam = 0\n",
|
||
" for t in reversed(range(args.num_steps)):\n",
|
||
" if t == args.num_steps - 1:\n",
|
||
" nextnonterminal = 1.0 - next_done\n",
|
||
" nextvalues = next_value\n",
|
||
" else:\n",
|
||
" nextnonterminal = 1.0 - dones[t + 1]\n",
|
||
" nextvalues = values[t + 1]\n",
|
||
" delta = rewards[t] + args.gamma * nextvalues * nextnonterminal - values[t]\n",
|
||
" advantages[t] = lastgaelam = delta + args.gamma * args.gae_lambda * nextnonterminal * lastgaelam\n",
|
||
" returns = advantages + values\n",
|
||
" else:\n",
|
||
" returns = torch.zeros_like(rewards).to(device)\n",
|
||
" for t in reversed(range(args.num_steps)):\n",
|
||
" if t == args.num_steps - 1:\n",
|
||
" nextnonterminal = 1.0 - next_done\n",
|
||
" next_return = next_value\n",
|
||
" else:\n",
|
||
" nextnonterminal = 1.0 - dones[t + 1]\n",
|
||
" next_return = returns[t + 1]\n",
|
||
" returns[t] = rewards[t] + args.gamma * nextnonterminal * next_return\n",
|
||
" advantages = returns - values\n",
|
||
"\n",
|
||
" # flatten the batch\n",
|
||
" b_obs = obs.reshape((-1,) + envs.single_observation_space.shape)\n",
|
||
" b_logprobs = logprobs.reshape(-1)\n",
|
||
" b_actions = actions.reshape((-1,) + envs.single_action_space.shape)\n",
|
||
" b_advantages = advantages.reshape(-1)\n",
|
||
" b_returns = returns.reshape(-1)\n",
|
||
" b_values = values.reshape(-1)\n",
|
||
"\n",
|
||
" # Optimizing the policy and value network\n",
|
||
" b_inds = np.arange(args.batch_size)\n",
|
||
" clipfracs = []\n",
|
||
" for epoch in range(args.update_epochs):\n",
|
||
" np.random.shuffle(b_inds)\n",
|
||
" for start in range(0, args.batch_size, args.minibatch_size):\n",
|
||
" end = start + args.minibatch_size\n",
|
||
" mb_inds = b_inds[start:end]\n",
|
||
"\n",
|
||
" _, newlogprob, entropy, newvalue = agent.get_action_and_value(b_obs[mb_inds], b_actions.long()[mb_inds])\n",
|
||
" logratio = newlogprob - b_logprobs[mb_inds]\n",
|
||
" ratio = logratio.exp()\n",
|
||
"\n",
|
||
" with torch.no_grad():\n",
|
||
" # calculate approx_kl http://joschu.net/blog/kl-approx.html\n",
|
||
" old_approx_kl = (-logratio).mean()\n",
|
||
" approx_kl = ((ratio - 1) - logratio).mean()\n",
|
||
" clipfracs += [((ratio - 1.0).abs() > args.clip_coef).float().mean().item()]\n",
|
||
"\n",
|
||
" mb_advantages = b_advantages[mb_inds]\n",
|
||
" if args.norm_adv:\n",
|
||
" mb_advantages = (mb_advantages - mb_advantages.mean()) / (mb_advantages.std() + 1e-8)\n",
|
||
"\n",
|
||
" # Policy loss\n",
|
||
" pg_loss1 = -mb_advantages * ratio\n",
|
||
" pg_loss2 = -mb_advantages * torch.clamp(ratio, 1 - args.clip_coef, 1 + args.clip_coef)\n",
|
||
" pg_loss = torch.max(pg_loss1, pg_loss2).mean()\n",
|
||
"\n",
|
||
" # Value loss\n",
|
||
" newvalue = newvalue.view(-1)\n",
|
||
" if args.clip_vloss:\n",
|
||
" v_loss_unclipped = (newvalue - b_returns[mb_inds]) ** 2\n",
|
||
" v_clipped = b_values[mb_inds] + torch.clamp(\n",
|
||
" newvalue - b_values[mb_inds],\n",
|
||
" -args.clip_coef,\n",
|
||
" args.clip_coef,\n",
|
||
" )\n",
|
||
" v_loss_clipped = (v_clipped - b_returns[mb_inds]) ** 2\n",
|
||
" v_loss_max = torch.max(v_loss_unclipped, v_loss_clipped)\n",
|
||
" v_loss = 0.5 * v_loss_max.mean()\n",
|
||
" else:\n",
|
||
" v_loss = 0.5 * ((newvalue - b_returns[mb_inds]) ** 2).mean()\n",
|
||
"\n",
|
||
" entropy_loss = entropy.mean()\n",
|
||
" loss = pg_loss - args.ent_coef * entropy_loss + v_loss * args.vf_coef\n",
|
||
"\n",
|
||
" optimizer.zero_grad()\n",
|
||
" loss.backward()\n",
|
||
" nn.utils.clip_grad_norm_(agent.parameters(), args.max_grad_norm)\n",
|
||
" optimizer.step()\n",
|
||
"\n",
|
||
" if args.target_kl is not None:\n",
|
||
" if approx_kl > args.target_kl:\n",
|
||
" break\n",
|
||
"\n",
|
||
" y_pred, y_true = b_values.cpu().numpy(), b_returns.cpu().numpy()\n",
|
||
" var_y = np.var(y_true)\n",
|
||
" explained_var = np.nan if var_y == 0 else 1 - np.var(y_true - y_pred) / var_y\n",
|
||
"\n",
|
||
" # TRY NOT TO MODIFY: record rewards for plotting purposes\n",
|
||
" writer.add_scalar(\"charts/learning_rate\", optimizer.param_groups[0][\"lr\"], global_step)\n",
|
||
" writer.add_scalar(\"losses/value_loss\", v_loss.item(), global_step)\n",
|
||
" writer.add_scalar(\"losses/policy_loss\", pg_loss.item(), global_step)\n",
|
||
" writer.add_scalar(\"losses/entropy\", entropy_loss.item(), global_step)\n",
|
||
" writer.add_scalar(\"losses/old_approx_kl\", old_approx_kl.item(), global_step)\n",
|
||
" writer.add_scalar(\"losses/approx_kl\", approx_kl.item(), global_step)\n",
|
||
" writer.add_scalar(\"losses/clipfrac\", np.mean(clipfracs), global_step)\n",
|
||
" writer.add_scalar(\"losses/explained_variance\", explained_var, global_step)\n",
|
||
" print(\"SPS:\", int(global_step / (time.time() - start_time)))\n",
|
||
" writer.add_scalar(\"charts/SPS\", int(global_step / (time.time() - start_time)), global_step)\n",
|
||
"\n",
|
||
" envs.close()\n",
|
||
" writer.close()\n",
|
||
"\n",
|
||
" # Create the evaluation environment\n",
|
||
" eval_env = gym.make(args.env_id)\n",
|
||
"\n",
|
||
" package_to_hub(repo_id = args.repo_id,\n",
|
||
" model = agent, # The model we want to save\n",
|
||
" hyperparameters = args,\n",
|
||
" eval_env = gym.make(args.env_id),\n",
|
||
" logs= f\"runs/{run_name}\",\n",
|
||
" )\n",
|
||
" "
|
||
]
|
||
},
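{
"cell_type": "markdown",
"metadata": {},
"source": [
"- Once `package_to_hub()` has pushed `model.pt` to your repository, you may want to reload the weights locally. Here is a minimal sketch, assuming the `Agent` class above is available in your session and using a placeholder `repo_id` (for a CartPole-v1 agent) that you should replace with your own."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import gym\n",
"import torch\n",
"from huggingface_hub import hf_hub_download\n",
"\n",
"# Recreate a vectorized environment so Agent gets the right observation/action shapes\n",
"envs = gym.vector.SyncVectorEnv([lambda: gym.make(\"CartPole-v1\")])\n",
"agent = Agent(envs)\n",
"\n",
"# Download the checkpoint saved by package_to_hub() and restore the weights\n",
"checkpoint = hf_hub_download(repo_id=\"YOUR_USERNAME/ppo-CartPole-v1\", filename=\"model.pt\")\n",
"agent.load_state_dict(torch.load(checkpoint))\n",
"agent.eval()"
]
},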
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "lteS7h5znWO3"
|
||
},
|
||
"source": [
|
||
"To be able to share your model with the community there are two more steps to follow:\n",
|
||
"\n",
|
||
"1️⃣ (If it's not already done) create an account to HF ➡ https://huggingface.co/join\n",
|
||
"\n",
|
||
"2️⃣ Sign in and then, you need to store your authentication token from the Hugging Face website.\n",
|
||
"- Create a new token (https://huggingface.co/settings/tokens) **with write role**"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "8sz1AtyenSCS"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Fms38KXMncKF"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"from huggingface_hub import notebook_login\n",
|
||
"notebook_login()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "jRqkGvk7pFQ6"
|
||
},
|
||
"source": [
|
||
"### Step 4: Let's start the training 🔥\n",
|
||
"- Now that you've coded from scratch PPO and added the Hugging Face Integration, we're ready to start the training 🔥"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "0tmEArP8ug2l"
|
||
},
|
||
"source": [
|
||
"- First, you need to copy all your code to a file you create called `ppo.py`"
|
||
]
|
||
},
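{
"cell_type": "markdown",
"metadata": {},
"source": [
"- One optional, convenient way to create that file directly from the notebook (a workflow suggestion, not a course requirement) is the `%%writefile` cell magic: paste your full script below the magic line and run the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile ppo.py\n",
"# Paste the full content of your PPO script (imports, helpers, training loop) below this line.\n",
"# Running this cell writes everything in it to a file named ppo.py next to the notebook."
]
},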
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "tjRIs94ku_K8"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "AvWTiKi9vPbJ"
|
||
},
|
||
"source": [
|
||
""
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "VrS80GmMu_j5"
|
||
},
|
||
"source": [
|
||
"- Now we just need to run this python script using `python <name-of-python-script>.py` with the additional parameters we defined with `argparse`"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {
|
||
"id": "Sa0W5uv0vd-5"
|
||
},
|
||
"outputs": [],
|
||
"source": [
|
||
"!python ppo.py --env-id=\"LunarLander-v2\" --repo-id=\"YOUR_REPO_ID\" --total-timesteps=50000"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "eVsVJ5AdqLE7"
|
||
},
|
||
"source": [
|
||
"## Some additional challenges 🏆\n",
|
||
"In the [Leaderboard](https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?\n",
|
||
"\n",
|
||
"Here are some ideas to achieve so:\n",
|
||
"* Train more steps\n",
|
||
"* Try different hyperparameters by looking at what your classmates have done.\n",
|
||
"* **Push your new trained model** on the Hub 🔥\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {
|
||
"id": "nYdl758GqLXT"
|
||
},
|
||
"source": [
|
||
"See you on Next Time! 🔥\n",
|
||
"## Keep learning, stay awesome 🤗"
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"colab": {
|
||
"authorship_tag": "ABX9TyN2BU0ytdzum3AwXr4nlJMv",
|
||
"collapsed_sections": [],
|
||
"include_colab_link": true,
|
||
"name": "unit8.ipynb",
|
||
"private_outputs": true,
|
||
"provenance": []
|
||
},
|
||
"gpuClass": "standard",
|
||
"kernelspec": {
|
||
"display_name": "Python 3",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"name": "python"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 0
|
||
}
|