diff --git a/notebooks/unit8/unit8_part1.mdx b/notebooks/unit8/unit8_part1.mdx
deleted file mode 100644
index 0606dac..0000000
--- a/notebooks/unit8/unit8_part1.mdx
+++ /dev/null
@@ -1,1068 +0,0 @@
-
-
-# Unit 8: Proximal Policy Gradient (PPO) with PyTorch 🤖
-
-
-
-
-In this notebook, you'll learn to **code your PPO agent from scratch with PyTorch, using the CleanRL implementation as a model**.
-
-To test its robustness, we're going to train it in:
-
-- [LunarLander-v2 🚀](https://www.gymlibrary.dev/environments/box2d/lunar_lander/)
-
-
-⬇️ Here is an example of what you will achieve. ⬇️
-
-```python
-%%html
-
-```
-
-We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues).
-
-## Objectives of this notebook 🏆
-
-At the end of the notebook, you will:
-
-- Be able to **code your PPO agent from scratch using PyTorch**.
-- Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score 🔥.
-
-
-
-
-## This notebook is from the Deep Reinforcement Learning Course
-
-
-In this free course, you will:
-
-- 📖 Study Deep Reinforcement Learning in **theory and practice**.
-- 🧑💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.
-- 🤖 Train **agents in unique environments**
-
-Don’t forget to **sign up to the course** (we collect your email so we can **send you the links when each Unit is published and keep you informed about the challenges and updates**).
-
-
-The best way to keep in touch is to join our discord server to exchange with the community and with us 👉🏻 https://discord.gg/ydHrjt3WP5
-
-## Prerequisites 🏗️
-Before diving into the notebook, you need to:
-
-🔲 📚 Study [PPO by reading Unit 8](https://huggingface.co/deep-rl-course/unit8/introduction) 🤗
-
-To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push one model. We don't require a minimal result, but we **advise you to try different hyperparameter settings to get better results**.
-
-If you don't find your model, **go to the bottom of the page and click on the refresh button**
-
-For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
-
-## Set the GPU 💪
-- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`
-
-
-
-- `Hardware Accelerator > GPU`
-
-
-
-## Create a virtual display 🔽
-
-During the notebook, we'll need to generate a replay video. To do so, with Colab, **we need a virtual screen to be able to render the environment** (and thus record the frames).
-
-Hence the following cell will install the libraries and create and run a virtual screen 🖥
-
-```python
-%%capture
-!apt install python-opengl
-!apt install ffmpeg
-!apt install xvfb
-!pip install pyglet==1.5
-!pip3 install pyvirtualdisplay
-```
-
-```python
-# Virtual display
-from pyvirtualdisplay import Display
-
-virtual_display = Display(visible=0, size=(1400, 900))
-virtual_display.start()
-```
-
-## Install dependencies 🔽
-For this exercise, we use `gym==0.21`
-
-
-```python
-!pip install gym==0.21
-!pip install imageio-ffmpeg
-!pip install huggingface_hub
-!pip install box2d
-```
-
-## Let's code PPO from scratch with Costa Huang's tutorial
-- For the core implementation of PPO, we're going to use the excellent tutorial by [Costa Huang](https://costa.sh/).
-- In addition to the tutorial, to go deeper, you can read the 37 core implementation details of PPO: https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/
-
-👉 The video tutorial: https://youtu.be/MEt6rrxH8W4
-
-```python
-from IPython.display import HTML
-
-HTML(
- ''
-)
-```
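-
-Before you start, here is a minimal, illustrative sketch of the clipped surrogate objective at the heart of PPO (the full version you'll implement below computes it over minibatches of stored rollouts; the dummy values are just for illustration):
-
-```python
-import torch
-
-
-def ppo_clipped_policy_loss(new_logprob, old_logprob, advantages, clip_coef=0.2):
-    ratio = (new_logprob - old_logprob).exp()  # pi_new(a|s) / pi_old(a|s)
-    unclipped = -advantages * ratio
-    clipped = -advantages * torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef)
-    return torch.max(unclipped, clipped).mean()  # pessimistic (clipped) policy loss
-
-
-# Tiny usage example with dummy values
-loss = ppo_clipped_policy_loss(
-    new_logprob=torch.tensor([-0.9, -1.1]),
-    old_logprob=torch.tensor([-1.0, -1.0]),
-    advantages=torch.tensor([1.0, -0.5]),
-)
-print(loss)
-```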
-
-- It's best to write your code first in the cell below. This way, if you kill the machine, **you don't lose your implementation**.
-
-```python
-### Your code here:
-```
-
-## Add the Hugging Face Integration 🤗
-- In order to push our model to the Hub, we need to define a function `package_to_hub`
-
-- Add dependencies we need to push our model to the Hub
-
-```python
-from huggingface_hub import HfApi, upload_folder
-from huggingface_hub.repocard import metadata_eval_result, metadata_save
-
-from pathlib import Path
-import datetime
-import tempfile
-import json
-import shutil
-import imageio
-
-from wasabi import Printer
-
-msg = Printer()
-```
-
-- Add new argument in `parse_args()` function to define the repo-id where we want to push the model.
-
-```python
-# Adding HuggingFace argument
-parser.add_argument(
- "--repo-id",
- type=str,
- default="ThomasSimonini/ppo-CartPole-v1",
- help="id of the model repository from the Hugging Face Hub {username/repo_name}",
-)
-```
-
-- Next, we add the methods needed to push the model to the Hub
-
-- These methods will:
-  - `_evaluate_agent()`: evaluate the agent.
- - `_generate_model_card()`: generate the model card of your agent.
- - `_record_video()`: record a video of your agent.
-
-```python
-def package_to_hub(
- repo_id,
- model,
- hyperparameters,
- eval_env,
- video_fps=30,
- commit_message="Push agent to the Hub",
- token=None,
- logs=None,
-):
- """
- Evaluate, Generate a video and Upload a model to Hugging Face Hub.
- This method does the complete pipeline:
- - It evaluates the model
- - It generates the model card
- - It generates a replay video of the agent
- - It pushes everything to the hub
- :param repo_id: id of the model repository from the Hugging Face Hub
- :param model: trained model
- :param eval_env: environment used to evaluate the agent
-    :param video_fps: number of fps for rendering the video
- :param commit_message: commit message
- :param logs: directory on local machine of tensorboard logs you'd like to upload
- """
- msg.info(
- "This function will save, evaluate, generate a video of your agent, "
- "create a model card and push everything to the hub. "
- "It might take up to 1min. \n "
- "This is a work in progress: if you encounter a bug, please open an issue."
- )
- # Step 1: Clone or create the repo
- repo_url = HfApi().create_repo(
- repo_id=repo_id,
- token=token,
- private=False,
- exist_ok=True,
- )
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- tmpdirname = Path(tmpdirname)
-
- # Step 2: Save the model
- torch.save(model.state_dict(), tmpdirname / "model.pt")
-
- # Step 3: Evaluate the model and build JSON
- mean_reward, std_reward = _evaluate_agent(eval_env, 10, model)
-
- # First get datetime
- eval_datetime = datetime.datetime.now()
- eval_form_datetime = eval_datetime.isoformat()
-
- evaluate_data = {
- "env_id": hyperparameters.env_id,
- "mean_reward": mean_reward,
- "std_reward": std_reward,
- "n_evaluation_episodes": 10,
- "eval_datetime": eval_form_datetime,
- }
-
- # Write a JSON file
- with open(tmpdirname / "results.json", "w") as outfile:
- json.dump(evaluate_data, outfile)
-
- # Step 4: Generate a video
- video_path = tmpdirname / "replay.mp4"
- record_video(eval_env, model, video_path, video_fps)
-
- # Step 5: Generate the model card
- generated_model_card, metadata = _generate_model_card(
- "PPO", hyperparameters.env_id, mean_reward, std_reward, hyperparameters
- )
- _save_model_card(tmpdirname, generated_model_card, metadata)
-
- # Step 6: Add logs if needed
- if logs:
- _add_logdir(tmpdirname, Path(logs))
-
- msg.info(f"Pushing repo {repo_id} to the Hugging Face Hub")
-
- repo_url = upload_folder(
- repo_id=repo_id,
- folder_path=tmpdirname,
- path_in_repo="",
- commit_message=commit_message,
- token=token,
- )
-
- msg.info(f"Your model is pushed to the Hub. You can view your model here: {repo_url}")
- return repo_url
-
-
-def _evaluate_agent(env, n_eval_episodes, policy):
- """
-    Evaluate the agent for ``n_eval_episodes`` episodes and return the average reward and the standard deviation of the reward.
-    :param env: The evaluation environment
-    :param n_eval_episodes: Number of episodes to evaluate the agent
- :param policy: The agent
- """
- episode_rewards = []
- for episode in range(n_eval_episodes):
- state = env.reset()
- step = 0
- done = False
- total_rewards_ep = 0
-
- while done is False:
- state = torch.Tensor(state).to(device)
- action, _, _, _ = policy.get_action_and_value(state)
- new_state, reward, done, info = env.step(action.cpu().numpy())
- total_rewards_ep += reward
- if done:
- break
- state = new_state
- episode_rewards.append(total_rewards_ep)
- mean_reward = np.mean(episode_rewards)
- std_reward = np.std(episode_rewards)
-
- return mean_reward, std_reward
-
-
-def record_video(env, policy, out_directory, fps=30):
- images = []
- done = False
- state = env.reset()
- img = env.render(mode="rgb_array")
- images.append(img)
- while not done:
- state = torch.Tensor(state).to(device)
-        # Sample an action from the policy for the current state
- action, _, _, _ = policy.get_action_and_value(state)
- state, reward, done, info = env.step(
- action.cpu().numpy()
- ) # We directly put next_state = state for recording logic
- img = env.render(mode="rgb_array")
- images.append(img)
- imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)
-
-
-def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):
- """
- Generate the model card for the Hub
- :param model_name: name of the model
-    :param env_id: name of the environment
-    :param mean_reward: mean reward of the agent
-    :param std_reward: standard deviation of the mean reward of the agent
-    :param hyperparameters: training arguments
- """
- # Step 1: Select the tags
- metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)
-
- # Transform the hyperparams namespace to string
- converted_dict = vars(hyperparameters)
- converted_str = str(converted_dict)
- converted_str = converted_str.split(", ")
- converted_str = "\n".join(converted_str)
-
- # Step 2: Generate the model card
- model_card = f"""
- # PPO Agent Playing {env_id}
-
- This is a trained model of a PPO agent playing {env_id}.
-
- # Hyperparameters
- ```python
- {converted_str}
- ```
- """
- return model_card, metadata
-
-
-def generate_metadata(model_name, env_id, mean_reward, std_reward):
- """
- Define the tags for the model card
- :param model_name: name of the model
- :param env_id: name of the environment
-    :param mean_reward: mean reward of the agent
-    :param std_reward: standard deviation of the mean reward of the agent
- """
- metadata = {}
- metadata["tags"] = [
- env_id,
- "ppo",
- "deep-reinforcement-learning",
- "reinforcement-learning",
- "custom-implementation",
- "deep-rl-course",
- ]
-
- # Add metrics
- eval = metadata_eval_result(
- model_pretty_name=model_name,
- task_pretty_name="reinforcement-learning",
- task_id="reinforcement-learning",
- metrics_pretty_name="mean_reward",
- metrics_id="mean_reward",
- metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}",
- dataset_pretty_name=env_id,
- dataset_id=env_id,
- )
-
- # Merges both dictionaries
- metadata = {**metadata, **eval}
-
- return metadata
-
-
-def _save_model_card(local_path, generated_model_card, metadata):
- """Saves a model card for the repository.
- :param local_path: repository directory
- :param generated_model_card: model card generated by _generate_model_card()
- :param metadata: metadata
- """
- readme_path = local_path / "README.md"
- readme = ""
- if readme_path.exists():
- with readme_path.open("r", encoding="utf8") as f:
- readme = f.read()
- else:
- readme = generated_model_card
-
- with readme_path.open("w", encoding="utf-8") as f:
- f.write(readme)
-
- # Save our metrics to Readme metadata
- metadata_save(readme_path, metadata)
-
-
-def _add_logdir(local_path: Path, logdir: Path):
- """Adds a logdir to the repository.
- :param local_path: repository directory
- :param logdir: logdir directory
- """
- if logdir.exists() and logdir.is_dir():
- # Add the logdir to the repository under new dir called logs
- repo_logdir = local_path / "logs"
-
- # Delete current logs if they exist
- if repo_logdir.exists():
- shutil.rmtree(repo_logdir)
-
- # Copy logdir into repo logdir
- shutil.copytree(logdir, repo_logdir)
-```
-
-- Finally, we call this function at the end of the PPO training
-
-```python
-# Create the evaluation environment
-eval_env = gym.make(args.env_id)
-
-package_to_hub(
- repo_id=args.repo_id,
- model=agent, # The model we want to save
- hyperparameters=args,
- eval_env=gym.make(args.env_id),
- logs=f"runs/{run_name}",
-)
-```
-
-- Here's what the final `ppo.py` file looks like:
-
-```python
-# docs and experiment results can be found at https://docs.cleanrl.dev/rl-algorithms/ppo/#ppopy
-
-import argparse
-import os
-import random
-import time
-from distutils.util import strtobool
-
-import gym
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.optim as optim
-from torch.distributions.categorical import Categorical
-from torch.utils.tensorboard import SummaryWriter
-
-from huggingface_hub import HfApi, upload_folder
-from huggingface_hub.repocard import metadata_eval_result, metadata_save
-
-from pathlib import Path
-import datetime
-import tempfile
-import json
-import shutil
-import imageio
-
-from wasabi import Printer
-
-msg = Printer()
-
-
-def parse_args():
- # fmt: off
- parser = argparse.ArgumentParser()
- parser.add_argument("--exp-name", type=str, default=os.path.basename(__file__).rstrip(".py"),
- help="the name of this experiment")
- parser.add_argument("--seed", type=int, default=1,
- help="seed of the experiment")
- parser.add_argument("--torch-deterministic", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
- help="if toggled, `torch.backends.cudnn.deterministic=False`")
- parser.add_argument("--cuda", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
- help="if toggled, cuda will be enabled by default")
- parser.add_argument("--track", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True,
- help="if toggled, this experiment will be tracked with Weights and Biases")
- parser.add_argument("--wandb-project-name", type=str, default="cleanRL",
- help="the wandb's project name")
- parser.add_argument("--wandb-entity", type=str, default=None,
- help="the entity (team) of wandb's project")
- parser.add_argument("--capture-video", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True,
- help="weather to capture videos of the agent performances (check out `videos` folder)")
-
- # Algorithm specific arguments
- parser.add_argument("--env-id", type=str, default="CartPole-v1",
- help="the id of the environment")
- parser.add_argument("--total-timesteps", type=int, default=50000,
- help="total timesteps of the experiments")
- parser.add_argument("--learning-rate", type=float, default=2.5e-4,
- help="the learning rate of the optimizer")
- parser.add_argument("--num-envs", type=int, default=4,
- help="the number of parallel game environments")
- parser.add_argument("--num-steps", type=int, default=128,
- help="the number of steps to run in each environment per policy rollout")
- parser.add_argument("--anneal-lr", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
- help="Toggle learning rate annealing for policy and value networks")
- parser.add_argument("--gae", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
- help="Use GAE for advantage computation")
- parser.add_argument("--gamma", type=float, default=0.99,
- help="the discount factor gamma")
- parser.add_argument("--gae-lambda", type=float, default=0.95,
- help="the lambda for the general advantage estimation")
- parser.add_argument("--num-minibatches", type=int, default=4,
- help="the number of mini-batches")
- parser.add_argument("--update-epochs", type=int, default=4,
- help="the K epochs to update the policy")
- parser.add_argument("--norm-adv", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
- help="Toggles advantages normalization")
- parser.add_argument("--clip-coef", type=float, default=0.2,
- help="the surrogate clipping coefficient")
- parser.add_argument("--clip-vloss", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
- help="Toggles whether or not to use a clipped loss for the value function, as per the paper.")
- parser.add_argument("--ent-coef", type=float, default=0.01,
- help="coefficient of the entropy")
- parser.add_argument("--vf-coef", type=float, default=0.5,
- help="coefficient of the value function")
- parser.add_argument("--max-grad-norm", type=float, default=0.5,
- help="the maximum norm for the gradient clipping")
- parser.add_argument("--target-kl", type=float, default=None,
- help="the target KL divergence threshold")
-
- # Adding HuggingFace argument
- parser.add_argument("--repo-id", type=str, default="ThomasSimonini/ppo-CartPole-v1", help="id of the model repository from the Hugging Face Hub {username/repo_name}")
-
- args = parser.parse_args()
- args.batch_size = int(args.num_envs * args.num_steps)
- args.minibatch_size = int(args.batch_size // args.num_minibatches)
- # fmt: on
- return args
-
-
-def package_to_hub(
- repo_id,
- model,
- hyperparameters,
- eval_env,
- video_fps=30,
- commit_message="Push agent to the Hub",
- token=None,
- logs=None,
-):
- """
- Evaluate, Generate a video and Upload a model to Hugging Face Hub.
- This method does the complete pipeline:
- - It evaluates the model
- - It generates the model card
- - It generates a replay video of the agent
- - It pushes everything to the hub
- :param repo_id: id of the model repository from the Hugging Face Hub
- :param model: trained model
- :param eval_env: environment used to evaluate the agent
-    :param video_fps: number of fps for rendering the video
- :param commit_message: commit message
- :param logs: directory on local machine of tensorboard logs you'd like to upload
- """
- msg.info(
- "This function will save, evaluate, generate a video of your agent, "
- "create a model card and push everything to the hub. "
- "It might take up to 1min. \n "
- "This is a work in progress: if you encounter a bug, please open an issue."
- )
- # Step 1: Clone or create the repo
- repo_url = HfApi().create_repo(
- repo_id=repo_id,
- token=token,
- private=False,
- exist_ok=True,
- )
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- tmpdirname = Path(tmpdirname)
-
- # Step 2: Save the model
- torch.save(model.state_dict(), tmpdirname / "model.pt")
-
- # Step 3: Evaluate the model and build JSON
- mean_reward, std_reward = _evaluate_agent(eval_env, 10, model)
-
- # First get datetime
- eval_datetime = datetime.datetime.now()
- eval_form_datetime = eval_datetime.isoformat()
-
- evaluate_data = {
- "env_id": hyperparameters.env_id,
- "mean_reward": mean_reward,
- "std_reward": std_reward,
- "n_evaluation_episodes": 10,
- "eval_datetime": eval_form_datetime,
- }
-
- # Write a JSON file
- with open(tmpdirname / "results.json", "w") as outfile:
- json.dump(evaluate_data, outfile)
-
- # Step 4: Generate a video
- video_path = tmpdirname / "replay.mp4"
- record_video(eval_env, model, video_path, video_fps)
-
- # Step 5: Generate the model card
- generated_model_card, metadata = _generate_model_card(
- "PPO", hyperparameters.env_id, mean_reward, std_reward, hyperparameters
- )
- _save_model_card(tmpdirname, generated_model_card, metadata)
-
- # Step 6: Add logs if needed
- if logs:
- _add_logdir(tmpdirname, Path(logs))
-
- msg.info(f"Pushing repo {repo_id} to the Hugging Face Hub")
-
- repo_url = upload_folder(
- repo_id=repo_id,
- folder_path=tmpdirname,
- path_in_repo="",
- commit_message=commit_message,
- token=token,
- )
-
- msg.info(f"Your model is pushed to the Hub. You can view your model here: {repo_url}")
- return repo_url
-
-
-def _evaluate_agent(env, n_eval_episodes, policy):
- """
-    Evaluate the agent for ``n_eval_episodes`` episodes and return the average reward and the standard deviation of the reward.
-    :param env: The evaluation environment
-    :param n_eval_episodes: Number of episodes to evaluate the agent
- :param policy: The agent
- """
- episode_rewards = []
- for episode in range(n_eval_episodes):
- state = env.reset()
- step = 0
- done = False
- total_rewards_ep = 0
-
- while done is False:
- state = torch.Tensor(state).to(device)
- action, _, _, _ = policy.get_action_and_value(state)
- new_state, reward, done, info = env.step(action.cpu().numpy())
- total_rewards_ep += reward
- if done:
- break
- state = new_state
- episode_rewards.append(total_rewards_ep)
- mean_reward = np.mean(episode_rewards)
- std_reward = np.std(episode_rewards)
-
- return mean_reward, std_reward
-
-
-def record_video(env, policy, out_directory, fps=30):
- images = []
- done = False
- state = env.reset()
- img = env.render(mode="rgb_array")
- images.append(img)
- while not done:
- state = torch.Tensor(state).to(device)
-        # Sample an action from the policy for the current state
- action, _, _, _ = policy.get_action_and_value(state)
- state, reward, done, info = env.step(
- action.cpu().numpy()
- ) # We directly put next_state = state for recording logic
- img = env.render(mode="rgb_array")
- images.append(img)
- imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)
-
-
-def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):
- """
- Generate the model card for the Hub
- :param model_name: name of the model
-    :param env_id: name of the environment
-    :param mean_reward: mean reward of the agent
-    :param std_reward: standard deviation of the mean reward of the agent
-    :param hyperparameters: training arguments
- """
- # Step 1: Select the tags
- metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)
-
- # Transform the hyperparams namespace to string
- converted_dict = vars(hyperparameters)
- converted_str = str(converted_dict)
- converted_str = converted_str.split(", ")
- converted_str = "\n".join(converted_str)
-
- # Step 2: Generate the model card
- model_card = f"""
- # PPO Agent Playing {env_id}
-
- This is a trained model of a PPO agent playing {env_id}.
-
- # Hyperparameters
- ```python
- {converted_str}
- ```
- """
- return model_card, metadata
-
-
-def generate_metadata(model_name, env_id, mean_reward, std_reward):
- """
- Define the tags for the model card
- :param model_name: name of the model
- :param env_id: name of the environment
-    :param mean_reward: mean reward of the agent
-    :param std_reward: standard deviation of the mean reward of the agent
- """
- metadata = {}
- metadata["tags"] = [
- env_id,
- "ppo",
- "deep-reinforcement-learning",
- "reinforcement-learning",
- "custom-implementation",
- "deep-rl-course",
- ]
-
- # Add metrics
- eval = metadata_eval_result(
- model_pretty_name=model_name,
- task_pretty_name="reinforcement-learning",
- task_id="reinforcement-learning",
- metrics_pretty_name="mean_reward",
- metrics_id="mean_reward",
- metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}",
- dataset_pretty_name=env_id,
- dataset_id=env_id,
- )
-
- # Merges both dictionaries
- metadata = {**metadata, **eval}
-
- return metadata
-
-
-def _save_model_card(local_path, generated_model_card, metadata):
- """Saves a model card for the repository.
- :param local_path: repository directory
- :param generated_model_card: model card generated by _generate_model_card()
- :param metadata: metadata
- """
- readme_path = local_path / "README.md"
- readme = ""
- if readme_path.exists():
- with readme_path.open("r", encoding="utf8") as f:
- readme = f.read()
- else:
- readme = generated_model_card
-
- with readme_path.open("w", encoding="utf-8") as f:
- f.write(readme)
-
- # Save our metrics to Readme metadata
- metadata_save(readme_path, metadata)
-
-
-def _add_logdir(local_path: Path, logdir: Path):
- """Adds a logdir to the repository.
- :param local_path: repository directory
- :param logdir: logdir directory
- """
- if logdir.exists() and logdir.is_dir():
- # Add the logdir to the repository under new dir called logs
- repo_logdir = local_path / "logs"
-
- # Delete current logs if they exist
- if repo_logdir.exists():
- shutil.rmtree(repo_logdir)
-
- # Copy logdir into repo logdir
- shutil.copytree(logdir, repo_logdir)
-
-
-def make_env(env_id, seed, idx, capture_video, run_name):
- def thunk():
- env = gym.make(env_id)
- env = gym.wrappers.RecordEpisodeStatistics(env)
- if capture_video:
- if idx == 0:
- env = gym.wrappers.RecordVideo(env, f"videos/{run_name}")
- env.seed(seed)
- env.action_space.seed(seed)
- env.observation_space.seed(seed)
- return env
-
- return thunk
-
-
-def layer_init(layer, std=np.sqrt(2), bias_const=0.0):
- torch.nn.init.orthogonal_(layer.weight, std)
- torch.nn.init.constant_(layer.bias, bias_const)
- return layer
-
-
-class Agent(nn.Module):
- def __init__(self, envs):
- super().__init__()
- self.critic = nn.Sequential(
- layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),
- nn.Tanh(),
- layer_init(nn.Linear(64, 64)),
- nn.Tanh(),
- layer_init(nn.Linear(64, 1), std=1.0),
- )
- self.actor = nn.Sequential(
- layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),
- nn.Tanh(),
- layer_init(nn.Linear(64, 64)),
- nn.Tanh(),
- layer_init(nn.Linear(64, envs.single_action_space.n), std=0.01),
- )
-
- def get_value(self, x):
- return self.critic(x)
-
- def get_action_and_value(self, x, action=None):
- logits = self.actor(x)
- probs = Categorical(logits=logits)
- if action is None:
- action = probs.sample()
- return action, probs.log_prob(action), probs.entropy(), self.critic(x)
-
-
-if __name__ == "__main__":
- args = parse_args()
- run_name = f"{args.env_id}__{args.exp_name}__{args.seed}__{int(time.time())}"
- if args.track:
- import wandb
-
- wandb.init(
- project=args.wandb_project_name,
- entity=args.wandb_entity,
- sync_tensorboard=True,
- config=vars(args),
- name=run_name,
- monitor_gym=True,
- save_code=True,
- )
- writer = SummaryWriter(f"runs/{run_name}")
- writer.add_text(
- "hyperparameters",
- "|param|value|\n|-|-|\n%s" % ("\n".join([f"|{key}|{value}|" for key, value in vars(args).items()])),
- )
-
- # TRY NOT TO MODIFY: seeding
- random.seed(args.seed)
- np.random.seed(args.seed)
- torch.manual_seed(args.seed)
- torch.backends.cudnn.deterministic = args.torch_deterministic
-
- device = torch.device("cuda" if torch.cuda.is_available() and args.cuda else "cpu")
-
- # env setup
- envs = gym.vector.SyncVectorEnv(
- [make_env(args.env_id, args.seed + i, i, args.capture_video, run_name) for i in range(args.num_envs)]
- )
- assert isinstance(envs.single_action_space, gym.spaces.Discrete), "only discrete action space is supported"
-
- agent = Agent(envs).to(device)
- optimizer = optim.Adam(agent.parameters(), lr=args.learning_rate, eps=1e-5)
-
- # ALGO Logic: Storage setup
- obs = torch.zeros((args.num_steps, args.num_envs) + envs.single_observation_space.shape).to(device)
- actions = torch.zeros((args.num_steps, args.num_envs) + envs.single_action_space.shape).to(device)
- logprobs = torch.zeros((args.num_steps, args.num_envs)).to(device)
- rewards = torch.zeros((args.num_steps, args.num_envs)).to(device)
- dones = torch.zeros((args.num_steps, args.num_envs)).to(device)
- values = torch.zeros((args.num_steps, args.num_envs)).to(device)
-
- # TRY NOT TO MODIFY: start the game
- global_step = 0
- start_time = time.time()
- next_obs = torch.Tensor(envs.reset()).to(device)
- next_done = torch.zeros(args.num_envs).to(device)
- num_updates = args.total_timesteps // args.batch_size
-
- for update in range(1, num_updates + 1):
- # Annealing the rate if instructed to do so.
- if args.anneal_lr:
- frac = 1.0 - (update - 1.0) / num_updates
- lrnow = frac * args.learning_rate
- optimizer.param_groups[0]["lr"] = lrnow
-
- for step in range(0, args.num_steps):
- global_step += 1 * args.num_envs
- obs[step] = next_obs
- dones[step] = next_done
-
- # ALGO LOGIC: action logic
- with torch.no_grad():
- action, logprob, _, value = agent.get_action_and_value(next_obs)
- values[step] = value.flatten()
- actions[step] = action
- logprobs[step] = logprob
-
- # TRY NOT TO MODIFY: execute the game and log data.
- next_obs, reward, done, info = envs.step(action.cpu().numpy())
- rewards[step] = torch.tensor(reward).to(device).view(-1)
- next_obs, next_done = torch.Tensor(next_obs).to(device), torch.Tensor(done).to(device)
-
- for item in info:
- if "episode" in item.keys():
- print(f"global_step={global_step}, episodic_return={item['episode']['r']}")
- writer.add_scalar("charts/episodic_return", item["episode"]["r"], global_step)
- writer.add_scalar("charts/episodic_length", item["episode"]["l"], global_step)
- break
-
- # bootstrap value if not done
- with torch.no_grad():
- next_value = agent.get_value(next_obs).reshape(1, -1)
- if args.gae:
- advantages = torch.zeros_like(rewards).to(device)
- lastgaelam = 0
- for t in reversed(range(args.num_steps)):
- if t == args.num_steps - 1:
- nextnonterminal = 1.0 - next_done
- nextvalues = next_value
- else:
- nextnonterminal = 1.0 - dones[t + 1]
- nextvalues = values[t + 1]
- delta = rewards[t] + args.gamma * nextvalues * nextnonterminal - values[t]
- advantages[t] = lastgaelam = delta + args.gamma * args.gae_lambda * nextnonterminal * lastgaelam
- returns = advantages + values
- else:
- returns = torch.zeros_like(rewards).to(device)
- for t in reversed(range(args.num_steps)):
- if t == args.num_steps - 1:
- nextnonterminal = 1.0 - next_done
- next_return = next_value
- else:
- nextnonterminal = 1.0 - dones[t + 1]
- next_return = returns[t + 1]
- returns[t] = rewards[t] + args.gamma * nextnonterminal * next_return
- advantages = returns - values
-
- # flatten the batch
- b_obs = obs.reshape((-1,) + envs.single_observation_space.shape)
- b_logprobs = logprobs.reshape(-1)
- b_actions = actions.reshape((-1,) + envs.single_action_space.shape)
- b_advantages = advantages.reshape(-1)
- b_returns = returns.reshape(-1)
- b_values = values.reshape(-1)
-
- # Optimizing the policy and value network
- b_inds = np.arange(args.batch_size)
- clipfracs = []
- for epoch in range(args.update_epochs):
- np.random.shuffle(b_inds)
- for start in range(0, args.batch_size, args.minibatch_size):
- end = start + args.minibatch_size
- mb_inds = b_inds[start:end]
-
- _, newlogprob, entropy, newvalue = agent.get_action_and_value(
- b_obs[mb_inds], b_actions.long()[mb_inds]
- )
- logratio = newlogprob - b_logprobs[mb_inds]
- ratio = logratio.exp()
-
- with torch.no_grad():
- # calculate approx_kl http://joschu.net/blog/kl-approx.html
- old_approx_kl = (-logratio).mean()
- approx_kl = ((ratio - 1) - logratio).mean()
- clipfracs += [((ratio - 1.0).abs() > args.clip_coef).float().mean().item()]
-
- mb_advantages = b_advantages[mb_inds]
- if args.norm_adv:
- mb_advantages = (mb_advantages - mb_advantages.mean()) / (mb_advantages.std() + 1e-8)
-
- # Policy loss
- pg_loss1 = -mb_advantages * ratio
- pg_loss2 = -mb_advantages * torch.clamp(ratio, 1 - args.clip_coef, 1 + args.clip_coef)
- pg_loss = torch.max(pg_loss1, pg_loss2).mean()
-
- # Value loss
- newvalue = newvalue.view(-1)
- if args.clip_vloss:
- v_loss_unclipped = (newvalue - b_returns[mb_inds]) ** 2
- v_clipped = b_values[mb_inds] + torch.clamp(
- newvalue - b_values[mb_inds],
- -args.clip_coef,
- args.clip_coef,
- )
- v_loss_clipped = (v_clipped - b_returns[mb_inds]) ** 2
- v_loss_max = torch.max(v_loss_unclipped, v_loss_clipped)
- v_loss = 0.5 * v_loss_max.mean()
- else:
- v_loss = 0.5 * ((newvalue - b_returns[mb_inds]) ** 2).mean()
-
- entropy_loss = entropy.mean()
- loss = pg_loss - args.ent_coef * entropy_loss + v_loss * args.vf_coef
-
- optimizer.zero_grad()
- loss.backward()
- nn.utils.clip_grad_norm_(agent.parameters(), args.max_grad_norm)
- optimizer.step()
-
- if args.target_kl is not None:
- if approx_kl > args.target_kl:
- break
-
- y_pred, y_true = b_values.cpu().numpy(), b_returns.cpu().numpy()
- var_y = np.var(y_true)
- explained_var = np.nan if var_y == 0 else 1 - np.var(y_true - y_pred) / var_y
-
- # TRY NOT TO MODIFY: record rewards for plotting purposes
- writer.add_scalar("charts/learning_rate", optimizer.param_groups[0]["lr"], global_step)
- writer.add_scalar("losses/value_loss", v_loss.item(), global_step)
- writer.add_scalar("losses/policy_loss", pg_loss.item(), global_step)
- writer.add_scalar("losses/entropy", entropy_loss.item(), global_step)
- writer.add_scalar("losses/old_approx_kl", old_approx_kl.item(), global_step)
- writer.add_scalar("losses/approx_kl", approx_kl.item(), global_step)
- writer.add_scalar("losses/clipfrac", np.mean(clipfracs), global_step)
- writer.add_scalar("losses/explained_variance", explained_var, global_step)
- print("SPS:", int(global_step / (time.time() - start_time)))
- writer.add_scalar("charts/SPS", int(global_step / (time.time() - start_time)), global_step)
-
- envs.close()
- writer.close()
-
- # Create the evaluation environment
- eval_env = gym.make(args.env_id)
-
- package_to_hub(
- repo_id=args.repo_id,
- model=agent, # The model we want to save
- hyperparameters=args,
- eval_env=gym.make(args.env_id),
- logs=f"runs/{run_name}",
- )
-```
-
-To be able to share your model with the community there are three more steps to follow:
-
-1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join
-
-2️⃣ Sign in, then store your authentication token from the Hugging Face website.
-- Create a new token (https://huggingface.co/settings/tokens) **with write role**
-
-
-
-- Copy the token
-- Run the cell below and paste the token
-
-```python
-from huggingface_hub import notebook_login
-notebook_login()
-!git config --global credential.helper store
-```
-
-If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`
-
-## Let's start the training 🔥
-- Now that you've coded PPO from scratch and added the Hugging Face integration, we're ready to start training 🔥
-
-- First, you need to copy all of your code into a new file called `ppo.py`
-
-
-
-
-
-- Now we just need to run this Python script using `python ppo.py` with the additional parameters we defined with `argparse`
-
-- You should also tune other hyperparameters; otherwise, the training will not be very stable (an example with tuned values follows the cell below).
-
-```python
-!python ppo.py --env-id="LunarLander-v2" --repo-id="YOUR_REPO_ID" --total-timesteps=50000
-```
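-
-For example, here is a run with longer training and a larger rollout (these particular values are only a suggestion, not required settings); all of the flags are defined in `parse_args()` above:
-
-```python
-!python ppo.py --env-id="LunarLander-v2" --repo-id="YOUR_REPO_ID" --total-timesteps=1000000 --num-envs=8 --num-steps=256 --num-minibatches=8 --ent-coef=0.01
-```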
-
-## Some additional challenges 🏆
-The best way to learn **is to try things on your own**! Why not try another environment?
-
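-For example, since `CartPole-v1` is the script's default environment, you could sanity-check your implementation on it before moving to harder tasks (the values below are just the defaults):
-
-```python
-!python ppo.py --env-id="CartPole-v1" --repo-id="YOUR_REPO_ID" --total-timesteps=50000
-```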
-
-See you in Unit 8, Part 2, where we're going to train agents to play Doom 🔥
-## Keep learning, stay awesome 🤗
\ No newline at end of file
diff --git a/units/en/unit8/hands-on-cleanrl.mdx b/units/en/unit8/hands-on-cleanrl.mdx
index 65a1270..88d1033 100644
--- a/units/en/unit8/hands-on-cleanrl.mdx
+++ b/units/en/unit8/hands-on-cleanrl.mdx
@@ -39,8 +39,9 @@ LunarLander-v2 is the first environment you used when you started this course. A
Let's get started! 🚀
-
+The Colab notebook:
+[](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit8/unit8_part1.ipynb)
# Unit 8: Proximal Policy Gradient (PPO) with PyTorch 🤖
diff --git a/units/en/unit8/hands-on-sf.mdx b/units/en/unit8/hands-on-sf.mdx
index a93b860..6bde46e 100644
--- a/units/en/unit8/hands-on-sf.mdx
+++ b/units/en/unit8/hands-on-sf.mdx
@@ -1 +1,430 @@
-# Hands-on Doom
+# Hands-on: advanced Deep Reinforcement Learning. Using Sample Factory to play Doom from pixels
+
+
+
+The Colab notebook:
+[](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit8/unit8_part2.ipynb)
+
+# Unit 8 Part 2: Advanced Deep Reinforcement Learning. Using Sample Factory to play Doom from pixels
+
+
+
+In this notebook, we will learn how to train a Deep Neural Network to collect objects in a 3D environment based on the game of Doom; a video of the resulting policy is shown below. We train this policy using [Sample Factory](https://www.samplefactory.dev/), an asynchronous implementation of the PPO algorithm.
+
+Please note the following points:
+
+* [Sample Factory](https://www.samplefactory.dev/) is an advanced RL framework and **only functions on Linux and Mac** (not Windows).
+
+* The framework performs best on a **GPU machine with many CPU cores**, where it can achieve speeds of 100k interactions per second. The resources available on a standard Colab notebook **limit the performance of this library**. So the speed in this setting **does not reflect the real-world performance**.
+* Benchmarks for Sample Factory are available in a number of settings; check out the [examples](https://github.com/alex-petrenko/sample-factory/tree/master/sf_examples) if you want to find out more.
+
+
+```python
+from IPython.display import HTML
+
+HTML(
+ """"""
+)
+```
+
+To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push one model:
+
+- `doom_health_gathering_supreme`: get a result of >= 5.
+
+To find your result, go to the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) and find your model; **the result = mean_reward - std of reward**
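+
+For example (made-up numbers, just to illustrate the formula):
+
+```python
+mean_reward, std_reward = 8.5, 2.0  # hypothetical evaluation results
+result = mean_reward - std_reward   # 6.5 >= 5, so this run would validate the hands-on
+```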
+
+If you don't find your model, **go to the bottom of the page and click on the refresh button**
+
+For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process
+
+## Set the GPU 💪
+
+- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`
+
+
+
+- `Hardware Accelerator > GPU`
+
+
+
+Before starting to train our agent, let's **study the library and environments we're going to use**.
+
+## Sample Factory
+
+[Sample Factory](https://www.samplefactory.dev/) is one of the **fastest RL libraries focused on very efficient synchronous and asynchronous implementations of policy gradients (PPO)**.
+
+Sample Factory is thoroughly **tested, used by many researchers and practitioners**, and is actively maintained. The implementation is known to **reach SOTA performance in a variety of domains while minimizing RL experiment training time and hardware requirements**.
+
+
+
+### Key features
+
+- Highly optimized algorithm [architecture](https://www.samplefactory.dev/06-architecture/overview/) for maximum learning throughput
+- [Synchronous and asynchronous](https://www.samplefactory.dev/07-advanced-topics/sync-async/) training regimes
+- [Serial (single-process) mode](https://www.samplefactory.dev/07-advanced-topics/serial-mode/) for easy debugging
+- Optimal performance in both CPU-based and [GPU-accelerated environments](https://www.samplefactory.dev/09-environment-integrations/isaacgym/)
+- Single- & multi-agent training, self-play, supports [training multiple policies](https://www.samplefactory.dev/07-advanced-topics/multi-policy-training/) at once on one or many GPUs
+- Population-Based Training ([PBT](https://www.samplefactory.dev/07-advanced-topics/pbt/))
+- Discrete, continuous, hybrid action spaces
+- Vector-based, image-based, dictionary observation spaces
+- Automatically creates a model architecture by parsing action/observation space specification. Supports [custom model architectures](https://www.samplefactory.dev/03-customization/custom-models/)
+- Designed to be imported into other projects, [custom environments](https://www.samplefactory.dev/03-customization/custom-environments/) are first-class citizens
+- Detailed [WandB and Tensorboard summaries](https://www.samplefactory.dev/05-monitoring/metrics-reference/), [custom metrics](https://www.samplefactory.dev/05-monitoring/custom-metrics/)
+- [HuggingFace 🤗 integration](https://www.samplefactory.dev/10-huggingface/huggingface/) (upload trained models and metrics to the Hub)
+- [Multiple](https://www.samplefactory.dev/09-environment-integrations/mujoco/) [example](https://www.samplefactory.dev/09-environment-integrations/atari/) [environment](https://www.samplefactory.dev/09-environment-integrations/vizdoom/) [integrations](https://www.samplefactory.dev/09-environment-integrations/dmlab/) with tuned parameters and trained models
+
+All of the above policies are available on the 🤗 Hub. Search for the tag [sample-factory](https://huggingface.co/models?library=sample-factory&sort=downloads)
+
+### How sample-factory works
+
+Sample-factory is one of the **most highly optimized RL implementations available to the community**.
+
+It works by **spawning multiple processes that run rollout workers, inference workers and a learner worker**.
+
+The *workers* **communicate through shared memory, which lowers the communication cost between processes**.
+
+The *rollout workers* interact with the environment and send observations to the *inference workers*.
+
+The *inference workers* query a fixed version of the policy and **send actions back to the rollout workers**.
+
+After *k* steps, the rollout workers send a trajectory of experience to the learner worker, **which uses it to update the agent’s policy network**.
+
+
+
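+To make this data flow concrete, here is a purely illustrative, single-process toy (not the Sample Factory API; names and numbers are made up) that mimics the exchange between rollout workers, an inference worker and a learner using Python threads and queues instead of processes and shared memory:
+
+```python
+import queue
+import random
+import threading
+
+obs_queue = queue.Queue()                              # rollout workers -> inference worker
+action_queues = {0: queue.Queue(), 1: queue.Queue()}   # inference worker -> each rollout worker
+trajectory_queue = queue.Queue()                       # rollout workers -> learner
+
+K_STEPS = 4  # ship a trajectory to the learner every k steps
+
+
+def rollout_worker(worker_id, num_steps=8):
+    trajectory, obs = [], random.random()              # obs stands in for an env observation
+    for _ in range(num_steps):
+        obs_queue.put((worker_id, obs))                # ask the inference worker for an action
+        action = action_queues[worker_id].get()        # receive the action
+        next_obs, reward = random.random(), 1.0        # stands in for env.step(action)
+        trajectory.append((obs, action, reward))
+        obs = next_obs
+        if len(trajectory) == K_STEPS:                 # after k steps, send experience to the learner
+            trajectory_queue.put(trajectory)
+            trajectory = []
+
+
+def inference_worker(total_requests):
+    for _ in range(total_requests):
+        worker_id, obs = obs_queue.get()
+        action = 0 if obs < 0.5 else 1                 # stands in for a forward pass of a fixed policy
+        action_queues[worker_id].put(action)
+
+
+def learner(total_trajectories):
+    for _ in range(total_trajectories):
+        trajectory = trajectory_queue.get()
+        print(f"learner received {len(trajectory)} transitions -> would update the policy here")
+
+
+threads = [
+    threading.Thread(target=rollout_worker, args=(0,)),
+    threading.Thread(target=rollout_worker, args=(1,)),
+    threading.Thread(target=inference_worker, args=(16,)),  # 2 workers x 8 steps = 16 requests
+    threading.Thread(target=learner, args=(4,)),             # 2 workers x 2 trajectories each
+]
+for t in threads:
+    t.start()
+for t in threads:
+    t.join()
+```
+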
+### Actor Critic models in Sample-factory
+
+Actor Critic models in Sample Factory are composed of three components:
+
+- **Encoder** - Process input observations (images, vectors) and map them to a vector. This is the part of the model you will most likely want to customize.
+- **Core** - Integrate vectors from one or more encoders; can optionally include a single- or multi-layer LSTM/GRU in a memory-based agent.
+- **Decoder** - Apply additional layers to the output of the model core before computing the policy and value outputs.
+
+The library has been designed to automatically support any observation and action spaces. Users can easily add their custom models. You can find out more in the [documentation](https://www.samplefactory.dev/03-customization/custom-models/#actor-critic-models-in-sample-factory).
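+
+As a rough illustration of this Encoder -> Core -> Decoder decomposition, here is a minimal PyTorch sketch (this is not the Sample Factory API; shapes and layer sizes are arbitrary assumptions):
+
+```python
+import torch
+import torch.nn as nn
+
+
+class TinyActorCritic(nn.Module):
+    def __init__(self, obs_channels=3, num_actions=3, hidden_size=128):
+        super().__init__()
+        # Encoder: maps raw observations (here an image) to a flat vector
+        self.encoder = nn.Sequential(
+            nn.Conv2d(obs_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
+            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
+            nn.Flatten(),
+            nn.LazyLinear(hidden_size), nn.ReLU(),
+        )
+        # Core: integrates encoder outputs; optionally recurrent (a GRU here)
+        self.core = nn.GRU(hidden_size, hidden_size, batch_first=True)
+        # Decoder: extra layers applied before the policy and value heads
+        self.decoder = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU())
+        self.policy_head = nn.Linear(hidden_size, num_actions)
+        self.value_head = nn.Linear(hidden_size, 1)
+
+    def forward(self, obs, core_state=None):
+        x = self.encoder(obs)                                   # (batch, hidden)
+        x, core_state = self.core(x.unsqueeze(1), core_state)   # add a time dimension of 1
+        x = self.decoder(x.squeeze(1))
+        return self.policy_head(x), self.value_head(x), core_state
+
+
+# One forward pass on a dummy batch of 72x128 RGB observations
+model = TinyActorCritic()
+logits, value, _ = model(torch.zeros(4, 3, 72, 128))
+print(logits.shape, value.shape)  # torch.Size([4, 3]) torch.Size([4, 1])
+```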
+
+## ViZDoom
+
+[ViZDoom](https://vizdoom.cs.put.edu.pl/) is an **open-source python interface for the Doom Engine**.
+
+The library was created in 2016 by Marek Wydmuch and Michal Kempka at the Institute of Computing Science, Poznan University of Technology, Poland.
+
+The library enables the **training of agents directly from the screen pixels in a number of scenarios**, including team deathmatch, shown in the video below. Because the ViZDoom environment is based on a game that was created in the 90s, it can be run on modern hardware at accelerated speeds, **allowing us to learn complex AI behaviors fairly quickly**.
+
+The library includes features such as:
+
+- Multi-platform (Linux, macOS, Windows),
+- API for Python and C++,
+- [OpenAI Gym](https://www.gymlibrary.dev/) environment wrappers
+- Easy-to-create custom scenarios (visual editors, scripting language, and examples available),
+- Async and sync single-player and multiplayer modes,
+- Lightweight (few MBs) and fast (up to 7000 fps in sync mode, single-threaded),
+- Customizable resolution and rendering parameters,
+- Access to the depth buffer (3D vision),
+- Automatic labeling of game objects visible in the frame,
+- Access to the audio buffer
+- Access to the list of actors/objects and map geometry,
+- Off-screen rendering and episode recording,
+- Time scaling in async mode.
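+
+To give a feel for what "training directly from the screen pixels" means at the lowest level, here is a minimal sketch of the raw ViZDoom Python API with a random policy (independent of Sample Factory; the scenario config path is an assumption, adjust it to where the `.cfg` files live in your installation):
+
+```python
+import random
+import vizdoom as vzd
+
+game = vzd.DoomGame()
+game.load_config("health_gathering_supreme.cfg")  # path to a scenario .cfg file
+game.set_window_visible(False)                    # headless rendering, e.g. on Colab
+game.init()
+
+game.new_episode()
+while not game.is_episode_finished():
+    state = game.get_state()
+    pixels = state.screen_buffer                  # the raw screen pixels the agent observes
+    # One-hot action over the available buttons (e.g. turn left, turn right, move forward)
+    action = random.choice([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
+    reward = game.make_action(action)
+
+print("Episode return:", game.get_total_reward())
+game.close()
+```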
+
+## We first need to install some dependencies that are required for the ViZDoom environment
+
+Now that our Colab runtime is set up, we can start by installing the dependencies required to run ViZDoom on Linux.
+
+If you are following along on a Mac, you will want to follow the installation instructions on the [GitHub page](https://github.com/Farama-Foundation/ViZDoom/blob/master/doc/Quickstart.md#-quickstart-for-macos-and-anaconda3-python-36).
+
+```python
+# Install ViZDoom deps from
+# https://github.com/mwydmuch/ViZDoom/blob/master/doc/Building.md#-linux
+
+apt-get install build-essential zlib1g-dev libsdl2-dev libjpeg-dev \
+nasm tar libbz2-dev libgtk2.0-dev cmake git libfluidsynth-dev libgme-dev \
+libopenal-dev timidity libwildmidi-dev unzip ffmpeg
+
+# Boost libraries
+apt-get install libboost-all-dev
+
+# Lua binding dependencies
+apt-get install liblua5.1-dev
+```
+
+## Then we can install Sample Factory and ViZDoom
+
+- This can take 7min
+
+```bash
+pip install sample-factory
+pip install vizdoom
+```
+
+## Setting up the Doom Environment in sample-factory
+
+```python
+import functools
+
+from sample_factory.algo.utils.context import global_model_factory
+from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
+from sample_factory.envs.env_utils import register_env
+from sample_factory.train import run_rl
+
+from sf_examples.vizdoom.doom.doom_model import make_vizdoom_encoder
+from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults
+from sf_examples.vizdoom.doom.doom_utils import DOOM_ENVS, make_doom_env_from_spec
+
+
+# Registers all the ViZDoom environments
+def register_vizdoom_envs():
+ for env_spec in DOOM_ENVS:
+ make_env_func = functools.partial(make_doom_env_from_spec, env_spec)
+ register_env(env_spec.name, make_env_func)
+
+
+# Sample Factory allows the registration of a custom Neural Network architecture
+# See https://github.com/alex-petrenko/sample-factory/blob/master/sf_examples/vizdoom/doom/doom_model.py for more details
+def register_vizdoom_models():
+ global_model_factory().register_encoder_factory(make_vizdoom_encoder)
+
+
+def register_vizdoom_components():
+ register_vizdoom_envs()
+ register_vizdoom_models()
+
+
+# parse the command line args and create a config
+def parse_vizdoom_cfg(argv=None, evaluation=False):
+ parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
+ # parameters specific to Doom envs
+ add_doom_env_args(parser)
+ # override Doom default values for algo parameters
+ doom_override_defaults(parser)
+ # second parsing pass yields the final configuration
+ final_cfg = parse_full_cfg(parser, argv)
+ return final_cfg
+```
+
+Now that the setup is complete, we can train the agent. We have chosen to learn a ViZDoom task called `Health Gathering Supreme`.
+
+### The scenario: Health Gathering Supreme
+
+
+
+
+
+The objective of this scenario is to **teach the agent how to survive without knowing what makes it survive**. The agent knows only that **life is precious** and death is bad, so **it must learn what prolongs its existence and that its health is connected with it**.
+
+The map is a rectangle with walls and a green, acidic floor that **hurts the player periodically**. Initially, some medkits are spread uniformly over the map, and a new medkit falls from the sky every now and then. **Medkits heal a portion of the player's health**; to survive, the agent needs to pick them up. The episode finishes when the player dies or on timeout.
+
+Further configuration:
+- Living_reward = 1
+- 3 available buttons: turn left, turn right, move forward
+- 1 available game variable: HEALTH
+- death penalty = 100
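+
+As a rough, illustrative reading of this reward configuration (the step count below is made up): the return grows by the living reward each step survived, and the death penalty is subtracted if the agent dies.
+
+```python
+living_reward, death_penalty = 1, 100
+steps_survived = 525                                              # hypothetical episode length
+episode_return = steps_survived * living_reward - death_penalty   # 425 if the agent dies
+print(episode_return)
+```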
+
+You can find out more about the scenarios available in ViZDoom [here](https://github.com/Farama-Foundation/ViZDoom/tree/master/scenarios).
+
+There are also a number of more complex scenarios that have been created for ViZDoom, such as the ones detailed on [this GitHub page](https://github.com/edbeeching/3d_control_deep_rl).
+
+
+
+## Training the agent
+
+- We're going to train the agent for 4,000,000 steps. This will take approximately 20 minutes.
+
+```python
+## Start the training; this should take around 20 minutes
+register_vizdoom_components()
+
+# The scenario we train on today is health gathering
+# other scenarios include "doom_basic", "doom_two_colors_easy", "doom_dm", "doom_dwango5", "doom_my_way_home", "doom_deadly_corridor", "doom_defend_the_center", "doom_defend_the_line"
+env = "doom_health_gathering_supreme"
+cfg = parse_vizdoom_cfg(
+ argv=[f"--env={env}", "--num_workers=8", "--num_envs_per_worker=4", "--train_for_env_steps=4000000"]
+)
+
+status = run_rl(cfg)
+```
+
+## Let's take a look at the performance of the trained policy and output a video of the agent.
+
+```python
+from sample_factory.enjoy import enjoy
+
+cfg = parse_vizdoom_cfg(
+ argv=[f"--env={env}", "--num_workers=1", "--save_video", "--no_render", "--max_num_episodes=10"], evaluation=True
+)
+status = enjoy(cfg)
+```
+
+## Now let's visualize the performance of the agent
+
+```python
+from base64 import b64encode
+from IPython.display import HTML
+
+mp4 = open("/content/train_dir/default_experiment/replay.mp4", "rb").read()
+data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
+HTML(
+ """
+
+"""
+ % data_url
+)
+```
+
+The agent has learned something, but its performance could be better. We would clearly need to train for longer. But let's upload this model to the Hub.
+
+## Now let's upload your checkpoint and video to the Hugging Face Hub
+
+
+
+
+To be able to share your model with the community there are three more steps to follow:
+
+1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join
+
+2️⃣ Sign in, then store your authentication token from the Hugging Face website.
+- Create a new token (https://huggingface.co/settings/tokens) **with write role**
+
+
+
+- Copy the token
+- Run the cell below and paste the token
+
+If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`
+
+```python
+from huggingface_hub import notebook_login
+notebook_login()
+!git config --global credential.helper store
+```
+
+```python
+from sample_factory.enjoy import enjoy
+
+hf_username = "ThomasSimonini" # insert your HuggingFace username here
+
+cfg = parse_vizdoom_cfg(
+ argv=[
+ f"--env={env}",
+ "--num_workers=1",
+ "--save_video",
+ "--no_render",
+ "--max_num_episodes=10",
+ "--max_num_frames=100000",
+ "--push_to_hub",
+ f"--hf_repository={hf_username}/rl_course_vizdoom_health_gathering_supreme",
+ ],
+ evaluation=True,
+)
+status = enjoy(cfg)
+```
+
+## Let's load another model
+
+
+
+
+This agent's performance was good, but we can do better! Let's download and visualize an agent trained for 10B timesteps from the Hub.
+
+```bash
+#download the agent from the hub
+python -m sample_factory.huggingface.load_from_hub -r edbeeching/doom_health_gathering_supreme_2222 -d ./train_dir
+```
+
+```bash
+ls train_dir/doom_health_gathering_supreme_2222
+```
+
+```python
+env = "doom_health_gathering_supreme"
+cfg = parse_vizdoom_cfg(
+ argv=[
+ f"--env={env}",
+ "--num_workers=1",
+ "--save_video",
+ "--no_render",
+ "--max_num_episodes=10",
+ "--experiment=doom_health_gathering_supreme_2222",
+ "--train_dir=train_dir",
+ ],
+ evaluation=True,
+)
+status = enjoy(cfg)
+```
+
+```python
+mp4 = open("/content/train_dir/doom_health_gathering_supreme_2222/replay.mp4", "rb").read()
+data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
+HTML(
+ """
+
+"""
+ % data_url
+)
+```
+
+## Some additional challenges 🏆: Doom Deathmatch
+
+Training an agent to play a Doom deathmatch **takes many hours on a beefier machine than is available in Colab**.
+
+Fortunately, we have **already trained an agent in this scenario, and it is available on the 🤗 Hub!** Let’s download the model and visualize the agent’s performance.
+
+```bash
+# Download the agent from the hub
+python -m sample_factory.huggingface.load_from_hub -r edbeeching/doom_deathmatch_bots_2222 -d ./train_dir
+```
+
+Since the agent plays for a long time, the video generation can take **10 minutes**.
+
+```python
+from sample_factory.enjoy import enjoy
+
+register_vizdoom_components()
+env = "doom_deathmatch_bots"
+cfg = parse_vizdoom_cfg(
+ argv=[
+ f"--env={env}",
+ "--num_workers=1",
+ "--save_video",
+ "--no_render",
+ "--max_num_episodes=1",
+ "--experiment=doom_deathmatch_bots_2222",
+ "--train_dir=train_dir",
+ ],
+ evaluation=True,
+)
+status = enjoy(cfg)
+mp4 = open("/content/train_dir/doom_deathmatch_bots_2222/replay.mp4", "rb").read()
+data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
+HTML(
+ """
+
+"""
+ % data_url
+)
+```
+
+
+You **can try to train your agent in this environment** using the code above, but not on Colab.
+**Good luck 🤞**
+
+If you prefer an easier scenario, **why not try training in another ViZDoom scenario such as `doom_deadly_corridor` or `doom_defend_the_center`?**
+
+
+
+---
+
+
+This concludes the last unit. But we are not finished yet! 🤗 The following **bonus sections include some of the most interesting, advanced, and cutting-edge work in Deep Reinforcement Learning**.
+
+## Keep learning, stay awesome 🤗