diff --git a/unit7/unit7.ipynb b/unit7/unit7.ipynb
new file mode 100644
index 0000000..7301397
--- /dev/null
+++ b/unit7/unit7.ipynb
@@ -0,0 +1,484 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "name": "unit7.ipynb",
+ "provenance": [],
+ "collapsed_sections": [],
+ "private_outputs": true,
+ "authorship_tag": "ABX9TyNPB+iXGKgIWKts27HKZacW",
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ },
+ "accelerator": "GPU",
+ "gpuClass": "standard"
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Unit 7: Robotics Simulations with PyBullet 🤖\n",
+ "In this small notebook you'll learn to use PyBullet today. And train two agents to walk:\n",
+ "- A bipedal walker 🦿\n",
+ "- A spider (they say Ant but come on... it's a spider 😆) 🕸️\n",
+ "\n",
+ "❓ If you have questions, please post them on #study-group discord channel 👉 https://discord.gg/aYka4Yhff9\n",
+ "\n",
+ "🎮 Environments: \n",
+ "- `Walker2DBulletEnv-v0` 🦿\n",
+ "- `AntBulletEnv-v0` 🕸️\n",
+ "\n",
+ "⬇️ Here is an example of what **you will achieve in just a few minutes.** ⬇️"
+ ],
+ "metadata": {
+ "id": "-PTReiOw-RAN"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%%html\n",
+ ""
+ ],
+ "metadata": {
+ "id": "QHD2bIF6MrQo"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "%%html\n",
+ ""
+ ],
+ "metadata": {
+ "id": "SvCMOt-vNJ91"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "💡 We advise you to use Google Colab since some environments work only with Ubuntu. The free version of Google Colab is perfect for this tutorial. Let's get started 🚀"
+ ],
+ "metadata": {
+ "id": "XhKgm80b_GNc"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Install dependencies 🔽\n",
+ "The first step is to install the dependencies, we’ll install multiple ones:\n",
+ "\n",
+ "- `pybullet`: Contains the `Walker2DBullet` and `AntBullet` environment 🚶\n",
+ "- `stable-baselines3[extra]`: The deep reinforcement learning library.\n",
+ "- `huggingface_sb3`: Additional code for Stable-baselines3 to load and upload models from the Hugging Face 🤗 Hub.\n",
+ "- `huggingface_hub`: Library allowing anyone to work with the Hub repositories."
+ ],
+ "metadata": {
+ "id": "e1obkbdJ_KnG"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "2yZRi_0bQGPM"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install pybullet\n",
+ "!pip install stable-baselines3[extra]\n",
+ "!pip install huggingface_sb3\n",
+ "!pip install huggingface_hub"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Step 2: Import the packages 📦"
+ ],
+ "metadata": {
+ "id": "QTep3PQQABLr"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import gym\n",
+ "import pybullet_envs\n",
+ "\n",
+ "import os\n",
+ "\n",
+ "from huggingface_sb3 import load_from_hub, package_to_hub\n",
+ "\n",
+ "from stable_baselines3 import PPO\n",
+ "from stable_baselines3.common.evaluation import evaluate_policy\n",
+ "from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize\n",
+ "from stable_baselines3.common.env_util import make_vec_env\n",
+ "\n",
+ "from huggingface_hub import notebook_login\n",
+ "\n",
+ "import torch \n",
+ "from torch import nn"
+ ],
+ "metadata": {
+ "id": "HpiB8VdnQ7Bk"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Step 3: Create the Walker2DBullet 🚶\n",
+ "#### The environment 🎮\n",
+ "In this environment, the agent needs to use correctly its different joints to walk correctly."
+ ],
+ "metadata": {
+ "id": "frVXOrnlBerQ"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "env_id = \"Walker2DBulletEnv-v0\"\n",
+ "# Create the env\n",
+ "env = gym.make(env_id)\n",
+ "\n",
+ "# Get the state space and action space\n",
+ "s_size = env.observation_space.shape[0]\n",
+ "a_size = env.action_space"
+ ],
+ "metadata": {
+ "id": "JpU-JCDQYYax"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "print(\"_____OBSERVATION SPACE_____ \\n\")\n",
+ "print(\"The State Space is: \", s_size)\n",
+ "print(\"Sample observation\", env.observation_space.sample()) # Get a random observation"
+ ],
+ "metadata": {
+ "id": "2ZfvcCqEYgrg"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "print(\"\\n _____ACTION SPACE_____ \\n\")\n",
+ "print(\"The Action Space is: \", a_size)\n",
+ "print(\"Action Space Sample\", env.action_space.sample()) # Take a random action"
+ ],
+ "metadata": {
+ "id": "Tc89eLTYYkK2"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "We need to [normalize input features](https://stable-baselines3.readthedocs.io/en/master/guide/rl_tips.html) For that, a wrapper exists and will compute a running average and standard deviation of input features.\n",
+ "\n",
+ "We also normalize rewards with this same wrapper by adding `norm_reward = True`"
+ ],
+ "metadata": {
+ "id": "1ZyX6qf3Zva9"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "env = make_vec_env(\"Walker2DBulletEnv-v0\", n_envs=16)\n",
+ "\n",
+ "# Adding this wrapper to normalize the observation and the reward\n",
+ "env = VecNormalize(env, norm_obs=True, norm_reward=True, clip_obs=10.)"
+ ],
+ "metadata": {
+ "id": "1RsDtHHAQ9Ie"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Step 4: Create the PPO Model 🤖\n",
+ "\n",
+ "PPO is one of the SOTA (state of the art) Deep Reinforcement Learning algorithms. If you don't know how it works, you can check this blogpost and the paper\n",
+ "\n",
+ "In this case, because we have a vector as input, we'll use an MLP (multi-layer perceptron) as policy.\n",
+ "\n",
+ "To find the best parameters I checked the [official trained agents by Stable-Baselines3 team](https://huggingface.co/sb3)."
+ ],
+ "metadata": {
+ "id": "4JmEVU6z1ZA-"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "model = PPO(policy = \"MlpPolicy\",\n",
+ " env = env,\n",
+ " batch_size = 128,\n",
+ " clip_range = 0.4,\n",
+ " ent_coef = 0.0,\n",
+ " gae_lambda = 0.92,\n",
+ " gamma = 0.99,\n",
+ " learning_rate = 3.0e-05,\n",
+ " max_grad_norm = 0.5,\n",
+ " n_epochs = 20,\n",
+ " n_steps = 512,\n",
+ " policy_kwargs = dict(log_std_init=-2, ortho_init=False, activation_fn=nn.ReLU, net_arch=[dict(pi=[256,\n",
+ " 256], vf=[256, 256])] ),\n",
+ " use_sde = True,\n",
+ " sde_sample_freq = 4,\n",
+ " vf_coef = 0.5,\n",
+ " tensorboard_log = \"./tensorboard\",\n",
+ " verbose=1)"
+ ],
+ "metadata": {
+ "id": "vR3T4qFt164I"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Step 5: Train the PPO agent 🏃\n",
+ "- Let's train our agent for 2,000,000 timesteps, don't forget to use GPU on Colab. It will take approximately ~25min"
+ ],
+ "metadata": {
+ "id": "opyK3mpJ1-m9"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "model.learn(2_000_000)"
+ ],
+ "metadata": {
+ "id": "4TuGHZD7RF1G"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Save the model and VecNormalize statistics when saving the agent\n",
+ "model.save(\"ppo-Walker2DBulletEnv-v0\")\n",
+ "env.save(\"vec_normalize.pkl\")"
+ ],
+ "metadata": {
+ "id": "MfYtjj19cKFr"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Step 6: Evaluate the agent 📈\n",
+ "- Now that's our agent is trained, we need to **check its performance**.\n",
+ "- Stable-Baselines3 provides a method to do that `evaluate_policy`\n",
+ "- In this case, we see that's the mean reward is `2371.90 +/- 16.50`"
+ ],
+ "metadata": {
+ "id": "01M9GCd32Ig-"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize\n",
+ "\n",
+ "# Load the saved statistics\n",
+ "eval_env = DummyVecEnv([lambda: gym.make(\"Walker2DBulletEnv-v0\")])\n",
+ "eval_env = VecNormalize.load(\"vec_normalize.pkl\", eval_env)\n",
+ "\n",
+ "# do not update them at test time\n",
+ "eval_env.training = False\n",
+ "# reward normalization is not needed at test time\n",
+ "eval_env.norm_reward = False\n",
+ "\n",
+ "# Load the agent\n",
+ "model = PPO.load(\"ppo-Walker2DBulletEnv-v0\")\n",
+ "\n",
+ "mean_reward, std_reward = evaluate_policy(model, env)\n",
+ "\n",
+ "print(f\"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}\")"
+ ],
+ "metadata": {
+ "id": "liirTVoDkHq3"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Step 7: Publish our trained model on the Hub 🔥\n",
+ "Now that we saw we got good results after the training, we can publish our trained model on the hub 🤗 with one line of code.\n",
+ "\n",
+ "Here's an example of a Model Card:"
+ ],
+ "metadata": {
+ "id": "44L9LVQaavR8"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ ""
+ ],
+ "metadata": {
+ "id": "Ul-eUa-xazBm"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Under the hood, the Hub uses git-based repositories (don't worry if you don't know what git is), which means you can update the model with new versions as you experiment and improve your agent."
+ ],
+ "metadata": {
+ "id": "oJ3YqEgwbd4Y"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "By using `package_to_hub` **you evaluate, record a replay, generate a model card of your agent and push it to the hub**.\n",
+ "\n",
+ "This way:\n",
+ "- You can **showcase our work** 🔥\n",
+ "- You can **visualize your agent playing** 👀\n",
+ "- You can **share with the community an agent that others can use** 💾\n",
+ "- You can **access a leaderboard 🏆 to see how well your agent is performing compared to your classmates** 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard"
+ ],
+ "metadata": {
+ "id": "MkMk99m8bgaQ"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "To be able to share your model with the community there are three more steps to follow:\n",
+ "\n",
+ "1️⃣ (If it's not already done) create an account to HF ➡ https://huggingface.co/join\n",
+ "\n",
+ "2️⃣ Sign in and then, you need to store your authentication token from the Hugging Face website.\n",
+ "- Create a new token (https://huggingface.co/settings/tokens) **with write role**"
+ ],
+ "metadata": {
+ "id": "osyjFCM3bhQv"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from huggingface_hub import notebook_login\n",
+ "notebook_login()"
+ ],
+ "metadata": {
+ "id": "zHIVtwpnbmU6"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`\n",
+ "\n",
+ "3️⃣ We're now ready to push our trained agent to the 🤗 Hub 🔥 using `package_to_hub()` function"
+ ],
+ "metadata": {
+ "id": "BTdZMDfjbkrC"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "package_to_hub(\n",
+ " model=model,\n",
+ " model_name=f\"ppo-{env_id}\",\n",
+ " model_architecture=\"PPO\",\n",
+ " env_id=env_id,\n",
+ " eval_env=eval_env,\n",
+ " repo_id=f\"ThomasSimonini/ppo-{env_id}\",\n",
+ " commit_message=\"Initial commit\",\n",
+ ")"
+ ],
+ "metadata": {
+ "id": "ueuzWVCUTkfS"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
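+ {
+ "cell_type": "markdown",
+ "source": [
+ "Once your agent is on the Hub, anyone (including future you) can reload it. Below is a minimal sketch using `load_from_hub` from `huggingface_sb3`; the `filename` is an assumption based on the `model_name` we passed to `package_to_hub`, so adjust `repo_id` and `filename` to match your own repo."
+ ],
+ "metadata": {}
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Minimal sketch: download the checkpoint from the Hub and reload it with PPO.\n",
+ "# The filename below is an assumption based on the model_name passed to\n",
+ "# package_to_hub; adjust repo_id/filename to match your own repo.\n",
+ "checkpoint = load_from_hub(\n",
+ " repo_id=f\"ThomasSimonini/ppo-{env_id}\",\n",
+ " filename=f\"ppo-{env_id}.zip\",\n",
+ ")\n",
+ "loaded_model = PPO.load(checkpoint)"
+ ],
+ "metadata": {},
+ "execution_count": null,
+ "outputs": []
+ },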
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Some additional challenges 🏆\n",
+ "The best way to learn **is to try things by your own**! Why not trying `AntBulletEnv-v0` or `HalfCheetahBulletEnv-v0`?\n",
+ "\n",
+ "In the [Leaderboard](https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard) you will find your agents. Can you get to the top?\n",
+ "\n",
+ "Here are some ideas to achieve so:\n",
+ "* Train more steps\n",
+ "* Try different hyperparameters by looking at what your classmates have done 👉 https://huggingface.co/models?other=Walker2DBulletEnv-v0\n",
+ "* **Push your new trained model** on the Hub 🔥\n"
+ ],
+ "metadata": {
+ "id": "G3xy3Nf3c2O1"
+ }
+ },
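+ {
+ "cell_type": "markdown",
+ "source": [
+ "For instance, here is a minimal sketch of how you could start on `AntBulletEnv-v0`. It reuses the exact pipeline from above (vectorized env + `VecNormalize` + PPO), but with mostly default PPO hyperparameters, which you will likely need to tune to reach a good score."
+ ],
+ "metadata": {}
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Minimal sketch for AntBulletEnv-v0, reusing the pipeline above.\n",
+ "# Mostly default PPO hyperparameters: a starting point, not tuned values.\n",
+ "ant_env_id = \"AntBulletEnv-v0\"\n",
+ "\n",
+ "# 16 parallel envs + normalized observations/rewards, as for Walker2D\n",
+ "ant_env = make_vec_env(ant_env_id, n_envs=16)\n",
+ "ant_env = VecNormalize(ant_env, norm_obs=True, norm_reward=True, clip_obs=10.)\n",
+ "\n",
+ "ant_model = PPO(\"MlpPolicy\", ant_env, verbose=1)\n",
+ "ant_model.learn(2_000_000)\n",
+ "\n",
+ "# Save both the model and the normalization statistics\n",
+ "ant_model.save(f\"ppo-{ant_env_id}\")\n",
+ "ant_env.save(\"vec_normalize_ant.pkl\")"
+ ],
+ "metadata": {},
+ "execution_count": null,
+ "outputs": []
+ },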
+ {
+ "cell_type": "markdown",
+ "source": [
+ "See you on Unit 8! 🔥\n",
+ "## Keep learning, stay awesome 🤗"
+ ],
+ "metadata": {
+ "id": "usatLaZ8dM4P"
+ }
+ }
+ ]
+}
\ No newline at end of file