From 6b59ea7e83e5ac2cd669a9a7525045a07f7caa9a Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Wed, 20 Jul 2022 17:30:31 +0200
Subject: [PATCH 1/5] Add unit 7 A2C

---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 74b6fa6..4c7878b 100644
--- a/README.md
+++ b/README.md
@@ -30,8 +30,7 @@ This course is **self-paced** you can start when you want πŸ₯³.
 | [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit4#unit-4-an-introduction-to-unity-mlagents-with-hugging-face-) | [🎁 Learn to train your first Unity MLAgent](https://github.com/huggingface/deep-rl-class/tree/main/unit4#unit-4-an-introduction-to-unity-mlagents-with-hugging-face-) | [Train a curious agent to destroy Pyramids πŸ’₯](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit4/unit4.ipynb) |
 | [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit5#unit-5-policy-gradient-with-pytorch) | [Policy Gradient with PyTorch](https://huggingface.co/blog/deep-rl-pg) | [Code a Reinforce agent from scratch using PyTorch and train it to play Pong 🎾, CartPole and Pixelcopter 🚁](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb) |
 | [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit6#towards-better-explorations-methods-with-curiosity) | [Towards better explorations methods with Curiosity](https://github.com/huggingface/deep-rl-class/tree/main/unit6#towards-better-explorations-methods-with-curiosity)| |
-| [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit7#unit-7-robotics-simulations-with-pybullet-) | [Bonus: Robotics Simulations with PyBullet πŸ€–](https://github.com/huggingface/deep-rl-class/tree/main/unit7#unit-7-robotics-simulations-with-pybullet-)| [Train a bipedal walker and a spider to learn to walk](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit7/unit7.ipynb) |
-| July the 22th | Actor-Critic Methods | πŸ—οΈ |
+| [Published πŸ₯³]() | Advantage Actor Critic (A2C) | [Train a bipedal walker and a spider to learn to walk using A2C]() |
 | July the 29th | Proximal Policy Optimization (PPO) | πŸ—οΈ |
 | August | Decision Transformers and offline Reinforcement Learning | πŸ—οΈ |

From 7c7e1f7de775d81e206161fd02d1c6d780952ddd Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Thu, 21 Jul 2022 07:29:45 +0200
Subject: [PATCH 2/5] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 4c7878b..4bd185d 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,7 @@ This course is **self-paced** you can start when you want πŸ₯³.
 | [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit5#unit-5-policy-gradient-with-pytorch) | [Policy Gradient with PyTorch](https://huggingface.co/blog/deep-rl-pg) | [Code a Reinforce agent from scratch using PyTorch and train it to play Pong 🎾, CartPole and Pixelcopter 🚁](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb) |
 | [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit6#towards-better-explorations-methods-with-curiosity) | [Towards better explorations methods with Curiosity](https://github.com/huggingface/deep-rl-class/tree/main/unit6#towards-better-explorations-methods-with-curiosity)| |
 | [Published πŸ₯³]() | Advantage Actor Critic (A2C) | [Train a bipedal walker and a spider to learn to walk using A2C]() |
-| July the 29th | Proximal Policy Optimization (PPO) | πŸ—οΈ |
+| August the 5th | Proximal Policy Optimization (PPO) | πŸ—οΈ |
 | August | Decision Transformers and offline Reinforcement Learning | πŸ—οΈ |

From f4e507ea28f3a8b3eb88c5d2290f465c63355e17 Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Thu, 21 Jul 2022 15:53:25 +0200
Subject: [PATCH 3/5] Update README.md

---
 unit7/README.md | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/unit7/README.md b/unit7/README.md
index 3f8ba26..1cd76da 100644
--- a/unit7/README.md
+++ b/unit7/README.md
@@ -1,11 +1,11 @@
-# Unit 7: Robotics Simulations with PyBullet πŸ€–
+# Unit 7: Advantage Actor Critic (A2C) using Robotics Simulations with PyBullet πŸ€–
 One of the major industries that use Reinforcement Learning is robotics. Unfortunately, **having access to robot equipment is very expensive**. Fortunately, some simulations exist to train Robots:
 1. PyBullet
 2. MuJoco
 3. Unity Simulations

-We're going to use PyBullet today. And train two agents to walk:
+We're going to learn about Advantage Actor Critic (A2C) and how to use PyBullet. And train two agents to walk:
 - A bipedal walker 🦿
 - A spider πŸ•ΈοΈ

 ![cover](https://github.com/huggingface/deep-rl-class/blob/main/unit7/assets/img/pybullet-envs.gif?raw=true)

-We'll learn to use PyBullet environments and why we normalize input features.
-
 Let's get started πŸ₯³

 ## Required time ⏱️

 The required time for this unit is, approximately:
+- 1 hour for the theory.
 - 1 hour for the hands-on.

 ## Start this Unit πŸš€

 Here are the steps for this Unit:

+1️⃣ πŸ“– [Read Advantage Actor Critic Chapter](https://huggingface.co/blog/deep-rl-a2c).
+
+2️⃣ πŸ‘©β€πŸ’» Then dive into the hands-on where you'll train two robots to walk.
+
 The hands-on πŸ‘‰ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit7/unit7.ipynb)

+Thanks to a leaderboard, you'll be able to compare your results with other classmates and exchange best practices to improve your agent's scores. Who will win the challenge for Unit 7 πŸ†?
+
 The leaderboard πŸ‘‰ https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard

+## Additional readings πŸ“š
+- [Making Sense of the Bias / Variance Trade-off in (Deep) Reinforcement Learning](https://blog.mlreview.com/making-sense-of-the-bias-variance-trade-off-in-deep-reinforcement-learning-79cf1e83d565)
+- [Bias-variance Tradeoff in Reinforcement Learning](https://www.endtoend.ai/blog/bias-variance-tradeoff-in-reinforcement-learning/)
+- [Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel](Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel)
+
 ## How to make the most of this course

 To make the most of the course, my advice is to:

From 24de9ccf8d0dec657a97242b3ef9fbb14dc395cd Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Thu, 21 Jul 2022 15:53:55 +0200
Subject: [PATCH 4/5] Update README.md

---
 unit7/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/unit7/README.md b/unit7/README.md
index 1cd76da..5b0f01e 100644
--- a/unit7/README.md
+++ b/unit7/README.md
@@ -36,7 +36,7 @@ The leaderboard πŸ‘‰ https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-L
 ## Additional readings πŸ“š
 - [Making Sense of the Bias / Variance Trade-off in (Deep) Reinforcement Learning](https://blog.mlreview.com/making-sense-of-the-bias-variance-trade-off-in-deep-reinforcement-learning-79cf1e83d565)
 - [Bias-variance Tradeoff in Reinforcement Learning](https://www.endtoend.ai/blog/bias-variance-tradeoff-in-reinforcement-learning/)
-- [Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel](Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel)
+- [Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel](https://youtu.be/AKbX1Zvo7r8)

 ## How to make the most of this course

From 5859c6d2f810005ee9424191dd581805e02f7380 Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Thu, 21 Jul 2022 16:26:57 +0200
Subject: [PATCH 5/5] Update README.md

---
 unit7/README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/unit7/README.md b/unit7/README.md
index 5b0f01e..4796800 100644
--- a/unit7/README.md
+++ b/unit7/README.md
@@ -5,9 +5,7 @@ One of the major industries that use Reinforcement Learning is robotics. Unfortu
 2. MuJoco
 3. Unity Simulations

-We're going to learn about Advantage Actor Critic (A2C) and how to use PyBullet. And train two agents to walk:
-- A bipedal walker 🦿
-- A spider πŸ•ΈοΈ
+We're going to learn about Advantage Actor Critic (A2C) and how to use PyBullet. And train a spider agent to walk.

 πŸ† You'll then be able to **compare your agent’s results with other classmates thanks to a leaderboard** πŸ”₯ πŸ‘‰ https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard
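For a concrete picture of the hands-on these patches describe (training an A2C agent in a PyBullet environment), here is a minimal sketch using Stable-Baselines3. It is an illustration, not the notebook's actual code: the `AntBulletEnv-v0` environment id, the timestep budget, and the file names are assumptions made for the example.

```python
# Minimal A2C training sketch (assumed setup: gym + pybullet_envs + stable-baselines3).
import gym
import pybullet_envs  # noqa: F401 -- importing registers the PyBullet envs (e.g. AntBulletEnv-v0)

from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Wrap the environment and normalize observations and rewards,
# which helps a lot on continuous-control robotics tasks.
env = DummyVecEnv([lambda: gym.make("AntBulletEnv-v0")])
env = VecNormalize(env, norm_obs=True, norm_reward=True)

# A2C = Advantage Actor Critic: the actor (policy) is updated with
# advantages estimated from a learned critic (value function).
model = A2C("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)  # illustrative budget, not the course's setting

# Save the policy together with the normalization statistics,
# since the agent expects normalized inputs at evaluation time.
model.save("a2c-AntBulletEnv-v0")
env.save("vec_normalize.pkl")
```

Saving the `VecNormalize` statistics alongside the model matters because the policy was trained on normalized inputs; this is the point the earlier version of this README raised as "why we normalize input features".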