Merge pull request #67 from huggingface/unit7

Unit7: A2C
This commit is contained in:
Thomas Simonini
2022-07-22 10:44:22 +02:00
committed by GitHub
2 changed files with 16 additions and 9 deletions

View File

@@ -30,9 +30,8 @@ This course is **self-paced**: you can start when you want 🥳.
| [Published 🥳](https://github.com/huggingface/deep-rl-class/tree/main/unit4#unit-4-an-introduction-to-unity-mlagents-with-hugging-face-) | [🎁 Learn to train your first Unity MLAgent](https://github.com/huggingface/deep-rl-class/tree/main/unit4#unit-4-an-introduction-to-unity-mlagents-with-hugging-face-) | [Train a curious agent to destroy Pyramids 💥](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit4/unit4.ipynb) |
| [Published 🥳](https://github.com/huggingface/deep-rl-class/tree/main/unit5#unit-5-policy-gradient-with-pytorch) | [Policy Gradient with PyTorch](https://huggingface.co/blog/deep-rl-pg) | [Code a Reinforce agent from scratch using PyTorch and train it to play Pong 🎾, CartPole and Pixelcopter 🚁](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb) |
| [Published 🥳](https://github.com/huggingface/deep-rl-class/tree/main/unit6#towards-better-explorations-methods-with-curiosity) | [Towards better exploration methods with Curiosity](https://github.com/huggingface/deep-rl-class/tree/main/unit6#towards-better-explorations-methods-with-curiosity)| |
| [Published 🥳](https://github.com/huggingface/deep-rl-class/tree/main/unit7#unit-7-robotics-simulations-with-pybullet-) | [Bonus: Robotics Simulations with PyBullet 🤖](https://github.com/huggingface/deep-rl-class/tree/main/unit7#unit-7-robotics-simulations-with-pybullet-)| [Train a bipedal walker and a spider to learn to walk](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit7/unit7.ipynb) |
| July the 22nd | Actor-Critic Methods | 🏗️ |
| July the 29th | Proximal Policy Optimization (PPO) | 🏗️ |
| [Published 🥳]() | Advantage Actor Critic (A2C) | [Train a bipedal walker and a spider to learn to walk using A2C]() |
| August the 5th | Proximal Policy Optimization (PPO) | 🏗️ |
| August | Decision Transformers and offline Reinforcement Learning | 🏗️ |

View File

@@ -1,33 +1,41 @@
# Unit 7: Robotics Simulations with PyBullet 🤖
# Unit 7: Advantage Actor Critic (A2C) using Robotics Simulations with PyBullet 🤖
One of the major industries that use Reinforcement Learning is robotics. Unfortunately, **having access to robot equipment is very expensive**. Fortunately, some simulators exist to train robots:
1. PyBullet
2. MuJoCo
3. Unity Simulations
We're going to use PyBullet today and train two agents to walk:
- A bipedal walker 🦿
- A spider 🕸️
We're going to learn about Advantage Actor Critic (A2C) and how to use PyBullet, and train a spider agent to walk.
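At the heart of A2C is the advantage, the critic's estimate of how much better an action turned out than expected: A(s_t, a_t) ≈ r_t + γV(s_{t+1}) − V(s_t). Here is a minimal sketch of that one-step advantage computation in plain Python (the function and variable names are illustrative, not taken from the course notebook):

```python
def a2c_advantages(rewards, values, next_value, dones, gamma=0.99):
    """One-step TD-error advantages for a rollout, as used in A2C.

    rewards:    rewards r_t collected during the rollout
    values:     critic estimates V(s_t) for each rollout state
    next_value: critic estimate V(s_T) for the state after the rollout
    dones:      1.0 if the episode ended at step t, else 0.0
    """
    advantages = []
    for t in range(len(rewards)):
        # Bootstrap from the next state's value, unless the episode ended.
        v_next = next_value if t == len(rewards) - 1 else values[t + 1]
        td_target = rewards[t] + gamma * v_next * (1.0 - dones[t])
        # Advantage = TD target minus the critic's current estimate.
        advantages.append(td_target - values[t])
    return advantages
```

The actor is then updated to increase the log-probability of actions with positive advantage, while the critic regresses toward the TD targets.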
🏆 You'll then be able to **compare your agent's results with those of other classmates thanks to a leaderboard** 🔥 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard
![cover](https://github.com/huggingface/deep-rl-class/blob/main/unit7/assets/img/pybullet-envs.gif?raw=true)
We'll learn to use PyBullet environments and why we normalize input features.
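On the normalization point: PyBullet observations mix quantities on very different scales (joint angles, velocities, contact forces), which can destabilize gradient updates. A standalone sketch of a running-statistics normalizer, in the spirit of (but not copied from) Stable-Baselines3's `VecNormalize`:

```python
import math

class RunningNormalizer:
    """Track a running mean/variance per feature and standardize inputs.

    Illustrative sketch only; the real VecNormalize also clips and
    normalizes rewards.
    """

    def __init__(self, size, eps=1e-8):
        self.mean = [0.0] * size
        self.var = [1.0] * size
        self.count = eps  # tiny initial count avoids division by zero
        self.eps = eps

    def update(self, obs):
        # Welford-style online update of mean and population variance.
        self.count += 1
        for i, x in enumerate(obs):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.count
            self.var[i] += (delta * (x - self.mean[i]) - self.var[i]) / self.count

    def normalize(self, obs):
        # Standardize each feature to roughly zero mean, unit variance.
        return [(x - m) / math.sqrt(v + self.eps)
                for x, m, v in zip(obs, self.mean, self.var)]
```

With normalized inputs, no single sensor dominates the policy network's early layers, which tends to make A2C training markedly more stable on robotics tasks.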
Let's get started 🥳
## Required time ⏱️
The required time for this unit is, approximately:
- 1 hour for the theory.
- 1 hour for the hands-on.
## Start this Unit 🚀
Here are the steps for this Unit:
1⃣ 📖 [Read Advantage Actor Critic Chapter](https://huggingface.co/blog/deep-rl-a2c).
2⃣ 👩‍💻 Then dive into the hands-on, where you'll train robots to walk.
The hands-on 👉 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit7/unit7.ipynb)
Thanks to a leaderboard, you'll be able to compare your results with those of other classmates and exchange best practices to improve your agent's scores. Who will win the challenge for Unit 7? 🏆
The leaderboard 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard
## Additional readings 📚
- [Making Sense of the Bias / Variance Trade-off in (Deep) Reinforcement Learning](https://blog.mlreview.com/making-sense-of-the-bias-variance-trade-off-in-deep-reinforcement-learning-79cf1e83d565)
- [Bias-variance Tradeoff in Reinforcement Learning](https://www.endtoend.ai/blog/bias-variance-tradeoff-in-reinforcement-learning/)
- [Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel](https://youtu.be/AKbX1Zvo7r8)
## How to make the most of this course
To make the most of the course, my advice is to: