# Unit 7: Advantage Actor Critic (A2C) using Robotics Simulations with PyBullet 🤖

One of the major industries that use Reinforcement Learning is robotics. Unfortunately, **having access to robot equipment is very expensive**. Fortunately, some simulations exist to train robots:

1. PyBullet
2. MuJoCo
3. Unity Simulations

We're going to learn about Advantage Actor Critic (A2C) and how to use PyBullet, and train two agents to walk:

- A bipedal walker 🦿
- A spider 🕸️

🏆 You'll then be able to **compare your agent's results with other classmates thanks to a leaderboard** 🔥 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard

![cover](https://github.com/huggingface/deep-rl-class/blob/main/unit7/assets/img/pybullet-envs.gif?raw=true)

Let's get started 🥳

## Required time ⏱️

The required time for this unit is approximately:

- 1 hour for the theory.
- 1 hour for the hands-on.

## Start this Unit 🚀

Here are the steps for this Unit:

1️⃣ 📖 [Read the Advantage Actor Critic chapter](https://huggingface.co/blog/deep-rl-a2c).

2️⃣ 👩‍💻 Then dive into the hands-on, where you'll train two robots to walk.

The hands-on 👉 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit7/unit7.ipynb)

Thanks to a leaderboard, you'll be able to compare your results with other classmates and exchange best practices to improve your agent's scores. Who will win the challenge for Unit 7 🏆?
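Before the hands-on, it can help to see the quantity that gives A2C its name. Instead of weighting the policy gradient by the raw return, A2C uses the advantage A(s, a) = Q(s, a) - V(s), commonly estimated with the one-step TD error r + γV(s') - V(s). The sketch below is an illustration of that estimate in plain Python; the function name and all numbers are made up for the example, not part of the course notebook.

```python
def advantage(reward, value_s, value_next_s, gamma=0.99, done=False):
    """One-step advantage estimate (TD error): r + gamma * V(s') - V(s).

    If the episode ended at this step, there is no next state to
    bootstrap from, so the gamma * V(s') term is dropped.
    """
    bootstrap = 0.0 if done else gamma * value_next_s
    return reward + bootstrap - value_s

# Illustrative numbers: the critic values the current state at 1.0 and
# the next state at 1.5; the transition yields a reward of 0.5.
adv = advantage(reward=0.5, value_s=1.0, value_next_s=1.5)
print(round(adv, 3))  # 0.985 -> the action did better than the critic expected
```

A positive advantage means the action turned out better than the critic's baseline prediction, so its probability is pushed up; a negative one pushes it down. The theory chapter linked above covers why this baseline reduces variance compared to vanilla policy gradients.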
The leaderboard 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard

## Additional readings 📚

- [Making Sense of the Bias / Variance Trade-off in (Deep) Reinforcement Learning](https://blog.mlreview.com/making-sense-of-the-bias-variance-trade-off-in-deep-reinforcement-learning-79cf1e83d565)
- [Bias-variance Tradeoff in Reinforcement Learning](https://www.endtoend.ai/blog/bias-variance-tradeoff-in-reinforcement-learning/)
- Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel

## How to make the most of this course

To make the most of the course, my advice is to:

- **Participate in Discord** and join a study group.
- **Read the theory part multiple times** and take some notes.
- Don't just do the colab. When you learn something, try to change the environment and the parameters, and read the libraries' documentation. Have fun 🥳
- Struggling is **a good thing in learning**. It means you're starting to build new skills. Deep RL is a complex topic and it takes time to understand. Try different approaches, use our additional readings, and exchange with classmates on Discord.

## This is a course built with you 👷🏿‍♀️

We want to improve and update the course iteratively with your feedback. **If you have some, please fill in this form** 👉 https://forms.gle/3HgA7bEHwAmmLfwh9

## Don't forget to join the Community 📢

We have a Discord server where you **can exchange with the community and with us, create study groups to grow together, and more** 👉🏻 [https://discord.gg/aYka4Yhff9](https://discord.gg/aYka4Yhff9). Don't forget to **introduce yourself when you sign up 🤗**

❓ If you have other questions, [please check our FAQ](https://github.com/huggingface/deep-rl-class#faq)

### Keep learning, stay awesome 🤗