diff --git a/README.md b/README.md
index 86b923d..85d2cf5 100644
--- a/README.md
+++ b/README.md
@@ -28,11 +28,12 @@ This course is **self-paced** you can start when you want πŸ₯³.
 | [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit3#unit-3-deep-q-learning-with-atari-games-) | [Deep Q-Learning](https://github.com/huggingface/deep-rl-class/tree/main/unit3#unit-3-deep-q-learning-with-atari-games-) | Train a Deep Q-Learning agent to play Space Invaders using [RL-Baselines3-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo) |
 | [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/blob/main/unit3/bonus.md)| [Bonus: Automatic Hyperparameter Tuning using Optuna](https://github.com/huggingface/deep-rl-class/blob/main/unit3/bonus.md)| | | |
 | [Published πŸ₯³](https://medium.com/@thomassimonini/an-introduction-to-unity-ml-agents-with-hugging-face-efbac62c8c80) | [🎁 Learn to train your first Unity MLAgent](https://medium.com/@thomassimonini/an-introduction-to-unity-ml-agents-with-hugging-face-efbac62c8c80) | |
-| June the 30th | Policy-based methods | πŸ—οΈ |
-| July the 7th | Actor-Critic Methods | πŸ—οΈ |
-| July the 14th | Proximal Policy Optimization (PPO) | πŸ—οΈ |
-| July the 21th | Decision Transformers and offline Reinforcement Learning | πŸ—οΈ |
-| July the 28th | Towards better explorations methods | πŸ—οΈ |
+| [Published πŸ₯³](https://github.com/huggingface/deep-rl-class/tree/main/unit5#unit-5-policy-gradient-with-pytorch) | [Policy Gradient with PyTorch](https://huggingface.co/blog/deep-rl-pg) | [Code a Reinforce agent from scratch using PyTorch and train it to play Pong 🎾, CartPole and Pixelcopter 🚁](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb) |
+| July the 7th | 🎁 A new library integration | πŸ—οΈ |
+| July the 14th | Actor-Critic Methods | πŸ—οΈ |
+| July the 21st | Proximal Policy Optimization (PPO) | πŸ—οΈ |
+| July the 28th | Decision Transformers and offline Reinforcement Learning | πŸ—οΈ |
+| August the 5th | Towards better exploration methods | πŸ—οΈ |
 
 ## The library you'll learn during this course
 
diff --git a/unit5/README.md b/unit5/README.md
new file mode 100644
index 0000000..419b0df
--- /dev/null
+++ b/unit5/README.md
@@ -0,0 +1,69 @@
+# Unit 5: Policy Gradient with PyTorch
+
+In this Unit, **we'll study Policy Gradient Methods**.
+
+And we'll **implement Reinforce (a policy gradient method) from scratch using PyTorch**, before testing its robustness on CartPole-v1, PixelCopter, and Pong.
+
+![unit 5 environments](./assets/img/envs.gif)
+
+You'll then be able to **compare your agent’s results with your classmates’ thanks to a leaderboard** πŸ”₯ πŸ‘‰ https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard
+
+This course is **self-paced**: you can start whenever you want.
+
+## Required time ⏱️
+The required time for this unit is approximately:
+- 1 hour for the theory
+- 1-2 hours for the hands-on.
+
+## Start this Unit πŸš€
+Here are the steps for this Unit:
+
+1️⃣ πŸ“– **Read the [Policy Gradient with PyTorch Chapter](https://huggingface.co/blog/deep-rl-pg)**.
+
+2️⃣ πŸ‘©β€πŸ’» Then dive into the hands-on, where you'll **code your first Deep Reinforcement Learning algorithm from scratch: Reinforce**.
+
+Reinforce is a *Policy-Based Method*: a Deep Reinforcement Learning algorithm that tries **to optimize the policy directly without using an action-value function**.
+
+More precisely, Reinforce is a *Policy-Gradient Method*, a subclass of *Policy-Based Methods* that aims **to optimize the policy directly by estimating the weights of the optimal policy using Gradient Ascent**.
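+
+To give you a first taste of what you'll implement, here is a minimal, illustrative sketch of the Reinforce idea in PyTorch. The class, sizes, and hyperparameters below are simplified examples for a CartPole-like environment, not the exact code from the notebook:
+
+```python
+import torch
+import torch.nn as nn
+from torch.distributions import Categorical
+
+class Policy(nn.Module):
+    """The policy itself is what we learn: it maps a state to action probabilities."""
+    def __init__(self, state_size, action_size, hidden_size=16):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(state_size, hidden_size),
+            nn.ReLU(),
+            nn.Linear(hidden_size, action_size),
+            nn.Softmax(dim=-1),
+        )
+
+    def act(self, state):
+        probs = self.net(torch.as_tensor(state, dtype=torch.float32))
+        dist = Categorical(probs)
+        action = dist.sample()                       # sample an action from Ο€(a|s)
+        return action.item(), dist.log_prob(action)  # keep log Ο€(a|s) for the update
+
+policy = Policy(state_size=4, action_size=2)   # e.g. CartPole-v1 observation/action sizes
+optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
+
+# Reinforce update, sketched on a single step: increase log Ο€(a|s) in proportion to
+# the return G that followed the action, i.e. gradient ascent on the expected return.
+action, log_prob = policy.act([0.02, -0.01, 0.03, 0.01])  # a dummy CartPole-like state
+G = 1.0                                                   # return obtained after taking `action`
+loss = -G * log_prob    # minimizing -G * log Ο€(a|s) is gradient ascent on G * log Ο€(a|s)
+optimizer.zero_grad()
+loss.backward()
+optimizer.step()
+```
+
+In the full algorithm, you play an entire episode with the current policy, compute the discounted return at each timestep, and apply this update over all the steps at once.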
+
+To test its robustness, we're going to train it in 3 different simple environments:
+- CartPole-v1
+- PongEnv
+- PixelcopterEnv
+
+Thanks to a leaderboard, **you'll be able to compare your results with your classmates'** and exchange best practices to improve your agent's scores. Who will win the challenge for Unit 5 πŸ†?
+
+The hands-on πŸ‘‰ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb)
+
+The leaderboard πŸ‘‰ https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard
+
+You can work directly **in the Colab notebook, so you don’t have to install anything on your machine (and it’s free)**.
+
+## Additional readings πŸ“š
+- [Foundations of Deep RL Series, L3 Policy Gradients and Advantage Estimation by Pieter Abbeel](https://youtu.be/AKbX1Zvo7r8)
+- [Policy Gradient Algorithms](https://lilianweng.github.io/posts/2018-04-08-policy-gradient/)
+
+## How to make the most of this course
+
+To make the most of the course, my advice is to:
+
+- **Participate in Discord** and join a study group.
+- **Read the theory part multiple times** and take some notes.
+- Don’t just run the Colab. When you learn something, try to change the environment and the parameters, and read the libraries' documentation. Have fun πŸ₯³
+- Struggling is **a good thing in learning**. It means you’re starting to build new skills. Deep RL is a complex topic and it takes time to understand. Try different approaches, use our additional readings, and exchange with classmates on Discord.
+
+## This is a course built with you πŸ‘·πŸΏβ€β™€οΈ
+
+We want to improve and update the course iteratively with your feedback. **If you have some, please fill out this form** πŸ‘‰ https://forms.gle/3HgA7bEHwAmmLfwh9
+
+## Don’t forget to join the Community πŸ“’
+
+We have a Discord server where you **can exchange with the community and with us, create study groups to grow with each other, and more**.
+
+πŸ‘‰πŸ» [https://discord.gg/aYka4Yhff9](https://discord.gg/aYka4Yhff9)
+
+Don’t forget to **introduce yourself when you sign up πŸ€—**
+
+❓ If you have other questions, [please check our FAQ](https://github.com/huggingface/deep-rl-class#faq)
+
+### Keep learning, stay awesome,
diff --git a/unit5/assets/img/envs.gif b/unit5/assets/img/envs.gif
new file mode 100644
index 0000000..96f09eb
Binary files /dev/null and b/unit5/assets/img/envs.gif differ