diff --git a/unit2/README.md b/unit2/README.md
index e0bdb3f..5cd64dd 100644
--- a/unit2/README.md
+++ b/unit2/README.md
@@ -4,8 +4,8 @@ In this Unit, we're going to dive deeper into one of the Reinforcement Learning
 We'll also implement our **first RL agent from scratch**: a Q-Learning agent and will train it in two environments:
 
-- Frozen-Lake-v1 ⛄ (non-slippery version): where our agent will need to go from the starting state (S) to the goal state (G) by walking only on frozen tiles (F) and avoiding holes (H).
-- An autonomous taxi 🚕 will need to learn to navigate a city to transport its passengers from point A to point B.
+- [Frozen-Lake-v1 ⛄ (non-slippery version)](https://www.gymlibrary.ml/environments/toy_text/frozen_lake/): where our agent will need to go from the starting state (S) to the goal state (G) by walking only on frozen tiles (F) and avoiding holes (H).
+- [An autonomous taxi 🚕](https://www.gymlibrary.ml/environments/toy_text/taxi/?highlight=taxi) will need to learn to navigate a city to transport its passengers from point A to point B.
 
 unit 2 environments
 
@@ -21,19 +21,19 @@ This course is **self-paced**, you can start whenever you want.
 ## Required time ⏱️
 
 The required time for this unit is, approximately:
-- 2-3 hours for the theory
-- 1 hour for the hands-on.
+- **2-3 hours** for the theory
+- **1 hour** for the hands-on.
 
 ## Start this Unit 🚀
 
 Here are the steps for this Unit:
 
-1️⃣ If it's not already done, sign up to our Discord Server. This is the place where you **can exchange with the community and with us, create study groups to grow each other and more**
+1️⃣ 📝 If it's not already done, sign up to our Discord Server. This is the place where you **can exchange with the community and with us, create study groups to grow each other and more**
 👉🏻 [https://discord.gg/aYka4Yhff9](https://discord.gg/aYka4Yhff9).
 
 Are you new to Discord? Check our **discord 101 to get the best practices** 👉 https://github.com/huggingface/deep-rl-class/blob/main/DISCORD.Md
 
-2️⃣ **Introduce yourself on Discord in #introduce-yourself Discord channel 🤗 and check on the left the Reinforcement Learning section.**
+2️⃣ 👋 **Introduce yourself on Discord in #introduce-yourself Discord channel 🤗 and check on the left the Reinforcement Learning section.**
 
 - In #rl-announcements we give the last information about the course.
 - #discussions is a place to exchange.
@@ -56,9 +56,9 @@ Are you new to Discord? Check our **discord 101 to get the best practices** 👉
 Thanks to a leaderboard, **you'll be able to compare your results with other classmates** and exchange the best practices to improve your agent's scores
 
 Who will win the challenge for Unit 2 🏆?
 
-The hands-on 👉 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit2/unit2.ipynb)
+👩‍💻 The hands-on 👉 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit2/unit2.ipynb)
 
-The leaderboard 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard
+🏆 The leaderboard 👉 https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard
 
 You can work directly **with the colab notebook, which allows you not to have to install everything on your machine (and it's free)**.
 
@@ -94,4 +94,4 @@ Don't forget to **introduce yourself when you sign up 🤗**
 
 ❓ If you have other questions, [please check our FAQ](https://github.com/huggingface/deep-rl-class#faq)
 
-Keep learning, stay awesome,
+## Keep learning, stay awesome 🤗,
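The README this diff touches says the unit's hands-on builds a tabular Q-Learning agent for FrozenLake-v1 and Taxi. As a rough orientation, here is a minimal, self-contained sketch of the same idea on a toy one-dimensional "frozen corridor"; the environment, hyperparameters, and variable names below are illustrative assumptions, not the notebook's actual code:

```python
import random

# Toy stand-in for FrozenLake: states 0..4 along a corridor,
# start at state 0, goal at state 4; actions: 0 = left, 1 = right.
# Reward 1.0 for reaching the goal (episode ends), 0.0 otherwise.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    done = next_state == GOAL
    return next_state, reward, done

# Tabular Q-Learning with epsilon-greedy exploration (illustrative hyperparameters).
alpha, gamma, epsilon = 0.1, 0.99, 0.1
q_table = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

random.seed(0)
for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if q_table[state][0] > q_table[state][1] else 1
        next_state, reward, done = step(state, action)
        # Q-Learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        td_target = reward + gamma * max(q_table[next_state])
        q_table[state][action] += alpha * (td_target - q_table[state][action])
        state = next_state

# Greedy policy for the non-terminal states after training.
policy = [0 if q_table[s][0] > q_table[s][1] else 1 for s in range(GOAL)]
print(policy)  # -> [1, 1, 1, 1], i.e. always move right toward the goal
```

The same update rule drives the notebook's agents; only the environments (FrozenLake's grid, Taxi's pickup/drop-off state space) are richer.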