diff --git a/units/en/unit0/introduction.mdx b/units/en/unit0/introduction.mdx
index 08c399c..0d6bebf 100644
--- a/units/en/unit0/introduction.mdx
+++ b/units/en/unit0/introduction.mdx
@@ -14,7 +14,6 @@ In this unit you’ll:
 - Learn more **about us**.
 - **Create your Hugging Face account** (it’s free).
 - **Sign-up our Discord server**, the place where you can exchange with your classmates and us (the Hugging Face team).
-- ```
 
 Let’s get started!
@@ -23,8 +22,8 @@ Let’s get started!
 In this course, you will:
 
 - 📖 Study Deep Reinforcement Learning in **theory and practice.**
-- 🧑‍💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, Sample Factory and CleanRL.
-- 🤖 **Train agents in unique environments** such as SnowballFight, Huggy the Doggo 🐶, MineRL (Minecraft ⛏️), VizDoom (Doom) and classical ones such as Space Invaders and PyBullet.
+- 🧑‍💻 Learn to **use famous Deep RL libraries** such as [Stable Baselines3](https://stable-baselines3.readthedocs.io/en/master/), [RL Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo), [Sample Factory](https://samplefactory.dev/) and [CleanRL](https://github.com/vwxyzjn/cleanrl).
+- 🤖 **Train agents in unique environments** such as [SnowballFight](https://huggingface.co/spaces/ThomasSimonini/SnowballFight), [Huggy the Doggo 🐶](https://huggingface.co/spaces/ThomasSimonini/Huggy), [MineRL (Minecraft ⛏️)](https://minerl.io/), [VizDoom (Doom)](https://vizdoom.cs.put.edu.pl/) and classical ones such as [Space Invaders](https://www.gymlibrary.dev/environments/atari/) and [PyBullet](https://pybullet.org/wordpress/).
 - 💾 Publish your **trained agents with one line of code to the Hub**. But also download powerful agents from the community.
 - 🏆 Participate in challenges where you will **evaluate your agents against other teams. But also play against AI you'll train.**
@@ -43,9 +42,8 @@ Sign up 👉 here
 The course is composed of:
 
 - *A theory part*: where you learn a **concept in theory (article)**.
-- *A hands-on*: with a **weekly live hands-on session** in ADD DATE every week at ADD TIME. where you'll learn to use famous Deep RL libraries such as Stable Baselines3, RL Baselines3 Zoo, and RLlib to train your agents in unique environments such as SnowballFight, Huggy the Doggo dog, and classical ones such as Space Invaders and PyBullet.
-We strongly advise you to participate in the live sessions so that you can ask questions. But if you can't participate in the live sessions, the sessions are recorded and will be posted to the course Discord server.
-- *Challenges* such AI vs. AI and leaderboard.
+- *A hands-on*: where you’ll learn **to use famous Deep RL libraries** to train your agents in unique environments. These hands-on exercises will be **Google Colab notebooks and tutorial videos**.
+- *Challenges*: such as AI vs. AI competitions and a leaderboard.
 
 ## Two paths: choose your own adventure [[two-paths]]
@@ -68,7 +66,7 @@ To get most of the course, we have some advice:
 1. Join or create study groups in Discord : studying in groups is always easier. To do that, you need to join our discord server.
 2. **Do the quizzes and assignments**: the best way to learn is to do and test yourself.
-3. **Define a schedule to stay in sync: you can use our recommended pace schedule below or create yours.**
+3. **Define a schedule to stay in sync**: you can use our recommended pace schedule below or create yours.
 
 Course advice
@@ -76,9 +74,9 @@ To get most of the course, we have some advice:
 You need only 3 things:
 
-- A computer with an internet connection.
-- Google Colab (free version): most of our hands-on will use Google Colab, the **free version is enough.**
-- A Hugging Face Account: to push and load models. If you don’t have an account yet you can create one here (it’s free).
+- *A computer* with an internet connection.
+- *Google Colab (free version)*: most of our hands-on will use Google Colab, the **free version is enough.**
+- A *Hugging Face Account*: to push and load models. If you don’t have an account yet you can create one here (it’s free).
 
 Course tools needed
@@ -95,11 +93,15 @@ Each chapter in this course is designed **to be completed in 1 week, with approx
 ## Who are we [[who-are-we]]
 
-About the authors:
+About the author:
 
-Thomas Simonini is a Developer Advocate at Hugging Face 🤗 specializing in Deep Reinforcement Learning. He founded Deep Reinforcement Learning Course in 2018, which became one of the most used courses in Deep RL.
+- Thomas Simonini is a Developer Advocate at Hugging Face 🤗 specializing in Deep Reinforcement Learning. He founded the Deep Reinforcement Learning Course in 2018, which became one of the most used courses in Deep RL.
+
+About the reviewers:
+
+- Omar Sanseviero is a Machine Learning engineer at Hugging Face, where he works at the intersection of ML, community, and open source. Previously, Omar worked as a Software Engineer at Google on the Assistant and TensorFlow Graphics teams. He is from Peru and likes llamas 🦙.
+- Sayak Paul is a Developer Advocate Engineer at Hugging Face. He's interested in representation learning (self-supervision, semi-supervision, model robustness), and he loves watching crime and action thrillers 🔪.
-ADD OMAR
 
 ## When do the challenges start? [[challenges]]
 
diff --git a/units/en/unit0/setup.mdx b/units/en/unit0/setup.mdx
index 0ad292c..4288c48 100644
--- a/units/en/unit0/setup.mdx
+++ b/units/en/unit0/setup.mdx
@@ -2,18 +2,18 @@
 After all this information, it's time to get started. We're going to do two things:
 
-1. Create your Hugging Face account if it's not already done
-2. Sign up to Discord and introduce yourself (don't be shy 🤗)
+1. **Create your Hugging Face account** if it's not already done
+2. **Sign up to Discord and introduce yourself** (don't be shy 🤗)
 
 ### Let's create my Hugging Face account
 
-(If it's not already done) create an account to HF here
+(If it's not already done) create an HF account here
 
 ### Let's join our Discord server
 
 You can now sign up for our Discord Server. This is the place where you **can exchange with the community and with us, create and join study groups to grow each other and more**
 
-👉🏻 Join our discord server here.
+👉🏻 Join our Discord server here.
 
 When you join, remember to introduce yourself in #introduce-yourself and sign-up for reinforcement channels in #role-assignments.
 
diff --git a/units/en/unit1/introduction.mdx b/units/en/unit1/introduction.mdx
index 7f9860e..bcdc0ea 100644
--- a/units/en/unit1/introduction.mdx
+++ b/units/en/unit1/introduction.mdx
@@ -1,7 +1,6 @@
 # Introduction to Deep Reinforcement Learning [[introduction-to-deep-reinforcement-learning]]
-
-TODO: ADD IMAGE THUMBNAIL
+Unit 1 thumbnail
 
 Welcome to the most fascinating topic in Artificial Intelligence: **Deep Reinforcement Learning.**
 
@@ -10,17 +9,12 @@
 Deep RL is a type of Machine Learning where an agent learns **how to behave**
 
 So in this first chapter, **you'll learn the foundations of Deep Reinforcement Learning.**
 
-Then, you'll **train your first two Deep Reinforcement Learning agents** using Stable-Baselines3 a Deep Reinforcement Learning library.:
+Then, you'll **train your Deep Reinforcement Learning agent, a lunar lander, to land correctly on the Moon** using Stable-Baselines3, a Deep Reinforcement Learning library.
-1. A Lunar Lander agent that will learn to **land correctly on the Moon 🌕**
-2. A car that needs **to reach the top of the mountain ⛰️ **.
-
-TODO: Add illustration MountainCar and MoonLanding
+LunarLander
 
 And finally, you'll **upload it to the Hugging Face Hub 🤗, a free, open platform where people can share ML models, datasets, and demos.**
 
-TODO: ADD model card illustration
-
 It's essential **to master these elements** before diving into implementing Deep Reinforcement Learning agents. The goal of this chapter is to give you solid foundations.
 
 So let's get started! 🚀
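
Aside for reviewers: the idea this unit introduces, an agent that learns how to behave by acting and observing rewards, can be sketched as a toy, dependency-free Python example. This is a hypothetical tabular Q-learning agent on a made-up 5-cell corridor, not the course's Stable-Baselines3/LunarLander setup:

```python
# Toy RL loop: a tabular Q-learning agent on a tiny 1-D corridor.
# The agent starts at cell 0 and must reach cell 4 (the goal);
# moving right at every cell is the optimal policy.
import random

N_STATES = 5          # cells 0..4, cell 4 is terminal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(200):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current Q-table, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# The greedy policy learned from the Q-table: the chosen action per cell.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy moves right (+1) from every non-terminal cell. Deep RL replaces the Q-table with a neural network so the same loop scales to large observation spaces like LunarLander's.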