Merge pull request #45 from huggingface/quiz/unit2-part1

Add quiz Unit 2 Part 1
This commit is contained in:
Thomas Simonini
2022-05-31 19:04:40 +02:00
committed by GitHub
11 changed files with 104 additions and 3 deletions


@@ -44,9 +44,11 @@ Are you new to Discord? Check our **discord 101 to get the best practices** 👉
3️⃣ 📖 **Read [An Introduction to Q-Learning Part 1](https://huggingface.co/blog/deep-rl-q-part1)**.
4️⃣ 📖 **Read [An Introduction to Q-Learning Part 2](https://huggingface.co/blog/deep-rl-q-part2)**.
4️⃣ 📝 Take a piece of paper and **check your knowledge with this series of questions** ❔ 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit2/quiz1.md
5️⃣ 👩‍💻 Then dive into the hands-on, where **you'll implement your first RL agent from scratch**, a Q-Learning agent, and train it in two environments:
5️⃣ 📖 **Read [An Introduction to Q-Learning Part 2](https://huggingface.co/blog/deep-rl-q-part2)**.
6️⃣ 👩‍💻 Then dive into the hands-on, where **you'll implement your first RL agent from scratch**, a Q-Learning agent, and train it in two environments:
1. Frozen Lake v1 ❄️: where our agent will need to **go from the starting state (S) to the goal state (G)** by walking only on frozen tiles (F) and avoiding holes (H).
2. An autonomous taxi 🚕: where the agent will need **to learn to navigate** a city to **transport its passengers from point A to point B** (see the setup sketch below).
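As a quick taste, here is a minimal, hypothetical sketch of instantiating both environments and running one random-action episode with the classic `gym` API (the hands-on notebook, not this snippet, is the reference implementation):

```python
import gym

# Minimal sketch: create both environments and run one episode with a
# random policy (gym's pre-0.26 reset/step API; toy illustration only).
for env_id in ["FrozenLake-v1", "Taxi-v3"]:
    env = gym.make(env_id)
    state = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()            # pick a random action
        state, reward, done, info = env.step(action)  # one interaction step
    env.close()
```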
@@ -58,7 +60,7 @@ The leaderboard 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-L
You can work directly **with the colab notebook, so you don't have to install anything on your machine (and it's free)**.
6️⃣ The best way to learn **is to try things on your own**. That's why we have a challenges section in the colab where we give you some ideas on how you can go further: using another environment, using another model, etc.
7️⃣ The best way to learn **is to try things on your own**. That's why we have a challenges section in the colab where we give you some ideas on how you can go further: using another environment, using another model, etc.
## Additional readings 📚
- [Reinforcement Learning: An Introduction, Richard Sutton and Andrew G. Barto, Chapters 5, 6 and 7](http://incompleteideas.net/book/RLbook2020.pdf)

BIN unit2/assets/img/MC-3.jpg (new file, 144 KiB; binary not shown)
BIN unit2/assets/img/TD-1.jpg (new file, 220 KiB; binary not shown)
BIN unit2/assets/img/mc-ex.jpg (new file, 131 KiB; binary not shown)
BIN unit2/assets/img/td-ex.jpg (new file, 119 KiB; binary not shown)
(Plus five further new images under unit2/assets/img/ of 108, 324, 270, 244 and 445 KiB whose filenames are not captured in this view.)
99 unit2/quiz1.md Normal file
@@ -0,0 +1,99 @@
# Knowledge Check ✔️
The best way to learn and [avoid the illusion of competence](https://fr.coursera.org/lecture/learning-how-to-learn/illusions-of-competence-BuFzf) **is to test yourself.** This will help you find **where you need to reinforce your knowledge**.
📝 Take a piece of paper and try to answer by writing, **then check the solutions**.
### Q1: What are the two main approaches to finding an optimal policy?
<details>
<summary>Solution</summary>
The two main approaches are:
- *Policy-based methods*: **Train the policy directly** to learn which action to take given a state.
- *Value-based methods*: Train a value function to **learn which state is more valuable and use this value function to take the action that leads to it**.
<img src="assets/img/two-approaches.jpg" alt="Two approaches of Deep RL"/>
📖 If you don't remember, check 👉 https://huggingface.co/blog/deep-rl-q-part1#what-is-rl-a-short-recap
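To make the contrast concrete, here is a minimal, hypothetical sketch (the toy `policy` and `Q` tables are invented for illustration, not course code) of how each approach picks an action:

```python
import numpy as np

# Toy tables for a 4-state, 2-action problem (made-up numbers, illustration only).
policy = np.array([[0.9, 0.1],   # policy-based: a learned P(action | state)
                   [0.2, 0.8],
                   [0.5, 0.5],
                   [0.7, 0.3]])
Q = np.array([[1.0, 0.2],        # value-based: learned action values Q(state, action)
              [0.1, 2.3],
              [0.4, 0.4],
              [3.0, 0.5]])

state = 1
action_policy_based = np.random.choice(2, p=policy[state])  # sample from the policy
action_value_based = int(np.argmax(Q[state]))               # act greedily w.r.t. the values
```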
</details>
### Q2: What is the Bellman Equation?
<details>
<summary>Solution</summary>
**The Bellman equation is a recursive equation** that works like this: instead of computing the return from scratch for every state, we can write the value of any state as:
$V(S_t) = R_{t+1} + \gamma \, V(S_{t+1})$
The immediate reward plus the discounted value of the state that follows.
📖 If you don't remember, check 👉 https://huggingface.co/blog/deep-rl-q-part1#the-bellman-equation-simplify-our-value-estimation
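As a minimal numeric sketch (a toy 4-state deterministic chain with made-up rewards, not course code), we can fill in the values backwards from the terminal state:

```python
# Value of each state on a deterministic 4-state chain, computed backwards
# with the Bellman equation V(s) = R + gamma * V(s') (toy numbers only).
gamma = 0.9
rewards = [1.0, 0.0, 2.0]   # reward for the transition out of states 0, 1, 2
V = [0.0] * 4               # state 3 is terminal, so V(3) = 0
for s in reversed(range(3)):
    V[s] = rewards[s] + gamma * V[s + 1]
print(V)  # approximately [2.62, 1.8, 2.0, 0.0]
```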
</details>
### Q3: Define each part of the Bellman Equation
<img src="assets/img/bellman4-quiz.jpg" alt="Bellman equation quiz"/>
<details>
<summary>Solution</summary>
<img src="assets/img/bellman4.jpg" alt="Bellman equation solution"/>
📖 If you don't remember, check 👉 https://huggingface.co/blog/deep-rl-q-part1#the-bellman-equation-simplify-our-value-estimation
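If the images do not display, here is the same decomposition written out in the simplified notation used above (a textual sketch of the labeled diagram, not a copy of it):

$V(S_t) = R_{t+1} + \gamma \, V(S_{t+1})$

- $V(S_t)$: the value of the current state
- $R_{t+1}$: the immediate reward
- $\gamma$: the discount factor
- $\gamma \, V(S_{t+1})$: the discounted value of the state that follows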
</details>
### Q4: What is the difference between Monte Carlo and Temporal Difference learning methods?
<details>
<summary>Solution</summary>
There are two types of methods to learn a policy or a value function:
- With the *Monte Carlo method*, we update the value function **from a complete episode**, and so we use the actual, accurate discounted return of this episode.
- With the *TD Learning method*, we update the value function **from a single step, replacing $G_t$, which we don't have yet, with an estimated return called the TD target**.
<img src="assets/img/summary-learning-mtds.jpg" alt="summary-learning-mtds"/>
📖 If you don't remember, check 👉 https://huggingface.co/blog/deep-rl-q-part1#monte-carlo-vs-temporal-difference-learning
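A minimal sketch of the two update rules side by side (tabular value function, hypothetical helper names, toy hyperparameters; illustration only):

```python
alpha, gamma = 0.1, 0.99  # toy learning rate and discount factor

def mc_update(V, episode):
    """Monte Carlo: wait for the episode to end, then use the actual return G_t."""
    G = 0.0
    for state, reward in reversed(episode):  # episode = [(s_0, r_1), (s_1, r_2), ...]
        G = reward + gamma * G               # actual discounted return from this state
        V[state] += alpha * (G - V[state])

def td_update(V, state, reward, next_state):
    """TD(0): update after one step, using the TD target as an estimate of G_t."""
    td_target = reward + gamma * V[next_state]
    V[state] += alpha * (td_target - V[state])
```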
</details>
### Q5: Define each part of Temporal Difference learning formula
<img src="assets/img/td-ex.jpg" alt="TD Learning exercise"/>
<details>
<summary>Solution</summary>
<img src="assets/img/TD-1.jpg" alt="TD Exercise"/>
📖 If you don't remember, check 👉 https://huggingface.co/blog/deep-rl-q-part1#temporal-difference-learning-learning-at-each-step
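For reference, the TD(0) update in the solution image can be written out as follows (standard notation; the labels are a textual sketch, not a copy of the diagram):

$V(S_t) \leftarrow V(S_t) + \alpha \, [R_{t+1} + \gamma \, V(S_{t+1}) - V(S_t)]$

- $\alpha$: the learning rate
- $R_{t+1} + \gamma \, V(S_{t+1})$: the TD target, our one-step estimate of the return
- $R_{t+1} + \gamma \, V(S_{t+1}) - V(S_t)$: the TD error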
</details>
### Q6: Define each part of Monte Carlo learning formula
<img src="assets/img/mc-ex.jpg" alt="MC Learning exercise"/>
<details>
<summary>Solution</summary>
<img src="assets/img/monte-carlo-approach.jpg" alt="MC Exercise"/>
📖 If you don't remember, check 👉 https://huggingface.co/blog/deep-rl-q-part1#monte-carlo-learning-at-the-end-of-the-episode
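Likewise, the Monte Carlo update in the solution image can be written out as (standard notation; a textual sketch, not a copy of the diagram):

$V(S_t) \leftarrow V(S_t) + \alpha \, [G_t - V(S_t)]$

where $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots$ is the actual discounted return of the complete episode and $\alpha$ is the learning rate.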
</details>
---
Congrats on **finishing this quiz** 🥳! If you missed some elements, take the time to [read the chapter again](https://huggingface.co/blog/deep-rl-q-part1) to reinforce (😏) your knowledge.
**Keep Learning, Stay Awesome**