Merge pull request #151 from huggingface/ThomasSimonini/SmallUpdates
Small updates typos and others
@@ -11,3 +11,4 @@ These are **optional readings** if you want to go deeper.
## Gym [[gym]]
- [Getting Started With OpenAI Gym: The Basic Building Blocks](https://blog.paperspace.com/getting-started-with-openai-gym/)
- [Make your own Gym custom environment](https://www.gymlibrary.dev/content/environment_creation/)
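If you want a quick feel for what those readings cover, here is a minimal sketch of the basic Gym interaction loop (assuming the `gym >= 0.26` API, where `reset()` returns `(obs, info)` and `step()` returns five values; the environment choice is arbitrary):

```python
# Minimal Gym interaction loop (a sketch, not from the course notebook).
import gym

env = gym.make("CartPole-v1")            # any registered environment works here
obs, info = env.reset()                  # start a new episode

for _ in range(100):
    action = env.action_space.sample()   # placeholder policy: act at random
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:          # episode is over: start a new one
        obs, info = env.reset()

env.close()
```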
@@ -34,6 +34,7 @@ You can either do this hands-on by reading the notebook or following it with the
<Youtube id="CsuIANBnSq8" />
# Unit 1: Train your first Deep Reinforcement Learning Agent 🤖
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Unit 1 thumbnail" width="100%">
@@ -42,9 +43,6 @@ In this notebook, you'll train your **first Deep Reinforcement Learning agent**
⬇️ Here is an example of what **you will achieve in just a couple of minutes.** ⬇️
```python
%%html
<video controls autoplay><source src="https://huggingface.co/ThomasSimonini/ppo-LunarLander-v2/resolve/main/replay.mp4" type="video/mp4"></video>
```
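For reference, here is a hedged sketch of the kind of training loop that produces such an agent, assuming Stable-Baselines3 and the Box2D extras are installed (the notebook itself walks through the exact setup and hyperparameters):

```python
# Sketch only: train a PPO agent on LunarLander-v2 with Stable-Baselines3.
import gym
from stable_baselines3 import PPO

env = gym.make("LunarLander-v2")
model = PPO("MlpPolicy", env, verbose=1)   # MLP policy over the flat observation
model.learn(total_timesteps=100_000)       # a short run; longer training lands better
model.save("ppo-LunarLander-v2")
```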
@@ -71,7 +69,7 @@ At the end of the notebook, you will:
-## This notebook is from Deep Reinforcement Learning Course
+## This hands-on is from Deep Reinforcement Learning Course
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/deep-rl-course-illustration.jpg" alt="Deep RL Course illustration"/>
In this free course, you will:
@@ -90,7 +88,7 @@ The best way to keep in touch and ask questions is to join our discord server to
## Prerequisites 🏗️
Before diving into the notebook, you need to:
-🔲 📝 **Done Unit 0** that gives you all the **information about the course and help you to onboard** 🤗
+🔲 📝 **Read Unit 0** that gives you all the **information about the course and help you to onboard** 🤗
🔲 📚 **Develop an understanding of the foundations of Reinforcement learning** (MC, TD, Rewards hypothesis...) by doing Unit 1
@@ -58,6 +58,6 @@ But you'll study an example with gamma = 0.99 in the Q-Learning section of this
-To recap, the idea of the Bellman equation is that instead of calculating each value as the sum of the expected return, **which is a long process.** This is equivalent **to the sum of immediate reward + the discounted value of the state that follows.**
+To recap, the idea of the Bellman equation is that instead of calculating each value as the sum of the expected return, **which is a long process**, we calculate the value as **the sum of immediate reward + the discounted value of the state that follows.**
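Written compactly (a standard form of the Bellman expectation equation, stated here for reference), that recap reads:

\\(V_{\pi}(s) = E_{\pi}[R_{t+1} + \gamma * V_{\pi}(S_{t+1}) \mid S_t = s]\\)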
Before going to the next section, think about the role of gamma in the Bellman equation. What happens if the value of gamma is very low (e.g. 0.1 or even 0)? What happens if the value is 1? What happens if the value is very high, such as a million?
@@ -76,8 +76,7 @@ For instance, if we train a state-value function using Monte Carlo:
## Temporal Difference Learning: learning at each step [[td-learning]]
-- **Temporal Difference, on the other hand, waits for only one interaction (one step) \\(S_{t+1}\\)**
-- to form a TD target and update \\(V(S_t)\\) using \\(R_{t+1}\\) and \\( \gamma * V(S_{t+1})\\).
+**Temporal Difference, on the other hand, waits for only one interaction (one step) \\(S_{t+1}\\)** to form a TD target and update \\(V(S_t)\\) using \\(R_{t+1}\\) and \\( \gamma * V(S_{t+1})\\).
The idea with **TD is to update the \\(V(S_t)\\) at each step.**
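As an illustration (a toy sketch rather than code from the course), a single tabular TD(0) update looks like this; the states, reward, and step size below are made up:

```python
# One tabular TD(0) update on a made-up transition (illustration only).
from collections import defaultdict

V = defaultdict(float)       # V(s), every unseen state starts at 0
alpha, gamma = 0.1, 0.99     # learning rate and discount factor

state, reward, next_state = "s0", 1.0, "s1"    # one observed step (hypothetical)

td_target = reward + gamma * V[next_state]     # R_{t+1} + gamma * V(S_{t+1})
V[state] += alpha * (td_target - V[state])     # move V(S_t) toward the TD target
```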
@@ -36,7 +36,7 @@ Consequently, whatever method you use to solve your problem, **you will have a
So the difference is:
- In policy-based, **the optimal policy (denoted π\*) is found by training the policy directly.**
-- In value-based, **finding an optimal value function (denoted Q\* or V\*, we'll study the difference after) in our leads to having an optimal policy.**
+- In value-based, **finding an optimal value function (denoted Q\* or V\*, we'll study the difference after) leads to having an optimal policy.**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/link-value-policy.jpg" alt="Link between value and policy"/>
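To make that link concrete (a toy sketch with a made-up Q-table, not the course's code): once you have an optimal action-value function, the optimal policy simply acts greedily with respect to it, \\(\pi^{*}(s) = \text{argmax}_{a} Q^{*}(s, a)\\).

```python
# Reading a greedy policy off a (made-up) optimal Q-table.
import numpy as np

Q = np.array([[0.0, 1.0],    # Q*(s0, a0), Q*(s0, a1)
              [2.0, 0.5]])   # Q*(s1, a0), Q*(s1, a1)

def greedy_policy(state: int) -> int:
    return int(np.argmax(Q[state]))   # pick the action with the highest value

print(greedy_policy(0))  # -> 1
print(greedy_policy(1))  # -> 0
```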