mirror of
https://github.com/huggingface/deep-rl-class.git
synced 2026-04-13 18:00:45 +08:00
Update
@@ -1,4 +1,4 @@
-# Live 1: Deep RL Course. Intro, Q&A, and playing with Huggy 🐶
+# Live 1: How the course works, Q&A, and playing with Huggy 🐶
 
 In this first live stream, we explained how the course works (scope, units, challenges, and more) and answered your questions.
@@ -6,5 +6,4 @@ And finally, we saw some LunarLander agents you've trained and play with your Hu
 
 <Youtube id="JeJIswxyrsM" />
 
 To know when the next live session is scheduled, **check the Discord server**. We will also send **you an email**. If you can't participate, don't worry, we record the live sessions.
@@ -30,7 +30,7 @@ No, because one frame is not enough to have a sense of motion! But what if I add
 
 <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit4/temporal-limitation-2.jpg" alt="Temporal Limitation"/>
 
 That’s why, to capture temporal information, we stack four frames together.
 
-Then, the stacked frames are processed by three convolutional layers. These layers **allow us to capture and exploit spatial relationships in images**. But also, because frames are stacked together, **you can exploit some spatial properties across those frames**.
+Then, the stacked frames are processed by three convolutional layers. These layers **allow us to capture and exploit spatial relationships in images**. But also, because frames are stacked together, **you can exploit some temporal properties across those frames**.
 
 If you don't know what convolutional layers are, don't worry. You can check [Lesson 4 of this free Deep Learning course by Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188).
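The frame-stacking idea in the changed paragraph can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the course repo; the `stack_frames` helper and the 84×84 frame size (the classic DQN preprocessing size) are assumptions for the example:

```python
import numpy as np

def stack_frames(frames):
    # Hypothetical helper: stack the last four grayscale frames along a
    # new trailing (channel) axis, so a convolutional network sees
    # motion across time instead of a single static image.
    assert len(frames) == 4, "DQN-style stacking uses the last 4 frames"
    return np.stack(frames, axis=-1)

# Four dummy 84x84 frames -> one (84, 84, 4) observation.
frames = [np.full((84, 84), i, dtype=np.uint8) for i in range(4)]
obs = stack_frames(frames)
print(obs.shape)  # (84, 84, 4)
```

In practice you would keep a rolling buffer of the most recent frames and restack after every environment step; wrappers that do this for you exist in common RL libraries.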