mirror of
https://github.com/huggingface/deep-rl-class.git
synced 2026-04-13 17:29:52 +08:00
Small updates Unit 2
@@ -41,7 +41,7 @@ If we go back to our example, we can say that the value of State 1 is equal to t
To calculate the value of State 1: the sum of rewards **if the agent started in that state 1** and then followed the **policy for all the time steps.**
-This is equivalent to \\(V(S_{t})\\) = Immediate reward \\(R_{t+1}\\) + Discounted value of the next state \\(gamma * V(S_{t+1})\\)
+This is equivalent to \\(V(S_{t})\\) = Immediate reward \\(R_{t+1}\\) + Discounted value of the next state \\(\gamma * V(S_{t+1})\\)
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/bellman6.jpg" alt="Bellman equation"/>
</figure>
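The changed line states the Bellman recursion \\(V(S_{t}) = R_{t+1} + \gamma * V(S_{t+1})\\). As a minimal sketch of how that recursion is evaluated (the rewards, discount factor, and chain below are made-up illustration values, not taken from the course), state values can be computed backwards over a small deterministic chain:

```python
# Sketch: evaluating the Bellman equation V(S_t) = R_{t+1} + gamma * V(S_{t+1})
# on a tiny deterministic chain. Rewards and gamma are illustrative only.

gamma = 0.9                   # discount factor
rewards = [1.0, 2.0, 3.0]     # R_{t+1} received when leaving each state
V_terminal = 0.0              # value after the final state

# Work backwards: each state's value is its immediate reward
# plus the discounted value of the next state.
V = [0.0] * len(rewards)
next_value = V_terminal
for t in reversed(range(len(rewards))):
    V[t] = rewards[t] + gamma * next_value
    next_value = V[t]

print(V)
```

Working backwards makes each \\(V(S_{t+1})\\) available before \\(V(S_{t})\\) is needed, so a single pass suffices on a finite chain.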