Update units/en/unit2/bellman-equation.mdx

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Thomas Simonini authored on 2022-12-03 11:13:33 +01:00, committed by GitHub
parent 3e9e315e53
commit 87dde1584e


@@ -42,7 +42,7 @@ If we go back to our example, the value of State 1= expected cumulative return i
To calculate the value of State 1, sum the rewards the agent gets **if it starts in that state** and then follows the **policy for all time steps**.
Which is equivalent to \\(V(S_{t})\\) = Immediate reward \\(R_{t+1}\\) + Discounted value of the next state \\(\gamma * V(S_{t+1})\\)
This is equivalent to \\(V(S_{t})\\) = Immediate reward \\(R_{t+1}\\) + Discounted value of the next state \\(\gamma * V(S_{t+1})\\)
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/bellman6.jpg" alt="Bellman equation"/>
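
For illustration, here is a minimal Python sketch (not part of the course file; the function name `state_values` is hypothetical) of the recursion \\(V(S_{t}) = R_{t+1} + \gamma * V(S_{t+1})\\), computed backward over a finite trajectory of rewards, assuming the value after the final step is 0:

```python
def state_values(rewards, gamma=0.99):
    """Return V(S_t) for each step of a trajectory.

    Assumes an episodic setting where the value after the final step is 0.
    """
    values = [0.0] * len(rewards)
    next_value = 0.0
    for t in reversed(range(len(rewards))):
        # Bellman backup: immediate reward + discounted value of the next state
        values[t] = rewards[t] + gamma * next_value
        next_value = values[t]
    return values

print(state_values([1.0, 1.0, 1.0], gamma=0.9))  # [2.71, 1.9, 1.0]
```

Working backward from the last step means each value reuses the already-computed value of the next state, rather than re-summing the discounted return from scratch, which is exactly the simplification the Bellman equation provides.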