From 3e9e315e5305bfd64987bc5bd9c62ab2ffd2dd0a Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Sat, 3 Dec 2022 11:13:19 +0100
Subject: [PATCH] Update units/en/unit2/bellman-equation.mdx

Co-authored-by: Sayak Paul
---
 units/en/unit2/bellman-equation.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/units/en/unit2/bellman-equation.mdx b/units/en/unit2/bellman-equation.mdx
index f61ec13..be70441 100644
--- a/units/en/unit2/bellman-equation.mdx
+++ b/units/en/unit2/bellman-equation.mdx
@@ -47,7 +47,7 @@ Which is equivalent to \\(V(S_{t})\\) = Immediate reward \\(R_{t+1}\\) + Dis
 
 Bellman equation
 
-For simplification, here we don't discount, so gamma = 1.
+In the interest of simplicity, here we don't discount, so gamma = 1.
 
 - The value of \\(V(S_{t+1}) \\) = Immediate reward \\(R_{t+2}\\) + Discounted value of the next state ( \\(gamma * V(S_{t+2})\\) ).
 - And so on.
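For reference, a minimal worked sketch of the recursion described in the hunk above: it unrolls the Bellman equation one step and then applies the "no discounting, gamma = 1" simplification that the patched sentence introduces. The symbols follow the surrounding text; the expansion itself is illustrative and not part of the patch.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Bellman recursion unrolled one step, then simplified with gamma = 1
% (the "we don't discount" case from the patched sentence).
\begin{align*}
V(S_t) &= R_{t+1} + \gamma\, V(S_{t+1}) \\
       &= R_{t+1} + \gamma\bigl(R_{t+2} + \gamma\, V(S_{t+2})\bigr) \\
\intertext{and with $\gamma = 1$, the value is simply the sum of the rewards to come:}
V(S_t) &= R_{t+1} + R_{t+2} + R_{t+3} + \dotsb
\end{align*}
\end{document}
```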