From 33b97e99ec8a3fc05ec211d280be6254fc5ca9a3 Mon Sep 17 00:00:00 2001
From: Pierre Counathe
Date: Fri, 9 Feb 2024 19:21:04 -0800
Subject: [PATCH] proposal

---
 units/en/unit4/pg-theorem.mdx      | 4 ++--
 units/en/unit4/policy-gradient.mdx | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/units/en/unit4/pg-theorem.mdx b/units/en/unit4/pg-theorem.mdx
index 9db62d9..7b393cc 100644
--- a/units/en/unit4/pg-theorem.mdx
+++ b/units/en/unit4/pg-theorem.mdx
@@ -21,13 +21,13 @@ So we have:
 
 We can rewrite the gradient of the sum as the sum of the gradient:
 
-\\( = \sum_{\tau} \nabla_\theta P(\tau;\theta)R(\tau) \\)
+\\( = \sum_{\tau} \nabla_\theta (P(\tau;\theta)R(\tau)) = \sum_{\tau} \nabla_\theta P(\tau;\theta)R(\tau) \\) as \\(R(\tau)\\) is not dependent on \\(\theta\\)
 
 We then multiply every term in the sum by \\(\frac{P(\tau;\theta)}{P(\tau;\theta)}\\)(which is possible since it's = 1)
 
 \\( = \sum_{\tau} \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta)R(\tau) \\)
 
-We can simplify further this since \\( \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta) = P(\tau;\theta)\frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)} \\)
+We can simplify this further since \\( \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta) = P(\tau;\theta)\frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)} \\). Thus we can rewrite the sum as:
 
 \\(= \sum_{\tau} P(\tau;\theta) \frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)}R(\tau) \\)
 
diff --git a/units/en/unit4/policy-gradient.mdx b/units/en/unit4/policy-gradient.mdx
index 1a178d6..e329e02 100644
--- a/units/en/unit4/policy-gradient.mdx
+++ b/units/en/unit4/policy-gradient.mdx
@@ -107,7 +107,7 @@ In a loop:
 - Update the weights of the policy: \\(\theta \leftarrow \theta + \alpha \hat{g}\\)
 
 We can interpret this update as follows:
-- \\(\nabla_\theta log \pi_\theta(a_t|s_t)\\) is the direction of **steepest increase of the (log) probability** of selecting action at from state st.
+- \\(\nabla_\theta log \pi_\theta(a_t|s_t)\\) is the direction of **steepest increase of the (log) probability** of selecting action \\(a_t\\) from state \\(s_t\\).
 This tells us **how we should change the weights of policy** if we want to increase/decrease the log probability of selecting action \\(a_t\\) at state \\(s_t\\).
 - \\(R(\tau)\\): is the scoring function:
   - If the return is high, it will **push up the probabilities** of the (state, action) combinations.
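
For reference, the update that the policy-gradient.mdx hunk interprets, \\(\theta \leftarrow \theta + \alpha \hat{g}\\) with \\(\hat{g} = \sum_{t}\nabla_\theta \log \pi_\theta(a_t|s_t)R(\tau)\\), can be sketched as below. This is a minimal illustration, not code from the course: the `Policy` network, the `reinforce_update` helper, and the fake episode data are assumptions, and PyTorch autograd stands in for the hand-derived gradient.

```python
# Minimal REINFORCE-style update sketch (illustration only, not course code).
# It implements the step interpreted in the policy-gradient.mdx hunk:
#   theta <- theta + alpha * g_hat,  g_hat = sum_t grad_theta log pi_theta(a_t|s_t) * R(tau)
import torch
import torch.nn as nn
from torch.distributions import Categorical


class Policy(nn.Module):
    """A small categorical policy pi_theta(a|s) (hypothetical architecture)."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> Categorical:
        return Categorical(logits=self.net(obs))


def reinforce_update(policy, optimizer, observations, actions, episode_return):
    """One policy-gradient step for a single collected episode tau.

    Minimizing -(sum_t log pi_theta(a_t|s_t)) * R(tau) by gradient descent
    ascends along g_hat, so the optimizer step realizes
    theta <- theta + alpha * g_hat.
    """
    obs = torch.as_tensor(observations, dtype=torch.float32)
    acts = torch.as_tensor(actions)
    log_probs = policy(obs).log_prob(acts)       # log pi_theta(a_t|s_t) per step
    loss = -log_probs.sum() * episode_return     # scaled by the scoring function R(tau)
    optimizer.zero_grad()
    loss.backward()                              # autograd computes g_hat
    optimizer.step()                             # apply theta <- theta + alpha * g_hat


if __name__ == "__main__":
    # Fake two-step episode with 4-dimensional observations and 2 actions.
    policy = Policy(obs_dim=4, n_actions=2)
    optimizer = torch.optim.SGD(policy.parameters(), lr=1e-2)
    observations = [[0.1, 0.0, -0.2, 0.3], [0.0, 0.1, 0.2, -0.1]]
    actions = [0, 1]
    reinforce_update(policy, optimizer, observations, actions, episode_return=1.0)
```

With a positive return the step increases the log-probabilities of the sampled (state, action) pairs, and with a negative return it decreases them, matching the hunk's reading of \\(R(\tau)\\) as a scoring function.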