Mirror of https://github.com/huggingface/deep-rl-class.git, synced 2026-04-09 05:40:29 +08:00
commit ab8b04f37f
parent ebfd6d5470
committed by GitHub

Fixed typeset in (optional) policy gradient theorem
@@ -28,9 +28,9 @@ We then multiply every term in the sum by \\(\frac{P(\tau;\theta)}{P(\tau;\theta)}\\)

 \\( = \sum_{\tau} \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta)R(\tau) \\)

-We can simplify further this since \\( \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta)\\).
+We can simplify this further since \\( \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta) = P(\tau;\theta)\frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)} \\).

-Thus we can rewrite the sum as \\( = P(\tau;\theta)\frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)} \\)
+Thus we can rewrite the sum as
+
+\\( \sum_{\tau} \frac{P(\tau;\theta)}{P(\tau;\theta)}\nabla_\theta P(\tau;\theta)R(\tau) = \sum_{\tau} P(\tau;\theta) \frac{\nabla_\theta P(\tau;\theta)}{P(\tau;\theta)}R(\tau) \\)
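As a sanity check on the identity the diff corrects (not part of the commit), the rewrite matters because \\( P \frac{\nabla_\theta P}{P} = P \nabla_\theta \log P \\) turns the gradient of the return into an expectation we can estimate by sampling. The sketch below verifies this numerically on a hypothetical two-trajectory distribution parameterized by a sigmoid; the names (`probs`, `REWARDS`, etc.) are illustrative, not from the course code.

```python
import math

# Toy check of the rewrite in the diff:
#   grad_theta J(theta) = sum_tau grad_theta P(tau;theta) R(tau)
#                       = sum_tau P(tau;theta) grad_theta log P(tau;theta) R(tau)
# Hypothetical setup: two "trajectories" with P = [sigmoid(theta), 1 - sigmoid(theta)]
# and rewards R = [1.0, 0.0].

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def probs(theta):
    p = sigmoid(theta)
    return [p, 1.0 - p]

REWARDS = [1.0, 0.0]

def J(theta):
    # Expected return: sum over trajectories of P(tau;theta) * R(tau)
    return sum(p * r for p, r in zip(probs(theta), REWARDS))

def grad_J_finite_diff(theta, eps=1e-6):
    # Direct central finite difference of J
    return (J(theta + eps) - J(theta - eps)) / (2 * eps)

def grad_J_likelihood_ratio(theta, eps=1e-6):
    # sum_tau P(tau;theta) * grad_theta log P(tau;theta) * R(tau),
    # with grad log P itself computed by central finite differences
    ps, ps_hi, ps_lo = probs(theta), probs(theta + eps), probs(theta - eps)
    total = 0.0
    for p, p_hi, p_lo, r in zip(ps, ps_hi, ps_lo, REWARDS):
        grad_log_p = (math.log(p_hi) - math.log(p_lo)) / (2 * eps)
        total += p * grad_log_p * r
    return total

theta = 0.3
# The two sides of the rewritten equation agree numerically:
assert abs(grad_J_finite_diff(theta) - grad_J_likelihood_ratio(theta)) < 1e-6
```

The second form is exactly why the derivation bothers with the \\( \frac{P}{P} \\) trick: a weighted sum with weights \\( P(\tau;\theta) \\) is an expectation, so it can be estimated from sampled trajectories without enumerating all of them.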