Update policy-gradient.mdx

This commit is contained in:
fzyzcjy
2023-11-08 12:49:55 +08:00
committed by GitHub
parent 9cb31c6e1c
commit 59bce06bea


@@ -54,6 +54,7 @@ Let's give some more details on this formula:
- \\(R(\tau)\\) : Return from an arbitrary trajectory. To use this quantity to calculate the expected return, we multiply it by the probability of each possible trajectory.
- \\(P(\tau;\theta)\\) : Probability of each possible trajectory \\(\tau\\). This probability depends on \\(\theta\\), since \\(\theta\\) defines the policy used to select the actions of the trajectory, which has an impact on the states visited.
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/probability.png" alt="Probability"/>
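The expected return described above is the probability-weighted sum of trajectory returns. A minimal sketch of that computation, assuming a small enumerable set of trajectories with made-up illustrative returns and probabilities:

```python
# Hypothetical sketch: expected return J(theta) as a probability-weighted
# sum over an enumerable set of trajectories.
# The returns R(tau) and probabilities P(tau; theta) below are illustrative
# numbers, not produced by any real environment or policy.

returns = [10.0, 4.0, -2.0]   # R(tau) for each trajectory
probs = [0.5, 0.3, 0.2]       # P(tau; theta); must sum to 1

# J(theta) = sum over tau of P(tau; theta) * R(tau)
expected_return = sum(p * r for p, r in zip(probs, returns))
print(expected_return)  # 0.5*10 + 0.3*4 + 0.2*(-2) = 5.8
```

In practice the trajectory space is far too large to enumerate, which is why policy-gradient methods estimate this expectation from sampled trajectories instead.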