diff --git a/units/en/unit0/introduction.mdx b/units/en/unit0/introduction.mdx
index de30da8..e48f0c4 100644
--- a/units/en/unit0/introduction.mdx
+++ b/units/en/unit0/introduction.mdx
@@ -55,11 +55,11 @@ You can choose to follow this course either:
 - *To get a certificate of completion*: you need to complete 80% of the assignments before the end of March 2023.
 - *As a simple audit*: you can participate in all challenges and do assignments if you want, but you have no deadlines.
 
+Both paths **are completely free**. Whatever path you choose, we advise you **to follow the recommended pace to enjoy the course and challenges with your fellow classmates.**
 
 You don't need to tell us which path you choose. At the end of March, when we verify the assignments **if you get more than 80% of the assignments done, you'll get a certificate.**
 
 
-
 ## How to get most of the course? [[advice]]
 
 To get most of the course, we have some advice:
diff --git a/units/en/unit1/additional-readings.mdx b/units/en/unit1/additional-readings.mdx
index 73e6a1e..b881244 100644
--- a/units/en/unit1/additional-readings.mdx
+++ b/units/en/unit1/additional-readings.mdx
@@ -1,5 +1,7 @@
 # Additional Readings [[additional-readings]]
 
+These are **optional readings** if you want to go deeper.
+
 ## Deep Reinforcement Learning [[deep-rl]]
 
 - [Reinforcement Learning: An Introduction, Richard Sutton and Andrew G. Barto Chapter 1, 2 and 3](http://incompleteideas.net/book/RLbook2020.pdf)
diff --git a/units/en/unit1/quiz.mdx b/units/en/unit1/quiz.mdx
index e89379f..3ccf3ca 100644
--- a/units/en/unit1/quiz.mdx
+++ b/units/en/unit1/quiz.mdx
@@ -165,4 +165,4 @@
 In Reinforcement Learning, we need to **balance how much we explore the environment and how much we exploit what we know about the environment.**
 
 
-Congrats on finishing this Quiz 🥳, if you missed some elements, take time to read again the chapter to reinforce (😏) your knowledge. 
+Congrats on finishing this Quiz 🥳! If you missed some elements, take the time to read the chapter again to reinforce (😏) your knowledge. But **do not worry**: during the course we'll go over these concepts again, and you'll **reinforce your theoretical knowledge with hands-on practice**.
diff --git a/units/en/unit2/additional-readings.mdx b/units/en/unit2/additional-readings.mdx
index ebc3fa9..9a14724 100644
--- a/units/en/unit2/additional-readings.mdx
+++ b/units/en/unit2/additional-readings.mdx
@@ -1,5 +1,7 @@
 # Additional Readings [[additional-readings]]
 
+These are **optional readings** if you want to go deeper.
+
 ## Monte Carlo and TD Learning [[mc-td]]
 
 To dive deeper on Monte Carlo and Temporal Difference Learning:
diff --git a/units/en/unit3/additional-readings.mdx b/units/en/unit3/additional-readings.mdx
index 1c91b69..9c615fc 100644
--- a/units/en/unit3/additional-readings.mdx
+++ b/units/en/unit3/additional-readings.mdx
@@ -1,5 +1,7 @@
 # Additional Readings [[additional-readings]]
 
+These are **optional readings** if you want to go deeper.
+
 - [Foundations of Deep RL Series, L2 Deep Q-Learning by Pieter Abbeel](https://youtu.be/Psrhxy88zww)
 - [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/abs/1312.5602)
 - [Double Deep Q-Learning](https://papers.nips.cc/paper/2010/hash/091d584fced301b442654dd8c23b3fc9-Abstract.html)