From 2fe675855080c708b704e0e37fa644b9ac73693b Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Mon, 30 May 2022 16:00:30 +0200
Subject: [PATCH] Create Unit 1 Bonus

---
 unit1-bonus/readme.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 unit1-bonus/readme.md

diff --git a/unit1-bonus/readme.md b/unit1-bonus/readme.md
new file mode 100644
index 0000000..578c1a4
--- /dev/null
+++ b/unit1-bonus/readme.md
@@ -0,0 +1,21 @@
+# Unit 1: Bonus 🎁
+- Our teammate @Chris Emezue published a new leaderboard where you can compare your trained agents in new environments 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard
+
+## Try new environments 🎮
+Now that you've played with LunarLander-v2, why not try these environments? 🔥
+- 🗻 MountainCar-v0 https://www.gymlibrary.ml/environments/classic_control/mountain_car/
+- 🏎️ CarRacing-v1 https://www.gymlibrary.ml/environments/box2d/car_racing/
+- 🥶 FrozenLake-v1 https://www.gymlibrary.ml/environments/toy_text/frozen_lake/
+
+## A piece of advice 🧐
+The first Unit is a very interesting one, but also **a very complex one, because it's where you learn the fundamentals.**
+
+It's normal if you **still feel confused by all these elements**. It was the same for me and for everyone who has studied RL.
+
+Take time to really grasp the material before continuing. It's important to master these elements and have a solid foundation before entering the fun part.
+
+We published additional readings in the syllabus if you want to go deeper 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit1/README.md
+
+The hands-on exercises for the first Unit are fun experiments, and as we go deeper, **you'll better understand how to choose the hyperparameters and which model to use. For now, have fun and try things out: you can't break the simulations 🚀**
+
+### Keep learning, stay awesome.
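
To get a feel for one of the environments listed above, here is a minimal sketch of running MountainCar-v0 with a random policy. This assumes the `gym` package is installed; the return signatures of `reset()` and `step()` changed between Gym versions, so the sketch handles both shapes defensively.

```python
import gym  # the Gym package used by the course (API details vary by version)

env = gym.make("MountainCar-v0")

# Older Gym versions return just the observation from reset();
# newer ones return (observation, info).
reset_out = env.reset()
obs = reset_out[0] if isinstance(reset_out, tuple) else reset_out

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy, just to explore the env
    step_out = env.step(action)
    if len(step_out) == 5:  # newer API: obs, reward, terminated, truncated, info
        obs, reward, terminated, truncated, _ = step_out
        done = terminated or truncated
    else:  # older API: obs, reward, done, info
        obs, reward, done, _ = step_out
    total_reward += reward

env.close()
print(f"Episode return with a random policy: {total_reward}")
```

MountainCar-v0 gives -1 reward per step, so a random policy that never reaches the flag ends with a strongly negative return; that baseline is what your trained agent should beat.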