From c91e6dd546d4a4ae67c5f66fca6650cfff5c0f9e Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Fri, 17 Feb 2023 15:39:25 +0100
Subject: [PATCH] Create introduction-sf

---
 units/en/unit8/introduction-sf | 10 ++++++++++
 1 file changed, 10 insertions(+)
 create mode 100644 units/en/unit8/introduction-sf

diff --git a/units/en/unit8/introduction-sf b/units/en/unit8/introduction-sf
new file mode 100644
index 0000000..b49aafe
--- /dev/null
+++ b/units/en/unit8/introduction-sf
@@ -0,0 +1,10 @@
+# Introduction to PPO with Sample-Factory
+
+thumbnail
+
+In this second part of Unit 8, we'll dive deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/), an asynchronous implementation of the PPO algorithm, to train our agent to play [ViZDoom](https://vizdoom.cs.put.edu.pl/) (an open-source version of Doom).
+In the notebook, you'll train your agent to play the Health Gathering level, where the agent must collect health packs to avoid dying. After that, you'll be able to train your agent on more complex levels, such as Deathmatch.
+
+Environment
+
+Sounds exciting? Let's get started! 🚀