From 6684987502fc8fe525b86efa275a6778cf871edd Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Fri, 17 Feb 2023 15:46:49 +0100
Subject: [PATCH] Update introduction-sf.mdx

---
 units/en/unit8/introduction-sf.mdx | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/units/en/unit8/introduction-sf.mdx b/units/en/unit8/introduction-sf.mdx
index b49aafe..2fd45f4 100644
--- a/units/en/unit8/introduction-sf.mdx
+++ b/units/en/unit8/introduction-sf.mdx
@@ -2,8 +2,9 @@
 
 thumbnail
 
-In this second part of Unit 8, we'll get deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/), an asynchronous implementation of the PPO algorithm, to train our agent playing [vizdoom](https://vizdoom.cs.put.edu.pl/) (an open source version of Doom).
-During the notebook, you'll train your agent to play Health Gathering level, where our agent needs to collect health packs to avoid dying. And after that, you'll be able to train your agent to play more complex versions of the levels, such as Deathmatch.
+In this second part of Unit 8, we'll get deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/), an **asynchronous implementation of the PPO algorithm**, to train our agent playing [vizdoom](https://vizdoom.cs.put.edu.pl/) (an open source version of Doom).
+
+During the notebook, **you'll train your agent to play Health Gathering level**, where our agent needs to collect health packs to avoid dying. And after that, you'll be able to **train your agent to play more complex versions of the levels, such as Deathmatch**.
 
 Environment
 