diff --git a/units/en/unit8/introduction-sf.mdx b/units/en/unit8/introduction-sf.mdx
index b49aafe..2fd45f4 100644
--- a/units/en/unit8/introduction-sf.mdx
+++ b/units/en/unit8/introduction-sf.mdx
@@ -2,8 +2,9 @@
 thumbnail
 
-In this second part of Unit 8, we'll get deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/), an asynchronous implementation of the PPO algorithm, to train our agent playing [vizdoom](https://vizdoom.cs.put.edu.pl/) (an open source version of Doom).
-During the notebook, you'll train your agent to play Health Gathering level, where our agent needs to collect health packs to avoid dying. And after that, you'll be able to train your agent to play more complex versions of the levels, such as Deathmatch.
+In this second part of Unit 8, we'll get deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/), an **asynchronous implementation of the PPO algorithm**, to train our agent to play [vizdoom](https://vizdoom.cs.put.edu.pl/) (an open source version of Doom).
+
+During the notebook, **you'll train your agent to play the Health Gathering level**, where the agent needs to collect health packs to avoid dying. After that, you'll be able to **train your agent to play more complex versions of the level, such as Deathmatch**.
 
 Environment