Create introduction-sf

This commit is contained in:
Thomas Simonini
2023-02-17 15:39:25 +01:00
committed by GitHub
parent 83046bbf6c
commit c91e6dd546

# Introduction to PPO with Sample-Factory
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/thumbnail2.png" alt="thumbnail"/>
In this second part of Unit 8, we'll dive deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/), an asynchronous implementation of the PPO algorithm, to train our agent to play [ViZDoom](https://vizdoom.cs.put.edu.pl/) (an open source environment based on Doom).
In the notebook, you'll train your agent to play the Health Gathering level, where the agent must collect health packs to avoid dying. After that, you'll be able to train it on more complex levels, such as Deathmatch.
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/environments.png" alt="Environment"/>
Sound exciting? Let's get started! 🚀