From 7220220848ffb2015900c7af804ab4ab4834e388 Mon Sep 17 00:00:00 2001
From: Thomas Simonini
Date: Tue, 21 Feb 2023 07:15:41 +0100
Subject: [PATCH] Add Ed author

---
 units/en/unit8/introduction-sf.mdx | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/units/en/unit8/introduction-sf.mdx b/units/en/unit8/introduction-sf.mdx
index 2fd45f4..486b416 100644
--- a/units/en/unit8/introduction-sf.mdx
+++ b/units/en/unit8/introduction-sf.mdx
@@ -4,8 +4,10 @@
 
 In this second part of Unit 8, we'll get deeper into PPO optimization by using [Sample-Factory](https://samplefactory.dev/), an **asynchronous implementation of the PPO algorithm**, to train our agent playing [vizdoom](https://vizdoom.cs.put.edu.pl/) (an open source version of Doom).
 
-During the notebook, **you'll train your agent to play Health Gathering level**, where our agent needs to collect health packs to avoid dying. And after that, you'll be able to **train your agent to play more complex versions of the levels, such as Deathmatch**.
+During the notebook, **you'll train your agent to play Health Gathering level**, where our agent must collect health packs to avoid dying. And after that, you can **train your agent to play more complex versions of the levels, such as Deathmatch**.
 
 Environment
 
 Sounds exciting? Let's get started! 🚀
+
+The hands-on is made by [Edward Beeching](https://twitter.com/edwardbeeching), a Machine Learning Research Scientist at Hugging Face. He worked on Godot Reinforcement Learning Agents, an open-source interface for developing environments and agents in the Godot Game Engine.