From 0375bafec90d218573064adce09d4e1daf3ccb89 Mon Sep 17 00:00:00 2001
From: Andrey Voroshilov
Date: Sun, 8 Jan 2023 17:49:07 -0800
Subject: [PATCH] Minor typo fix

---
 units/en/unit3/hands-on.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/units/en/unit3/hands-on.mdx b/units/en/unit3/hands-on.mdx
index b1dd03c..118d913 100644
--- a/units/en/unit3/hands-on.mdx
+++ b/units/en/unit3/hands-on.mdx
@@ -137,7 +137,7 @@ To train an agent with RL-Baselines3-Zoo, we just need to do two things:
 
 Here we see that:
 
-- We use the `Atari Wrapper` that does the pre-processing (Frame reduction, grayscale, stack four frames frames),
+- We use the `Atari Wrapper` that does the pre-processing (Frame reduction, grayscale, stack four frames),
 - We use `CnnPolicy`, since we use Convolutional layers to process the frames.
 - We train the model for 10 million `n_timesteps`.
 - Memory (Experience Replay) size is 100000, i.e. the number of experience steps you saved to train again your agent with.
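
For context, the Atari pre-processing the patched bullet describes (frame reduction, grayscale conversion, stacking four frames) can be sketched roughly as below. This is an illustrative toy version, not the actual wrapper: the class and function names here are made up, the resize is a crude nearest-neighbor downsample, and the real `AtariWrapper` in stable-baselines3 additionally handles frame skipping, reward clipping, and related details.

```python
import numpy as np
from collections import deque

def to_grayscale(frame):
    # frame: (H, W, 3) uint8 RGB -> (H, W) grayscale via luminance weights
    return (frame @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def downsample(frame, size=(84, 84)):
    # Crude nearest-neighbor resize for illustration only
    # (real wrappers use a proper image resize, e.g. cv2.resize).
    h, w = frame.shape
    rows = np.linspace(0, h - 1, size[0]).astype(int)
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    return frame[np.ix_(rows, cols)]

class FrameStack:
    """Keep the last n pre-processed frames so the CNN sees motion."""
    def __init__(self, n=4):
        self.frames = deque(maxlen=n)

    def push(self, rgb_frame):
        self.frames.append(downsample(to_grayscale(rgb_frame)))
        # At episode start, pad by repeating the first frame.
        while len(self.frames) < self.frames.maxlen:
            self.frames.append(self.frames[-1])
        return np.stack(self.frames, axis=0)  # shape: (4, 84, 84)

# Usage: feed one raw Atari-sized RGB frame, get a stacked observation.
stacker = FrameStack()
obs = stacker.push(np.zeros((210, 160, 3), dtype=np.uint8))
print(obs.shape)  # (4, 84, 84)
```

The stacked `(4, 84, 84)` observation is what makes `CnnPolicy` appropriate: convolutional layers process the four grayscale frames as input channels.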