From 8966351340fbc4686bf8ff515edc57cfb4351140 Mon Sep 17 00:00:00 2001
From: Rodrigo de Frutos
Date: Sun, 2 Jul 2023 15:57:56 +0200
Subject: [PATCH] FrozenLake-v1 has 16 states

Minor typo
---
 units/en/unit3/introduction.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/units/en/unit3/introduction.mdx b/units/en/unit3/introduction.mdx
index de75540..b892c75 100644
--- a/units/en/unit3/introduction.mdx
+++ b/units/en/unit3/introduction.mdx
@@ -6,7 +6,7 @@
 
 In the last unit, we learned our first reinforcement learning algorithm: Q-Learning, **implemented it from scratch**, and trained it in two environments, FrozenLake-v1 ☃️ and Taxi-v3 🚕.
 
-We got excellent results with this simple algorithm, but these environments were relatively simple because the **state space was discrete and small** (14 different states for FrozenLake-v1 and 500 for Taxi-v3). For comparison, the state space in Atari games can **contain \\(10^{9}\\) to \\(10^{11}\\) states**.
+We got excellent results with this simple algorithm, but these environments were relatively simple because the **state space was discrete and small** (16 different states for FrozenLake-v1 and 500 for Taxi-v3). For comparison, the state space in Atari games can **contain \\(10^{9}\\) to \\(10^{11}\\) states**.
 
 But as we'll see, producing and updating a **Q-table can become ineffective in large state space environments.**
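
The corrected count can be sanity-checked with a quick sketch (plain Python, variable names are illustrative, not from the patched file): the default FrozenLake-v1 map is a 4x4 grid, so the discrete state space has 16 cells, and a tabular Q-table needs one row per state and one column per action.

```python
# Hypothetical sketch: why FrozenLake-v1 has 16 states, not 14.
GRID_SIZE = 4                       # default FrozenLake-v1 map is 4x4
N_STATES = GRID_SIZE * GRID_SIZE    # 4 * 4 = 16 discrete states
N_ACTIONS = 4                       # left, down, right, up

# A tabular Q-table initialized to zeros: one row per state, one column per action.
q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

print(N_STATES)                       # 16
print(len(q_table), len(q_table[0]))  # 16 4
```

At Atari scale (roughly 10^9 to 10^11 states, per the text), a table like this becomes impractical, which motivates the move away from tabular Q-Learning.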