From 0a4c6c6f2ce41a7a6450ee8d603b78bcc2f4033b Mon Sep 17 00:00:00 2001
From: Artagon
Date: Sat, 17 Dec 2022 14:30:19 +0100
Subject: [PATCH] fix redundant 'pair' and inconsistent case

---
 units/en/unit2/q-learning.mdx | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/units/en/unit2/q-learning.mdx b/units/en/unit2/q-learning.mdx
index 52e744a..e259363 100644
--- a/units/en/unit2/q-learning.mdx
+++ b/units/en/unit2/q-learning.mdx
@@ -7,7 +7,7 @@ Q-Learning is an **off-policy value-based method that uses a TD approach to tra
 - *Value-based method*: finds the optimal policy indirectly by training a value or action-value function that will tell us **the value of each state or each state-action pair.**
 - *Uses a TD approach:* **updates its action-value function at each step instead of at the end of the episode.**

-**Q-Learning is the algorithm we use to train our Q-Function**, an **action-value function** that determines the value of being at a particular state and taking a specific action at that state.
+**Q-Learning is the algorithm we use to train our Q-function**, an **action-value function** that determines the value of being at a particular state and taking a specific action at that state.
 [Image: Q-function]

@@ -18,16 +18,16 @@ The **Q comes from "the Quality" (the value) of that action at that state.**

 Let's recap the difference between value and reward:

-- The *value of a state*, or a *state-action pair* is the expected cumulative reward our agent gets if it starts at this state (or state action pair) and then acts accordingly to its policy.
+- The *value of a state*, or a *state-action pair*, is the expected cumulative reward our agent gets if it starts at this state (or state-action pair) and then acts according to its policy.
 - The *reward* is the **feedback I get from the environment** after performing an action at a state.

-Internally, our Q-function has **a Q-table, a table where each cell corresponds to a state-action value pair value.** Think of this Q-table as **the memory or cheat sheet of our Q-function.**
+Internally, our Q-function has **a Q-table, a table where each cell corresponds to a state-action pair value.** Think of this Q-table as **the memory or cheat sheet of our Q-function.**

 Let's go through an example of a maze.

 [Image: Maze example]

-The Q-Table is initialized. That's why all values are = 0. This table **contains, for each state, the four state-action values.**
+The Q-table is initialized. That's why all values are = 0. This table **contains, for each state, the four state-action values.**

 [Image: Maze example]

@@ -35,7 +35,7 @@ Here we see that the **state-action value of the initial state and going up is

 [Image: Maze example]

-Therefore, Q-function contains a Q-table **that has the value of each-state action pair.** And given a state and action, **our Q-Function will search inside its Q-table to output the value.**
+Therefore, the Q-function contains a Q-table **that has the value of each state-action pair.** And given a state and action, **our Q-function will search inside its Q-table to output the value.**
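To make the Q-table idea in the hunk above concrete, here is a minimal Python sketch (an illustration, not code from the course); the six-state maze size and the action ordering are assumptions.

```python
import numpy as np

# Minimal sketch of a Q-table for a small maze: one row per state, one column
# per action. Every cell holds a state-action pair value, and all start at 0.
n_states = 6                               # assumed maze size, not from the course
actions = ["up", "down", "left", "right"]  # the four actions per state mentioned above

q_table = np.zeros((n_states, len(actions)))

def q_value(state: int, action: str) -> float:
    """Given a state and an action, look the corresponding value up in the Q-table."""
    return float(q_table[state, actions.index(action)])

print(q_value(0, "up"))  # 0.0 until training starts updating the table
```

A lookup like `q_value(state, action)` is all that "searching inside its Q-table to output the value" amounts to.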
 [Image: Q-function]

@@ -43,22 +43,22 @@ Therefore, Q-function contains a Q-table **that has the value of each-state act

 If we recap, *Q-Learning* **is the RL algorithm that:**

-- Trains a *Q-Function* (an **action-value function**), which internally is a **Q-table that contains all the state-action pair values.**
-- Given a state and action, our Q-Function **will search into its Q-table the corresponding value.**
-- When the training is done, **we have an optimal Q-function, which means we have optimal Q-Table.**
+- Trains a *Q-function* (an **action-value function**), which internally is a **Q-table that contains all the state-action pair values.**
+- Given a state and action, our Q-function **will search its Q-table for the corresponding value.**
+- When the training is done, **we have an optimal Q-function, which means we have an optimal Q-table.**
 - And if we **have an optimal Q-function**, we **have an optimal policy** since we **know for each state what is the best action to take.**

 [Image: Link value policy]

-But, in the beginning, **our Q-Table is useless since it gives arbitrary values for each state-action pair** (most of the time, we initialize the Q-Table to 0). As the agent **explores the environment and we update the Q-Table, it will give us better and better approximations** to the optimal policy.
+But, in the beginning, **our Q-table is useless since it gives arbitrary values for each state-action pair** (most of the time, we initialize the Q-table to 0). As the agent **explores the environment and we update the Q-table, it will give us better and better approximations** to the optimal policy.
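As a rough illustration of the last recap point above, that an optimal Q-table directly yields the optimal policy because the best action in each state is simply the highest-valued entry of that state's row, here is a small sketch; the Q-values below are invented for the example, not taken from the course.

```python
import numpy as np

# Hypothetical 3-state Q-table (values invented purely for illustration).
# Rows are states; columns are the actions up, down, left, right.
q_table = np.array([
    [0.0, 0.5, 0.1, 0.2],
    [0.3, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.0, 0.7],
])
actions = ["up", "down", "left", "right"]

# Greedy policy: for each state, take the action with the highest Q-value.
policy = {state: actions[int(np.argmax(row))] for state, row in enumerate(q_table)}
print(policy)  # {0: 'down', 1: 'left', 2: 'right'}
```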
 [Image: Q-learning]

-We see here that with the training, our Q-Table is better since, thanks to it, we can know the value of each state-action pair.
+We see here that with the training, our Q-table is better since, thanks to it, we can know the value of each state-action pair.

-Now that we understand what Q-Learning, Q-Function, and Q-Table are, **let's dive deeper into the Q-Learning algorithm**.
+Now that we understand what Q-Learning, Q-function, and Q-table are, **let's dive deeper into the Q-Learning algorithm**.

 ## The Q-Learning algorithm [[q-learning-algo]]

@@ -66,12 +66,12 @@ This is the Q-Learning pseudocode; let's study each part and **see how it works

 [Image: Q-learning]

-### Step 1: We initialize the Q-Table [[step1]]
+### Step 1: We initialize the Q-table [[step1]]

 [Image: Q-learning]

-We need to initialize the Q-Table for each state-action pair. **Most of the time, we initialize with values of 0.**
+We need to initialize the Q-table for each state-action pair. **Most of the time, we initialize with values of 0.**

 ### Step 2: Choose action using epsilon greedy strategy [[step2]]

 [Image: Q-learning]

@@ -85,7 +85,7 @@ The idea is that we define epsilon ɛ = 1.0:

 - *With probability 1 — ɛ* : we do **exploitation** (aka our agent selects the action with the highest state-action pair value).
 - With probability ɛ: **we do exploration** (trying random action).

-At the beginning of the training, **the probability of doing exploration will be huge since ɛ is very high, so most of the time, we'll explore.** But as the training goes on, and consequently our **Q-Table gets better and better in its estimations, we progressively reduce the epsilon value** since we will need less and less exploration and more exploitation.
+At the beginning of the training, **the probability of doing exploration will be huge since ɛ is very high, so most of the time, we'll explore.** But as the training goes on, and consequently our **Q-table gets better and better in its estimations, we progressively reduce the epsilon value** since we will need less and less exploration and more exploitation.

 [Image: Q-learning]