Fix number of training steps in readme to 10M

This commit is contained in:
YaYaB
2022-06-09 13:35:38 +02:00
parent 3b1dfa2504
commit 8a5643b7ce


@@ -3,8 +3,8 @@
 {
  "cell_type": "markdown",
  "metadata": {
-  "id": "view-in-github",
-  "colab_type": "text"
+  "colab_type": "text",
+  "id": "view-in-github"
 },
 "source": [
  "<a href=\"https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit3/unit3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
@@ -278,7 +278,7 @@
  "Here we see that:\n",
  "- We use the Atari Wrapper that preprocess the input (Frame reduction ,grayscale, stack 4 frames)\n",
  "- We use `CnnPolicy`, since we use Convolutional layers to process the frames\n",
- "- We train it for 1 million `n_timesteps` \n",
+ "- We train it for 10 million `n_timesteps` \n",
  "- Memory (Experience Replay) size is 100000"
 ]
},
@@ -712,12 +712,12 @@
 ],
 "metadata": {
  "colab": {
-  "authorship_tag": "ABX9TyPwiHKn+ccCskGi3ZMw9yH2",
   "collapsed_sections": [],
-  "include_colab_link": true,
   "name": "Copie de Unit 3: Deep Q-Learning with Space Invaders.ipynb",
   "private_outputs": true,
-  "provenance": [],
+  "authorship_tag": "ABX9TyPwiHKn+ccCskGi3ZMw9yH2",
+  "include_colab_link": true
+  "provenance": []
 },
 "kernelspec": {
  "display_name": "Python 3",
@@ -729,4 +729,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
+}
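For context, the cell changed by this commit describes a DQN setup whose experience-replay memory holds 100000 transitions. A minimal sketch of such a buffer in plain Python (illustrative only; the notebook itself uses Stable-Baselines3's built-in replay buffer, and the `ReplayBuffer` class name and its method names here are hypothetical):

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-size experience-replay memory.

    Capacity defaults to 100000, matching the size mentioned
    in the notebook's DQN configuration.
    """

    def __init__(self, capacity=100_000):
        # deque with maxlen silently evicts the oldest transition
        # once the buffer is full
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one transition tuple
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch; sampling without replacement
        # breaks temporal correlations between consecutive frames
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
```

The key design point is uniform random sampling: training on shuffled past transitions rather than consecutive frames is what stabilizes DQN updates.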