diff --git a/units/en/_toctree.yml b/units/en/_toctree.yml
index 2c1e2fd..ac63d1e 100644
--- a/units/en/_toctree.yml
+++ b/units/en/_toctree.yml
@@ -202,6 +202,28 @@
     title: PPO with Sample Factory and Doom
  - local: unit8/conclusion-sf
    title: Conclusion
+- title: Bonus Unit 3. Advanced Topics in Reinforcement Learning
+  sections:
+  - local: unitbonus3/introduction
+    title: Introduction
+  - local: unitbonus3/model-based
+    title: Model-Based Reinforcement Learning
+  - local: unitbonus3/offline-online
+    title: Offline vs. Online Reinforcement Learning
+  - local: unitbonus3/rlhf
+    title: Reinforcement Learning from Human Feedback
+  - local: unitbonus3/decision-transformers
+    title: Decision Transformers and Offline RL
+  - local: unitbonus3/language-models
+    title: Language models in RL
+  - local: unitbonus3/curriculum-learning
+    title: (Automatic) Curriculum Learning for RL
+  - local: unitbonus3/envs-to-try
+    title: Interesting environments to try
+  - local: unitbonus3/godotrl
+    title: An Introduction to Godot RL
+  - local: unitbonus3/rl-documentation
+    title: Brief introduction to RL documentation
 - title: What's next? New Units Publishing Schedule
   sections:
   - local: communication/publishing-schedule
diff --git a/units/en/unitbonus3/curriculum-learning.mdx b/units/en/unitbonus3/curriculum-learning.mdx
new file mode 100644
index 0000000..dbe8e64
--- /dev/null
+++ b/units/en/unitbonus3/curriculum-learning.mdx
@@ -0,0 +1,54 @@
# (Automatic) Curriculum Learning for RL

While most of the RL methods seen in this course work well in practice, there are cases where using them alone fails. This can happen, for instance, when:

- the task to learn is hard and requires an **incremental acquisition of skills** (for instance, when one wants a bipedal agent to learn to traverse hard obstacles, it must first learn to stand, then walk, then perhaps jump…)
- there are variations in the environment (that affect the difficulty) and one wants their agent to be **robust** to them
*Figure: a bipedal agent and movable creepers in TeachMyAgent.*
In such cases, it seems necessary to propose different tasks to our RL agent and organize them so that the agent can progressively acquire skills. This approach is called **Curriculum Learning** and usually implies a hand-designed curriculum (or set of tasks organized in a specific order). In practice, one can for instance control the generation of the environment or the initial states, or use Self-Play and control the level of the opponents proposed to the RL agent.

As designing such a curriculum is not always trivial, the field of **Automatic Curriculum Learning (ACL) proposes to design approaches that learn to create such an organization of tasks in order to maximize the RL agent's performance**. Portelas et al. proposed to define ACL as:

> … a family of mechanisms that automatically adapt the distribution of training data by learning to adjust the selection of learning situations to the capabilities of RL agents.

As an example, OpenAI used **Domain Randomization** (applying random variations to the environment) to make a robot hand solve Rubik's Cubes.
*Figure: Domain Randomization. OpenAI - Solving Rubik's Cube with a Robot Hand.*
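To make Domain Randomization concrete, here is a minimal sketch of it as a Gymnasium wrapper. The simulator handles used to perturb the dynamics (`model.geom_friction`, `model.opt.gravity`) follow the MuJoCo-based environments; treat them as assumptions and adapt them to your own simulator:

```python
import random

import gymnasium as gym


class DomainRandomizationWrapper(gym.Wrapper):
    """On every reset, perturb the simulator's dynamics so the policy
    cannot overfit to a single environment configuration."""

    def __init__(self, env, friction_scale=(0.5, 1.5), gravity_range=(-11.0, -9.0)):
        super().__init__(env)
        self.friction_scale = friction_scale
        self.gravity_range = gravity_range
        # Keep the nominal friction values so perturbations do not compound
        self._base_friction = env.unwrapped.model.geom_friction.copy()

    def reset(self, **kwargs):
        model = self.env.unwrapped.model  # MuJoCo model handle (assumed)
        model.geom_friction[:] = self._base_friction * random.uniform(*self.friction_scale)
        model.opt.gravity[2] = random.uniform(*self.gravity_range)
        return self.env.reset(**kwargs)


# Train on randomized dynamics, evaluate on the default ones
env = DomainRandomizationWrapper(gym.make("HalfCheetah-v4"))
obs, info = env.reset()
```

Randomizing at every reset forces the policy to become robust to a whole distribution of dynamics instead of a single simulated world.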
Finally, you can play with the robustness of agents trained in the TeachMyAgent benchmark by controlling environment variations or even drawing the terrain 👇
*Interactive demo: https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo*
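To illustrate the idea behind ACL, here is a toy sketch (our own simplification, not a published algorithm) that samples discrete tasks in proportion to the agent's absolute learning progress (ALP) on each of them:

```python
from collections import deque

import numpy as np


class LearningProgressSampler:
    """Toy ACL sketch: prefer tasks on which the agent's returns are
    currently changing the most, i.e. where it is learning."""

    def __init__(self, n_tasks, window=20, explore=0.1):
        self.returns = [deque(maxlen=window) for _ in range(n_tasks)]
        self.explore = explore  # probability of sampling a uniformly random task

    def update(self, task, episode_return):
        self.returns[task].append(episode_return)

    def sample_task(self):
        alp = np.array([self._alp(r) for r in self.returns])
        if np.random.rand() < self.explore or alp.sum() == 0.0:
            return np.random.randint(len(self.returns))
        return int(np.random.choice(len(self.returns), p=alp / alp.sum()))

    def _alp(self, history):
        # |mean of recent returns - mean of older returns|: high when the
        # agent is improving (or regressing) on this task
        if len(history) < 4:
            return 0.0
        returns = list(history)
        half = len(returns) // 2
        return abs(np.mean(returns[half:]) - np.mean(returns[:half]))
```

At each episode, you would call `sample_task()` to pick the next environment configuration and `update(task, episode_return)` once the episode ends. Methods such as ALP-GMM by Portelas et al. extend this idea to continuous task spaces.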
## Further reading

For more information, we recommend you check out the following resources:

### Overview of the field

- [Automatic Curriculum Learning For Deep RL: A Short Survey](https://arxiv.org/pdf/2003.04664.pdf)
- [Curriculum for Reinforcement Learning](https://lilianweng.github.io/posts/2020-01-29-curriculum-rl/)

### Recent methods

- [Evolving Curricula with Regret-Based Environment Design](https://arxiv.org/abs/2203.01302)
- [Curriculum Reinforcement Learning via Constrained Optimal Transport](https://proceedings.mlr.press/v162/klink22a.html)
- [Prioritized Level Replay](https://arxiv.org/abs/2010.03934)

## Author

This section was written by Clément Romac

diff --git a/units/en/unitbonus3/decision-transformers.mdx b/units/en/unitbonus3/decision-transformers.mdx
new file mode 100644
index 0000000..737564e
--- /dev/null
+++ b/units/en/unitbonus3/decision-transformers.mdx
@@ -0,0 +1,31 @@
# Decision Transformers

The Decision Transformer model was introduced in ["Decision Transformer: Reinforcement Learning via Sequence Modeling" by Chen L. et al.](https://arxiv.org/abs/2106.01345) It abstracts Reinforcement Learning as a conditional sequence modeling problem.

The main idea is that instead of training a policy using RL methods, such as fitting a value function that tells us what action to take to maximize the return (cumulative reward), **we use a sequence modeling algorithm (a Transformer) that, given a desired return, past states, and actions, generates the future actions needed to achieve this desired return**. It is an autoregressive model conditioned on the desired return, past states, and actions.

This is a complete shift in the Reinforcement Learning paradigm, since we use generative trajectory modeling (modeling the joint distribution of the sequence of states, actions, and rewards) to replace conventional RL algorithms. In Decision Transformers, we don't maximize the return but rather generate a series of future actions that achieve the desired return.

The 🤗 Transformers team integrated the Decision Transformer, an Offline Reinforcement Learning method, into the library, as well as into the Hugging Face Hub.

## Learn about Decision Transformers

To learn more about Decision Transformers, you should read the blog post we wrote about them: [Introducing Decision Transformers on Hugging Face](https://huggingface.co/blog/decision-transformers)

## Train your first Decision Transformers

Now that you understand how Decision Transformers work, thanks to [Introducing Decision Transformers on Hugging Face](https://huggingface.co/blog/decision-transformers), you're ready to train your first Offline Decision Transformer model from scratch to make a half-cheetah run.
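To give you a feel for the API before the tutorial, here is a rough sketch of querying a pretrained Decision Transformer for an action with 🤗 Transformers. The checkpoint name and the HalfCheetah dimensions come from the blog posts above; the dummy tensors and the target return are illustrative assumptions (in practice, states are normalized with the dataset statistics):

```python
import torch
from transformers import DecisionTransformerModel

# Checkpoint assumed to be available on the Hub (see the blog post)
model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-halfcheetah-expert"
)
model.eval()

state_dim, act_dim = 17, 6  # HalfCheetah observation and action sizes

# A context of a single timestep: the current state, a placeholder action,
# and, crucially, the return we *want* the trajectory to achieve
states = torch.randn(1, 1, state_dim)        # would come from the environment in practice
actions = torch.zeros(1, 1, act_dim)
rewards = torch.zeros(1, 1, 1)
returns_to_go = torch.tensor([[[12000.0]]])  # condition on a high desired return
timesteps = torch.zeros(1, 1, dtype=torch.long)
attention_mask = torch.ones(1, 1)

with torch.no_grad():
    outputs = model(
        states=states,
        actions=actions,
        rewards=rewards,
        returns_to_go=returns_to_go,
        timesteps=timesteps,
        attention_mask=attention_mask,
    )

# The model autoregressively predicts the next action to execute
next_action = outputs.action_preds[0, -1]
```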
Start the tutorial here 👉 https://huggingface.co/blog/train-decision-transformers

## Further reading

For more information, we recommend you check out the following resources:

- [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345)
- [Online Decision Transformer](https://arxiv.org/abs/2202.05607)

## Author

This section was written by Edward Beeching

diff --git a/units/en/unitbonus3/envs-to-try.mdx b/units/en/unitbonus3/envs-to-try.mdx
new file mode 100644
index 0000000..404e038
--- /dev/null
+++ b/units/en/unitbonus3/envs-to-try.mdx
@@ -0,0 +1,49 @@
# Interesting Environments to try

Here we provide a list of interesting environments you can try training your agents on:

## MineRL

*Figure: MineRL.*

MineRL is a Python library that provides a Gym interface for interacting with the video game Minecraft, accompanied by datasets of human gameplay.
Every year there are challenges built on this library; check out the [website](https://minerl.io/).

To start using this environment, check these resources:
- [What is MineRL?](https://www.youtube.com/watch?v=z6PTrGifupU)
- [First steps in MineRL](https://www.youtube.com/watch?v=8yIrWcyWGek)
- [MineRL documentation and tutorials](https://minerl.readthedocs.io/en/latest/)

## DonkeyCar Simulator

*Figure: Donkey Car.*

Donkey is a self-driving car platform for hobby remote-control cars.
This simulator version is built on the Unity game platform. It uses Unity's internal physics and graphics and connects to a Donkey Python process, which uses our trained model to control the simulated Donkey (car).

To start using this environment, check these resources:
- [DonkeyCar Simulator documentation](https://docs.donkeycar.com/guide/deep_learning/simulator/)
- [Learn to Drive Smoothly (Antonin Raffin's tutorial) Part 1](https://www.youtube.com/watch?v=ngK33h00iBE)
- [Learn to Drive Smoothly (Antonin Raffin's tutorial) Part 2](https://www.youtube.com/watch?v=DUqssFvcSOY)
- [Learn to Drive Smoothly (Antonin Raffin's tutorial) Part 3](https://www.youtube.com/watch?v=v8j2bpcE4Rg)
- Pretrained agents:
  - https://huggingface.co/araffin/tqc-donkey-mountain-track-v0
  - https://huggingface.co/araffin/tqc-donkey-avc-sparkfun-v0
  - https://huggingface.co/araffin/tqc-donkey-minimonaco-track-v0

## Starcraft II

*Figure: Alphastar.*

Starcraft II is a famous *real-time strategy game*. DeepMind has used this game for their Deep Reinforcement Learning research with [Alphastar](https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii).

To start using this environment, check these resources:
- [Starcraft gym](http://starcraftgym.com/)
- [A. I. Learns to Play Starcraft 2 (Reinforcement Learning) tutorial](https://www.youtube.com/watch?v=q59wap1ELQ4)

## Author

This section was written by Thomas Simonini

diff --git a/units/en/unitbonus3/godotrl.mdx b/units/en/unitbonus3/godotrl.mdx
new file mode 100644
index 0000000..8e993a3
--- /dev/null
+++ b/units/en/unitbonus3/godotrl.mdx
@@ -0,0 +1,208 @@
# Godot RL Agents

[Godot RL Agents](https://github.com/edbeeching/godot_rl_agents) is an Open Source package that gives video game creators, AI researchers, and hobbyists the opportunity **to learn complex behaviors for their Non Player Characters or agents**.
The library provides:

- An interface between games created in the [Godot Engine](https://godotengine.org/) and Machine Learning algorithms running in Python
- Wrappers for four well-known RL frameworks: [StableBaselines3](https://stable-baselines3.readthedocs.io/en/master/), [CleanRL](https://docs.cleanrl.dev/), [Sample Factory](https://www.samplefactory.dev/) and [Ray RLLib](https://docs.ray.io/en/latest/rllib-algorithms.html)
- Support for memory-based agents, with LSTM- or attention-based interfaces
- Support for *2D and 3D games*
- A suite of *AI sensors* to augment your agent's capacity to observe the game world
- Godot and Godot RL Agents are **completely free and open source under a very permissive MIT license**. No strings attached, no royalties, nothing.

You can find out more about Godot RL Agents on their [GitHub page](https://github.com/edbeeching/godot_rl_agents) or their AAAI-2022 Workshop [paper](https://arxiv.org/abs/2112.03636). The library's creator, [Ed Beeching](https://edbeeching.github.io/), is a Research Scientist here at Hugging Face.

## Create a custom RL environment with Godot RL Agents

In this section, you will **learn how to create a custom environment in the Godot Game Engine** and then implement an AI controller that learns to play with Deep Reinforcement Learning.

The example game we create today is simple, **but it shows off many of the features of the Godot Engine and the Godot RL Agents library**. You can then dive into the examples for more complex environments and behaviors.

The environment we will be building today is called Ring Pong: it is the game of Pong, but the pitch is a ring and the paddle moves around the ring. The **objective is to keep the ball bouncing inside the ring**.

*Figure: Ring Pong.*

### Installing the Godot Game Engine

The [Godot game engine](https://godotengine.org/) is an open source tool for the **creation of video games, tools and user interfaces**.

Godot Engine is a feature-packed, cross-platform game engine designed to create 2D and 3D games from a unified interface. It provides a comprehensive set of common tools, so users **can focus on making games without having to reinvent the wheel**. Games can be exported in one click to a number of platforms, including the major desktop platforms (Linux, macOS, Windows) as well as mobile (Android, iOS) and web-based (HTML5) platforms.

While we will guide you through the steps to implement your agent, you may wish to learn more about the Godot Game Engine. Their [documentation](https://docs.godotengine.org/en/latest/index.html) is thorough, and there are many tutorials on YouTube; we would also recommend [GDQuest](https://www.gdquest.com/), [KidsCanCode](https://kidscancode.org/godot_recipes/4.x/) and [Bramwell](https://www.youtube.com/channel/UCczi7Aq_dTKrQPF5ZV5J3gg) as sources of information.

In order to create games in Godot, **you must first download the editor**. The latest version of Godot RL Agents was updated to use the Godot 4 beta, as we are expecting Godot 4 to be released in the next few months.
At the time of writing, the latest beta version was beta 14, which can be downloaded at the following links:

- [Windows](https://downloads.tuxfamily.org/godotengine/4.0/beta14/Godot_v4.0-beta14_win64.exe.zip)
- [Mac](https://downloads.tuxfamily.org/godotengine/4.0/beta14/Godot_v4.0-beta14_macos.universal.zip)
- [Linux](https://downloads.tuxfamily.org/godotengine/4.0/beta14/Godot_v4.0-beta14_linux.x86_64.zip)

### Loading the starter project

We provide two versions of the codebase:
- [A starter project, to download and follow along for this tutorial](https://drive.google.com/file/d/1C7xd3TibJHlxFEJPBgBLpksgxrFZ3D8e/view?usp=share_link)
- [A final version of the project, for comparison and debugging](https://drive.google.com/file/d/1k-b2Bu7uIA6poApbouX4c3sq98xqogpZ/view?usp=share_link)

To load the project, in the Godot Project Manager click **Import**, navigate to where the files are located and load the **project.godot** file.

If you press F5 or play in the editor, you should be able to play the game in human mode. There are several instances of the game running; this is because we want to speed up training our AI agent with many parallel environments.

### Installing the Godot RL Agents plugin

The Godot RL Agents plugin can be installed from the GitHub repo or with the Godot Asset Lib in the editor.

First click on the AssetLib and search for "rl":

*Screenshot: the AssetLib search results.*

Then click on Godot RL Agents, click Download and unselect the LICENSE and README.md files. Then click Install.

*Screenshot: the plugin download dialog.*

The Godot RL Agents plugin is now downloaded to your machine. Now click on Project → Project Settings and enable the addon:

*Screenshot: enabling the addon in Project Settings.*

### Adding the AI controller

We now want to add an AI controller to our game. Open the player.tscn scene; on the left you should see a hierarchy of nodes that looks like this:

*Screenshot: the Player scene tree.*

Right click the **Player** node and click **Add Child Node**. There are many nodes listed here; search for AIController3D and create it.

*Screenshot: creating the AIController3D node.*

The AI Controller node should have been added to the scene tree; next to it is a scroll icon. Click on it to open the script that is attached to the AIController. The Godot game engine uses a scripting language called GDScript, which is syntactically similar to Python. The script contains methods that need to be implemented in order to get our AI controller working.

```python
#-- Methods that need implementing using the "extend script" option in Godot --#
func get_obs() -> Dictionary:
    assert(false, "the get_obs method is not implemented when extending from ai_controller")
    return {"obs":[]}

func get_reward() -> float:
    assert(false, "the get_reward method is not implemented when extending from ai_controller")
    return 0.0

func get_action_space() -> Dictionary:
    assert(false, "the get_action_space method is not implemented when extending from ai_controller")
    return {
        "example_actions_continuous" : {
            "size": 2,
            "action_type": "continuous"
        },
        "example_actions_discrete" : {
            "size": 2,
            "action_type": "discrete"
        },
    }

func set_action(action) -> void:
    assert(false, "the set_action method is not implemented when extending from ai_controller")
# -----------------------------------------------------------------------------#
```

In order to implement these methods, we will need to create a class that inherits from AIController3D. This is easy to do in Godot, and is called "extending" a class.
Right click the AIController3D node, click "Extend Script" and call the new script `controller.gd`. You should now have an almost empty script file that looks like this:

```python
extends AIController3D

# Called when the node enters the scene tree for the first time.
func _ready():
    pass # Replace with function body.

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta):
    pass
```

We will now implement the 4 missing methods. Delete this code and replace it with the following:

```python
extends AIController3D

# Stores the action sampled for the agent's policy, running in python
var move_action : float = 0.0

func get_obs() -> Dictionary:
    # get the ball's position and velocity in the paddle's frame of reference
    var ball_pos = to_local(_player.ball.global_position)
    var ball_vel = to_local(_player.ball.linear_velocity)
    var obs = [ball_pos.x, ball_pos.z, ball_vel.x/10.0, ball_vel.z/10.0]

    return {"obs":obs}

func get_reward() -> float:
    return reward

func get_action_space() -> Dictionary:
    return {
        "move_action" : {
            "size": 1,
            "action_type": "continuous"
        },
    }

func set_action(action) -> void:
    move_action = clamp(action["move_action"][0], -1.0, 1.0)
```

We have now defined the agent's observation, which is the position and velocity of the ball in the paddle's local coordinate space. We have also defined the action space of the agent, which is a single continuous value ranging from -1 to +1.

The next step is to update the Player's script to use the actions from the AIController. Edit the Player's script by clicking on the scroll icon next to the player node, and update the code in `Player.gd` to the following:

```python
extends Node3D

@export var rotation_speed = 3.0
@onready var ball = get_node("../Ball")
@onready var ai_controller = $AIController3D

func _ready():
    ai_controller.init(self)

func game_over():
    ai_controller.done = true
    ai_controller.needs_reset = true

func _physics_process(delta):
    if ai_controller.needs_reset:
        ai_controller.reset()
        ball.reset()
        return

    var movement : float
    if ai_controller.heuristic == "human":
        movement = Input.get_axis("rotate_anticlockwise", "rotate_clockwise")
    else:
        movement = ai_controller.move_action
    rotate_y(movement*delta*rotation_speed)

func _on_area_3d_body_entered(body):
    ai_controller.reward += 1.0
```

We now need to synchronize between the game running in Godot and the neural network being trained in Python. Godot RL Agents provides a node that does just that. Open the train.tscn scene, right click on the root node and click "Add child node". Then search for "sync" and add a Godot RL Agents Sync node. This node handles the communication between Python and Godot over TCP.

You can run training live in the editor by first launching the Python training with `python examples/clean_rl_example.py --env-id=debug`.

In this simple example, a reasonable policy is learned in several minutes. You may wish to speed up training. Click on the Sync node in the train scene and you will see there is a "Speed Up" property exposed in the editor:

*Screenshot: the Sync node's "Speed Up" property.*

Try setting this property up to 8 to speed up training. This can be a great benefit on more complex environments, like the multi-player FPS we will learn about in the next chapter.
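If you prefer to drive training from your own script rather than the provided examples, here is a hedged sketch using the Stable-Baselines3 wrapper. The module path and the `StableBaselinesGodotEnv` constructor shown here reflect the godot_rl repo at the time of writing, so double-check them against the examples folder:

```python
from godot_rl.wrappers.stable_baselines_wrapper import StableBaselinesGodotEnv
from stable_baselines3 import PPO

# With no env_path, the wrapper waits for a connection from the Godot editor:
# run this script, then press Play on the train scene.
env = StableBaselinesGodotEnv(env_path=None)

# Observations arrive as a dict ({"obs": [...]}), hence the MultiInputPolicy
model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ring_pong_ppo")
```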
### There's more!

We have only scratched the surface of what can be achieved with Godot RL Agents; the library also includes custom sensors and cameras to enrich the information available to the agent. Take a look at the [examples](https://github.com/edbeeching/godot_rl_agents_examples) to find out more!

## Author

This section was written by Edward Beeching

diff --git a/units/en/unitbonus3/introduction.mdx b/units/en/unitbonus3/introduction.mdx
new file mode 100644
index 0000000..50b4bd0
--- /dev/null
+++ b/units/en/unitbonus3/introduction.mdx
@@ -0,0 +1,11 @@
# Introduction

*Thumbnail: Bonus Unit 3.*

Congratulations on finishing this course! **You now have a solid background in Deep Reinforcement Learning.**
But this course was just the beginning of your Deep Reinforcement Learning journey: there are so many subfields left to discover. In this optional unit, we **give you resources to explore multiple concepts and research topics in Reinforcement Learning**.

Contrary to other units, this unit is a collective work of multiple people from Hugging Face. We mention the author of each section.

Sounds fun? Let's get started 🔥.

diff --git a/units/en/unitbonus3/language-models.mdx b/units/en/unitbonus3/language-models.mdx
new file mode 100644
index 0000000..0fffc19
--- /dev/null
+++ b/units/en/unitbonus3/language-models.mdx
@@ -0,0 +1,45 @@
# Language models in RL

## LMs encode useful knowledge for agents

**Language models** (LMs) can exhibit impressive abilities when manipulating text, such as question answering or even step-by-step reasoning. Additionally, their training on massive text corpora allows them to **encode various kinds of knowledge, including abstract knowledge about the physical rules of our world** (for instance, what it is possible to do with an object, or what happens when one rotates an object…).

A natural question recently studied is whether such knowledge could benefit agents such as robots when trying to solve everyday tasks. And while these works showed interesting results, the proposed agents lacked any learning method. **This limitation prevents these agents from adapting to the environment (e.g. fixing wrong knowledge) or learning new skills.**
*Figure. Source: Towards Helpful Robots: Grounding Language in Robotic Affordances.*
## LMs and RL

There is therefore a potential synergy between LMs, which can bring knowledge about the world, and RL, which can align and correct this knowledge by interacting with an environment. It is especially interesting from an RL point of view, as the RL field mostly relies on the **tabula rasa** setup, where everything is learned from scratch by the agent, leading to:

1) Sample inefficiency

2) Behaviors that look unexpected to human eyes

As a first attempt, the paper ["Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning"](https://arxiv.org/abs/2302.02662v1) tackled the problem of **adapting or aligning an LM to a textual environment using PPO**. They showed that the knowledge encoded in the LM leads to fast adaptation to the environment (opening an avenue for sample-efficient RL agents), but also that such knowledge allows the LM to generalize better to new tasks once aligned.
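The training loop behind this kind of alignment looks roughly like RLHF-style PPO fine-tuning of an LM. Below is a minimal sketch using the `trl` library; the API shown (`PPOConfig`, `PPOTrainer`, `AutoModelForCausalLMWithValueHead`) is `trl`'s at the time of writing and may change, and the environment reward is faked with a constant, so treat this as a schematic rather than the method of the paper:

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", batch_size=1, mini_batch_size=1)
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, tokenizer=tokenizer)

# The "observation" is a textual description of the environment state
query = tokenizer(
    "You see a key. To open the door, you should", return_tensors="pt"
).input_ids[0]

# The LM proposes an action as text
response = ppo_trainer.generate(
    query, max_new_tokens=8, do_sample=True, return_prompt=False
)[0]

# In a real setup, the reward would come from executing the action in the
# environment; here it is a placeholder scalar
reward = torch.tensor(1.0)
stats = ppo_trainer.step([query], [response], [reward])
```

Repeating this loop over many environment interactions is what lets RL correct the LM's wrong knowledge while exploiting its priors for faster learning.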