# Conclusion
Congrats on finishing this unit! You’ve just trained your first ML-Agents agent and shared it on the Hub 🥳.
The best way to learn is to **practice and try stuff**. Why not try another environment? [ML-Agents has 18 different environments](https://github.com/Unity-Technologies/ml-agents/blob/develop/docs/Learning-Environment-Examples.md).
For instance:
- [Worm](https://singularite.itch.io/worm), where you teach a worm to crawl.
- [Walker](https://singularite.itch.io/walker), where you teach an agent to walk towards a goal.
Check the documentation to find out how to train them and to see the list of ML-Agents environments already integrated on the Hub: https://github.com/huggingface/ml-agents#getting-started
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit5/envs-unity.jpeg" alt="Example envs"/>
In the next unit, we're going to learn about multi-agent systems. You're going to train your first multi-agents to compete in Soccer and Snowball Fight against your classmates' agents.
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit7/snowballfight.gif" alt="Snowball fight"/>
Finally, we would love **to hear what you think of the course and how we can improve it**. If you have feedback, please 👉 [fill out this form](https://forms.gle/BzKXWzLAGZESGNaE9).
### Keep Learning, stay awesome 🤗