Update train-our-robot.mdx

Adds the inference video to the train-our-robot section.
This commit is contained in:
Ivan-267
2024-06-19 18:40:26 +02:00
committed by GitHub
parent c720155cf6
commit 523331064a


@@ -45,10 +45,9 @@ en/unit13/onnx_inference_scene.jpg" alt="onnx inference scene"/>
**Press F6 to start the scene and let's see what the agent has learned!**
<Tip>
You can see a video of the trained agent in <a href="getting-started.mdx">getting started</a>.
</Tip>
Video of the trained agent:
<video src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit13/onnx_inference_test.mp4" type="video/mp4" controls autoplay loop muted />
It seems the agent is capable of collecting the key from both positions (left platform or right platform) and replicates the recorded behavior well. **If you're getting similar results, well done, you've successfully completed this tutorial!** 🏆👏
If your results are significantly different, note that the amount and quality of recorded demos can affect the results; adjusting the number of steps for the BC/GAIL stages, as well as modifying the hyper-parameters in the Python script, can potentially help. There's also some run-to-run variation, so results can differ slightly even with the same settings.
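The exact knobs vary by setup, but the adjustments mentioned above usually come down to a few values near the top of the training script. The sketch below is a minimal illustration of that idea — the names, values, and helper are assumptions for this example, not the tutorial's actual script; change the equivalents in your own script.

```python
# Illustrative hyper-parameter block for an imitation-learning run.
# All names and values here are hypothetical examples, not the
# tutorial's real script -- adjust the equivalents in your own code.
config = {
    "bc_epochs": 50,            # behavioral cloning pre-training epochs
    "gail_timesteps": 500_000,  # length of the GAIL stage
    "learning_rate": 3e-4,      # learner hyper-parameter
    "demo_file": "demos.json",  # path to your recorded demonstrations
}

def scale_stages(cfg: dict, factor: float) -> dict:
    """Return a copy of cfg with the BC/GAIL step counts scaled together,
    leaving the other settings untouched."""
    out = dict(cfg)
    out["bc_epochs"] = int(cfg["bc_epochs"] * factor)
    out["gail_timesteps"] = int(cfg["gail_timesteps"] * factor)
    return out

# Example: try a run with twice as much BC and GAIL training.
longer = scale_stages(config, 2.0)
print(longer["bc_epochs"], longer["gail_timesteps"])  # 100 1000000
```

If results are unstable, increasing the stage lengths together (rather than one at a time) is often the simplest first experiment, since the BC warm start and the GAIL stage interact.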