From 523331064aef47436f51d79ee91c68567ec32752 Mon Sep 17 00:00:00 2001
From: Ivan-267 <61947090+Ivan-267@users.noreply.github.com>
Date: Wed, 19 Jun 2024 18:40:26 +0200
Subject: [PATCH] Update train-our-robot.mdx
Adds the inference video to the train-our-robot section.
---
units/en/unitbonus5/train-our-robot.mdx | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/units/en/unitbonus5/train-our-robot.mdx b/units/en/unitbonus5/train-our-robot.mdx
index 1bd74e3..3492a0f 100644
--- a/units/en/unitbonus5/train-our-robot.mdx
+++ b/units/en/unitbonus5/train-our-robot.mdx
@@ -45,10 +45,9 @@ en/unit13/onnx_inference_scene.jpg" alt="onnx inference scene"/>
**Press F6 to start the scene and let’s see what the agent has learned!**
-
-You can see a video of the trained agent in getting started.
-
+Here’s a video of the trained agent:
+
It seems the agent is capable of collecting the key from both positions (left platform or right platform) and replicates the recorded behavior well. **If you’re getting similar results, well done, you’ve successfully completed this tutorial!** 🏆👏
-If your results are different, note that the amount and quality of recorded demos can affect the results, and adjusting the number of steps for BC/GAIL stages as well as modifying the hyper-parameters in the Python script can potentially help. There’s also some run-to-run variation, so sometimes the results can be slightly different even with the same settings.
\ No newline at end of file
+If your results are significantly different, note that the amount and quality of the recorded demos can affect the outcome; adjusting the number of steps for the BC/GAIL stages, as well as tuning the hyperparameters in the Python script, can potentially help. There’s also some run-to-run variation, so results can differ slightly even with the same settings.