diff --git a/units/en/unitbonus5/train-our-robot.mdx b/units/en/unitbonus5/train-our-robot.mdx
index 1bd74e3..3492a0f 100644
--- a/units/en/unitbonus5/train-our-robot.mdx
+++ b/units/en/unitbonus5/train-our-robot.mdx
@@ -45,10 +45,9 @@ en/unit13/onnx_inference_scene.jpg" alt="onnx inference scene"/>
**Press F6 to start the scene and let’s see what the agent has learned!**
-
-You can see a video of the trained agent in getting started.
-
+Here is a video of the trained agent:
+
It seems the agent is capable of collecting the key from both positions (left platform or right platform) and replicates the recorded behavior well. **If you’re getting similar results, well done, you’ve successfully completed this tutorial!** 🏆👏
-If your results are different, note that the amount and quality of recorded demos can affect the results, and adjusting the number of steps for BC/GAIL stages as well as modifying the hyper-parameters in the Python script can potentially help. There’s also some run-to-run variation, so sometimes the results can be slightly different even with the same settings.
\ No newline at end of file
+If your results are significantly different, note that the amount and quality of the recorded demos can affect the outcome; adjusting the number of steps for the BC and GAIL stages or tuning the hyper-parameters in the Python script can help. There is also some run-to-run variation, so results can differ slightly even with the same settings.