From ac794eda48e1814e90912b7d7165a20cbe3a99a0 Mon Sep 17 00:00:00 2001
From: bpugnaire
Date: Fri, 3 May 2024 17:52:42 +0200
Subject: [PATCH] Fix issue #518

---
 units/en/unit7/hands-on.mdx | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/units/en/unit7/hands-on.mdx b/units/en/unit7/hands-on.mdx
index fc45a6b..0176abe 100644
--- a/units/en/unit7/hands-on.mdx
+++ b/units/en/unit7/hands-on.mdx
@@ -78,6 +78,12 @@ pip install -e ./ml-agents-envs
 pip install -e ./ml-agents
 ```
 
+Mac users on Apple Silicon may encounter issues with the installation (e.g. the ONNX wheel build failing). If so, first try installing grpcio:
+```bash
+conda install grpcio
+```
+[This GitHub issue](https://github.com/Unity-Technologies/ml-agents/issues/6019) in the official ml-agents repo might also help.
+
 Finally, you need to install git-lfs: https://git-lfs.com/
 
 Now that it’s installed, we need to add the environment training executable. Based on your operating system you need to download one of them, unzip it and place it in a new folder inside `ml-agents` that you call `training-envs-executables`
@@ -221,10 +227,16 @@ Depending on your hardware, 5M timesteps (the recommended value, but you can als
 
 Depending on the executable you use (windows, ubuntu, mac) the training command will look like this (your executable path can be different so don’t hesitate to check before running).
 
+For Windows, it might look like this:
 ```bash
 mlagents-learn ./config/poca/SoccerTwos.yaml --env=./training-envs-executables/SoccerTwos.exe --run-id="SoccerTwos" --no-graphics
 ```
 
+For Mac, it might look like this:
+```bash
+mlagents-learn ./config/poca/SoccerTwos.yaml --env=./training-envs-executables/SoccerTwos/SoccerTwos.app --run-id="SoccerTwos" --no-graphics
+```
+
 The executable contains 8 copies of SoccerTwos.
 
 ⚠️ It’s normal if you don’t see a big increase of ELO score (and even a decrease below 1200) before 2M timesteps, since your agents will spend most of their time moving randomly on the field before being able to goal.