Merge pull request #523 from bpugnaire/fix-unit7-macsilicon

Fix issue #518
This commit is contained in:
Thomas Simonini
2024-05-27 12:01:20 +02:00
committed by GitHub


@@ -78,6 +78,12 @@ pip install -e ./ml-agents-envs
pip install -e ./ml-agents
```
Mac users on Apple Silicon may encounter trouble with the installation (e.g. the ONNX wheel build failing); if so, first try installing grpcio:
```bash
conda install grpcio
```
[This GitHub issue](https://github.com/Unity-Technologies/ml-agents/issues/6019) in the official ml-agents repo might also help you.
Finally, you need to install git-lfs: https://git-lfs.com/
Now that it's installed, we need to add the environment training executable. Based on your operating system, download one of them, unzip it, and place it in a new folder inside `ml-agents` that you call `training-envs-executables`
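The folder setup above can be sketched as follows (the zip name `SoccerTwos.zip` is an illustration; use whatever your downloaded archive is actually called):

```bash
# From the repo root: create the folder and extract the downloaded environment into it.
mkdir -p ml-agents/training-envs-executables
unzip SoccerTwos.zip -d ml-agents/training-envs-executables/   # archive name is an assumption
```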
@@ -221,10 +227,16 @@ Depending on your hardware, 5M timesteps (the recommended value, but you can als
Depending on the executable you use (Windows, Ubuntu, Mac), the training command will look like this (your executable path may differ, so don't hesitate to check it before running).
For Windows, it might look like this:
```bash
mlagents-learn ./config/poca/SoccerTwos.yaml --env=./training-envs-executables/SoccerTwos.exe --run-id="SoccerTwos" --no-graphics
```
For Mac, it might look like this:
```bash
mlagents-learn ./config/poca/SoccerTwos.yaml --env=./training-envs-executables/SoccerTwos/SoccerTwos.app --run-id="SoccerTwos" --no-graphics
```
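If training gets interrupted, ML-Agents can usually pick up where it left off by reusing the same run id with the `--resume` flag (Mac path shown as above; adjust for your OS):

```bash
mlagents-learn ./config/poca/SoccerTwos.yaml --env=./training-envs-executables/SoccerTwos/SoccerTwos.app --run-id="SoccerTwos" --no-graphics --resume
```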
The executable contains 8 copies of SoccerTwos.
⚠️ It's normal if you don't see a big increase in ELO score (or even see a decrease below 1200) before 2M timesteps, since your agents will spend most of their time moving randomly on the field before being able to score.
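For intuition, an ELO rating in self-play setups is nudged toward the actual game outcome at each match; the exact constants ML-Agents uses internally are not stated here, so the `K` below is an illustrative assumption:

```latex
% Expected score of agent A against agent B, and the rating update
% (S_A is the actual result: 1 win, 0.5 draw, 0 loss; K is a step size, e.g. 16)
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad
R_A' = R_A + K \, (S_A - E_A)
```

Early on, both agents act nearly randomly, so results are close to coin flips and ratings drift rather than climb, which is why the ELO curve only trends upward once the agents start scoring deliberately.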