Using the stable_baselines3 library, we tried to solve the problems proposed in the challenge. We used a Proximal Policy Optimization (PPO) model with the standard MLP policy, and we varied the number of training timesteps to try to achieve better performance.
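A minimal sketch of this setup follows. The actual challenge environment is not named above, so CartPole-v1 is used as a stand-in, and the timestep count is only illustrative of the value we tuned:

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in environment; replace with the challenge environment.
env = gym.make("CartPole-v1")

# PPO with the default multilayer-perceptron policy ("MlpPolicy").
model = PPO("MlpPolicy", env, verbose=1)

# total_timesteps is the knob we experimented with (value here is illustrative).
model.learn(total_timesteps=100_000)

# Roll out the trained policy for one episode.
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```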