Multi-agent reinforcement learning (competition) using MADDPG (Multi-Agent Deep Deterministic Policy Gradient)
- Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments: https://arxiv.org/pdf/1706.02275.pdf
For this project, I trained two competitive agents inspired by the MADDPG algorithm; the agents compete in the Unity Tennis environment.
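The core idea of MADDPG is centralized training with decentralized execution: each agent's actor acts on its own local observation only, while each agent's critic is trained on the joint observations and actions of all agents. Below is a minimal sketch of the critic target for one agent; the names (`target_actors`, `target_critics`) and the batch layout are illustrative assumptions, not this repository's exact code.

```python
import torch

# Illustrative batch layout for the two-agent Tennis setup (assumption):
#   next_obs_all: (batch, 2, 24)  joint observations
#   rewards / dones: (batch, 2)

def critic_target(agent_i, batch, target_actors, target_critics, gamma=0.99):
    _, _, rewards, next_obs_all, dones = batch
    # Decentralized execution: each target actor only sees its own observation.
    next_actions = torch.cat(
        [target_actors[j](next_obs_all[:, j]) for j in range(2)], dim=-1)
    next_obs_flat = next_obs_all.reshape(next_obs_all.size(0), -1)
    # Centralized training: the target critic sees all observations and actions.
    q_next = target_critics[agent_i](next_obs_flat, next_actions)
    y = (rewards[:, agent_i:agent_i + 1]
         + gamma * (1 - dones[:, agent_i:agent_i + 1]) * q_next)
    return y.detach()
```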
In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
Each agent perceives a vector of 24 variables corresponding to the position and velocity of the ball and its racket, and receives its own local observation vector. Two continuous actions are available to each agent, corresponding to movement toward (or away from) the net, and jumping.
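To make these dimensions concrete, here is a sketch of loading the environment and taking one random step with the `unityagents` API used by the DRLND projects (the file name is a placeholder for the binary you downloaded):

```python
from unityagents import UnityEnvironment
import numpy as np

env = UnityEnvironment(file_name="Tennis.app")   # placeholder path; depends on your OS
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=True)[brain_name]
states = env_info.vector_observations             # shape (2, 24): one row per agent
num_agents, state_size = states.shape
action_size = env.brains[brain_name].vector_action_space_size   # 2 per agent

# One random step; actions are continuous and bounded in [-1, 1].
actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
env_info = env.step(actions)[brain_name]
rewards, dones = env_info.rewards, env_info.local_done
```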
The task is episodic, and in order to solve the environment, the agents must get an average score of +0.5 (over 100 consecutive episodes, after taking the maximum over both agents). Specifically,
- After each episode, we add up the rewards that each agent received (without discounting), to get a score for each agent. This yields 2 (potentially different) scores. We then take the maximum of these 2 scores.
- This yields a single score for each episode.
The environment is considered solved when the average (over 100 episodes) of those scores is at least +0.5.
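In code, this scoring rule amounts to the following (a small sketch using NumPy and a rolling window; the example returns are made-up numbers):

```python
import numpy as np
from collections import deque

scores_window = deque(maxlen=100)        # last 100 episode scores

# At the end of an episode, each agent's undiscounted return, e.g.:
agent_returns = np.array([0.10, 0.09])   # made-up example values
episode_score = np.max(agent_returns)    # take the maximum over both agents
scores_window.append(episode_score)

solved = len(scores_window) == 100 and np.mean(scores_window) >= 0.5
```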
- Download the environment from one of the links below. You need only select the environment that matches your operating system:
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
(For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.
(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the "headless" version of the environment. You will not be able to watch the agent without enabling a virtual screen, but you will be able to train the agent. (To watch the agent, you should follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)
- Place the file in the DRLND GitHub repository, in the `p3_collab-compet/` folder, and unzip (or decompress) the file.
- Create and activate an Anaconda environment (or just activate one if you already have it).
- Install all requirements with `pip install`.
- Open the notebook `Tennis-MADDPG.ipynb` and click "Run all" to train and visualize the agents.
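For reference, the interaction loop that the notebook runs looks roughly like the sketch below; it assumes an `agents` list whose elements expose hypothetical `act` and `step` methods, so the actual notebook code may differ.

```python
import numpy as np

def run_episode(env, brain_name, agents, train_mode=True):
    """One episode: each agent acts on its own observation, and every
    joint transition is passed to both agents for (optional) learning."""
    env_info = env.reset(train_mode=train_mode)[brain_name]
    states = env_info.vector_observations                 # (2, 24)
    scores = np.zeros(len(agents))
    while True:
        actions = np.vstack([agent.act(s) for agent, s in zip(agents, states)])
        env_info = env.step(actions)[brain_name]
        rewards, dones = env_info.rewards, env_info.local_done
        next_states = env_info.vector_observations
        for agent in agents:                              # joint experience for each agent
            agent.step(states, actions, rewards, next_states, dones)
        scores += rewards
        states = next_states
        if np.any(dones):
            break
    return np.max(scores)                                 # episode score: max over agents
```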
The DDPG paper showed that many deep value-based techniques can be applied to policy-based methods with continuous action spaces, and I think that some value-based techniques that are not in the paper, but that I have applied before, could further improve this policy-based agent, for example:
- Prioritized experience replay.
- Double Q-learning for reducing overestimation bias (see the sketch after this list).
- Dueling network architecture.
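As one concrete example, a double-Q-style critic target in the spirit of TD3's clipped double Q-learning (an assumption for illustration, not code from this repository) keeps two target critics and bootstraps from the smaller estimate:

```python
import torch

def double_q_target(reward, done, next_state, next_action,
                    target_critic_1, target_critic_2, gamma=0.99):
    # Taking the minimum of two independent target critics counteracts
    # the overestimation bias of bootstrapping from a single critic.
    q1 = target_critic_1(next_state, next_action)
    q2 = target_critic_2(next_state, next_action)
    q_next = torch.min(q1, q2)
    return (reward + gamma * (1 - done) * q_next).detach()
```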
In addition to this, I think that some recent techniques specific to policy-based methods could be experimented with, such as:
- Trust Region Policy Optimization (TRPO)
- Proximal Policy Optimization (PPO); its clipped surrogate objective is sketched after this list.
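The heart of PPO is its clipped surrogate objective, which can be written in a few lines (a generic sketch, not tied to this repository):

```python
import torch

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, eps=0.2):
    # Probability ratio between the updated policy and the behavior policy.
    ratio = torch.exp(new_log_probs - old_log_probs)
    # Clipping the ratio keeps each update inside a small trust region
    # around the old policy, similar in spirit to TRPO's constraint.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```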
Besides this, experimentation with hyper-parameter search and different neural network architectures (like adding batch normalization to hidden layers, sketched below) could also help.
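For instance, a batch-normalized actor for this environment could look like the sketch below (the hidden sizes are illustrative, not the ones used in the notebook):

```python
import torch.nn as nn

# Hypothetical actor with batch normalization after each hidden layer;
# input/output sizes match the Tennis setup (24-dim observation, 2-dim action).
actor = nn.Sequential(
    nn.Linear(24, 128), nn.BatchNorm1d(128), nn.ReLU(),
    nn.Linear(128, 128), nn.BatchNorm1d(128), nn.ReLU(),
    nn.Linear(128, 2), nn.Tanh(),    # actions are bounded in [-1, 1]
)
```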