Combination of the Options RL framework and DQNs

vtkuzmin/HRL_DQN


HRL_DQN

Combination of the Options RL framework and DQNs. The project consists of the following parts:

DQN

Implementation of the plain DQN algorithm

To run the plain DQN, run ../DQN/run_dqn.py, where the environment, network, and learning parameters can be modified. The learning process for the plain DQN is implemented in ../DQN/dqn.py.
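The training loop implemented in dqn.py follows the standard DQN recipe: epsilon-greedy action selection, a replay buffer, minibatch regression toward a bootstrapped target, and a periodically synced target network. Below is a minimal, self-contained sketch of that loop; a tabular Q-function and a toy chain environment stand in for the repo's neural network and arm environment, so every name here is illustrative rather than taken from the repo's code.

```python
import random
from collections import defaultdict, deque

class ChainEnv:
    """Toy 1-D chain: start at 0, reward 1 for reaching the right end.
    A stand-in for the repo's arm environment; purely illustrative."""
    def __init__(self, n=6):
        self.n, self.s = n, 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):  # actions: 0 = left, 1 = right
        self.s = max(0, min(self.n - 1, self.s + (1 if a == 1 else -1)))
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

def train_dqn(episodes=300, gamma=0.99, eps=0.1, lr=0.5,
              batch=16, sync_every=50, max_steps=200):
    """Core DQN loop: act eps-greedily, store transitions in a replay
    buffer, regress Q toward the target network's bootstrapped value,
    and periodically sync the target network."""
    random.seed(0)
    env = ChainEnv()
    q = defaultdict(float)         # stand-in for the online network
    q_target = defaultdict(float)  # stand-in for the target network
    buf, steps = deque(maxlen=1000), 0
    for _ in range(episodes):
        s = env.reset()
        for _t in range(max_steps):
            if random.random() < eps or q[(s, 0)] == q[(s, 1)]:
                a = random.randrange(2)            # explore / break ties
            else:
                a = 0 if q[(s, 0)] > q[(s, 1)] else 1
            s2, r, done = env.step(a)
            buf.append((s, a, r, s2, done))
            s, steps = s2, steps + 1
            if len(buf) >= batch:                  # minibatch update
                for bs, ba, br, bs2, bdone in random.sample(list(buf), batch):
                    tgt = br if bdone else br + gamma * max(
                        q_target[(bs2, 0)], q_target[(bs2, 1)])
                    q[(bs, ba)] += lr * (tgt - q[(bs, ba)])
            if steps % sync_every == 0:
                q_target = q.copy()                # target-network sync
            if done:
                break
    return q

q = train_dqn()
```

In the repo the two defaultdicts would be TensorFlow networks and the update a gradient step on the Huber loss, but the control flow is the same.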

DQN with Options

A separate DQN represents each option, with an additional DQN over the options.

To run the DQN algorithm with options, first train the options by running ../DQN with Options/train_option1(go down).py and ../DQN with Options/train_option2(lift cube).py. After that, train the DQN over options by running ../DQN with Options/train_over_options.py. The learning process for the DQN over options is implemented in ../DQN with Options/dqn_with_options.py, and ../DQN with Options/option_class.py contains the implementation of how an option operates.
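The two-level scheme above — pre-trained options plus a learner over options — can be sketched as follows. An option pairs an intra-option policy with a termination condition, and the high-level learner performs an SMDP-style Q-update: after an option runs for k steps and collects discounted return g, it bootstraps with gamma**k. Everything below is an illustrative tabular stand-in (toy chain environment, hand-written option policies), not the code in option_class.py or dqn_with_options.py, where both levels are DQNs.

```python
import random
from collections import defaultdict

class ChainEnv:
    """Toy 1-D chain standing in for the arm environment (illustrative)."""
    def __init__(self, n=6):
        self.n, self.s = n, 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):  # 0 = left, 1 = right
        self.s = max(0, min(self.n - 1, self.s + (1 if a == 1 else -1)))
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

class Option:
    """An option = intra-option policy + termination condition.
    In the repo each option's policy would be a trained DQN."""
    def __init__(self, policy, terminate):
        self.policy, self.terminate = policy, terminate
    def run(self, env, s, gamma):
        """Execute until the option (or episode) terminates; return the
        discounted in-option return, next state, duration, and done flag."""
        g, disc, k = 0.0, 1.0, 0
        while True:
            s, r, done = env.step(self.policy(s))
            g += disc * r
            disc *= gamma
            k += 1
            if done or self.terminate(s):
                return g, s, k, done

# Two "pre-trained" options, playing the role of go-down / lift-cube.
options = [Option(lambda s: 1, lambda s: s == 5),   # head to the goal
           Option(lambda s: 0, lambda s: s == 0)]   # head to the start

def train_over_options(episodes=200, gamma=0.99, eps=0.1, lr=0.5):
    """Tabular Q-learning over options with the SMDP bootstrap gamma**k."""
    random.seed(0)
    env, Q = ChainEnv(), defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            o = random.randrange(2) if random.random() < eps else \
                max((0, 1), key=lambda o_: Q[(s, o_)])
            g, s2, k, done = options[o].run(env, s, gamma)
            tgt = g if done else g + gamma ** k * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, o)] += lr * (tgt - Q[(s, o)])
            s = s2
    return Q

Q = train_over_options()
```

The gamma**k factor is what distinguishes the update over options from an ordinary one-step Q-update: an option that takes longer to deliver the same reward is discounted accordingly.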

DQN&Options end-to-end

The unified architecture, in which the options and the DQN over options are trained end-to-end.
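The repository does not spell out here how the end-to-end variant ties the two levels together, but one standard way to unify them is intra-option-style learning, where every primitive transition updates both the acting option's Q-function and the Q-function over options at once. The tabular sketch below shows that joint update; all names are illustrative assumptions, not the repo's code.

```python
from collections import defaultdict

def end_to_end_step(q_opt, q_meta, o, s, a, r, s2, done, terminated,
                    gamma=0.99, lr=0.5, n_actions=2, n_options=2):
    """One joint update in the spirit of intra-option Q-learning.
    q_opt[o] is the Q-table of option o; q_meta is the Q-table over
    options. `terminated` flags whether option o ended at s2."""
    # Low level: ordinary Q-learning update for the option that acted.
    boot = 0.0 if done else max(q_opt[o][(s2, a2)] for a2 in range(n_actions))
    q_opt[o][(s, a)] += lr * (r + gamma * boot - q_opt[o][(s, a)])
    # High level: if the option continues, bootstrap through Q(s2, o);
    # if it terminated, bootstrap through the best next option.
    if done:
        u = 0.0
    elif terminated:
        u = max(q_meta[(s2, o2)] for o2 in range(n_options))
    else:
        u = q_meta[(s2, o)]
    q_meta[(s, o)] += lr * (r + gamma * u - q_meta[(s, o)])

# Usage: both levels learn from one terminal transition under option 0.
q_opt = [defaultdict(float) for _ in range(2)]
q_meta = defaultdict(float)
end_to_end_step(q_opt, q_meta, o=0, s=0, a=1, r=1.0, s2=1,
                done=True, terminated=True)
```

Compared with the staged pipeline above, no separate pre-training phase is needed: both levels improve from the same stream of transitions.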

environments

The testing environment (arm_env_dqn) used in the experiments, and the environments for training the options (arm_env_dqn_go_down and arm_env_dqn_lift_cube).
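A common pattern behind such per-option training environments is to wrap the base task with a subgoal-specific reward and termination, so each option can be trained as an ordinary episodic task. The sketch below is illustrative only — it assumes a gym-like step() returning (state, reward, done) and mirrors the relationship of arm_env_dqn_go_down / arm_env_dqn_lift_cube to arm_env_dqn without reproducing their code.

```python
class SubgoalEnv:
    """Wrap a base environment so an option can be trained on its own
    subgoal: reward 1 and episode end once the subgoal predicate holds.
    Illustrative sketch, not the repo's environment code."""
    def __init__(self, base_env, subgoal):
        self.env, self.subgoal = base_env, subgoal
    def reset(self):
        return self.env.reset()
    def step(self, a):
        s, _, done = self.env.step(a)
        reached = self.subgoal(s)
        return s, (1.0 if reached else 0.0), done or reached

# Example base task: a 1-D chain; the subgoal is reaching position 2.
class ChainEnv:
    def __init__(self, n=6):
        self.n, self.s = n, 0
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):  # 0 = left, 1 = right
        self.s = max(0, min(self.n - 1, self.s + (1 if a == 1 else -1)))
        return self.s, 0.0, self.s == self.n - 1

sub = SubgoalEnv(ChainEnv(), lambda s: s == 2)
s = sub.reset()
s, r1, d1 = sub.step(1)   # 0 -> 1: subgoal not reached yet
s, r2, d2 = sub.step(1)   # 1 -> 2: subgoal reached, episode ends
```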

utils

Utilities needed for plotting and for the DQN implementation.

The file dqn_utils.py contains the following:

    huber_loss
    Constant, Piecewise, and Linear schedules
    compute_exponential_averages
    minimize_and_clip
    initialize_interdependent_variables
    ReplayBuffer
    ReplayBufferOptions
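Three of these utilities are simple enough to sketch in pure Python: the Huber loss (quadratic near zero, linear in the tails, which keeps Q-regression robust to outlier TD errors), a linear annealing schedule (typically used for exploration epsilon), and a uniform-sampling replay buffer. The signatures below are illustrative and may differ from those in dqn_utils.py.

```python
import random
from collections import deque

def huber_loss(x, delta=1.0):
    """Quadratic for |x| <= delta, linear beyond; the standard DQN loss."""
    return 0.5 * x * x if abs(x) <= delta else delta * (abs(x) - 0.5 * delta)

class LinearSchedule:
    """Linearly anneal a value (e.g. exploration epsilon) over total_steps,
    then hold it at the final value."""
    def __init__(self, total_steps, final, initial=1.0):
        self.total_steps, self.final, self.initial = total_steps, final, initial
    def value(self, t):
        frac = min(t / self.total_steps, 1.0)
        return self.initial + frac * (self.final - self.initial)

class ReplayBuffer:
    """Fixed-size FIFO transition store with uniform minibatch sampling."""
    def __init__(self, size):
        self.storage = deque(maxlen=size)  # old transitions drop off
    def add(self, s, a, r, s2, done):
        self.storage.append((s, a, r, s2, done))
    def sample(self, batch_size):
        return random.sample(list(self.storage), batch_size)
```

ReplayBufferOptions would additionally record which option generated each transition, so that each option's DQN trains only on its own experience.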
