ASD: A framework for generation of Task Hierarchies for transfer in Reinforcement Learning
ASD is a new approach to Hierarchical Reinforcement Learning. Task hierarchies expressed in the ASD framework impose constraints that keep the RL agent from pursuing policies that cannot be optimal.
Project status: Under Development
Overview / Usage
We present ASD, a new approach to Hierarchical Reinforcement Learning.
Existing Hierarchical Reinforcement Learning methods construct task hierarchies but do not prevent wasteful exploration when tasks must be performed in a particular sequence, so the agent needlessly explores all permutations of the tasks. We describe a method that uses the existing optimal policies of some instances of an environment to create task hierarchies for that environment; the resulting hierarchies can then be used to solve new instances of the same environment. The generated task hierarchies use the ASD framework, which establishes an ordering of tasks. When task hierarchies are expressed in the ASD framework, the RL agent is subject to tighter constraints that prevent it from pursuing policies that cannot be optimal, enabling it to reach the optimal policy faster. We present one algorithm to generate the task hierarchies and another that merges the knowledge gained from multiple task hierarchies into one. These algorithms have been evaluated on standard RL domains such as the Taxi-Cab and Wargus domains.
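To illustrate the kind of constraint an ordering of tasks can impose, here is a minimal Python sketch (not the project's code) of pruning the subtasks an agent is allowed to attempt next, given a precedence relation over subtasks. The subtask names and the `precedence` / `allowed_subtasks` identifiers are hypothetical.

```python
def allowed_subtasks(completed, all_subtasks, precedence):
    """Return the subtasks whose prerequisites (per the ordering) are all completed.

    precedence: dict mapping a subtask to the set of subtasks that must precede it.
    """
    return [
        t for t in all_subtasks
        if t not in completed and precedence.get(t, set()) <= completed
    ]

# Illustrative Taxi-Cab-like ordering: "dropoff" can never precede "pickup",
# so the agent never explores policies that attempt subtasks out of order.
subtasks = ["navigate_to_passenger", "pickup", "navigate_to_destination", "dropoff"]
precedence = {
    "pickup": {"navigate_to_passenger"},
    "navigate_to_destination": {"pickup"},
    "dropoff": {"navigate_to_destination"},
}

print(allowed_subtasks(set(), subtasks, precedence))                      # ['navigate_to_passenger']
print(allowed_subtasks({"navigate_to_passenger"}, subtasks, precedence))  # ['pickup']
```

Restricting the agent's choice of next subtask in this way is what rules out entire families of permutations before any exploration is spent on them.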
Methodology / Approach
The approach consists of two algorithms: one generates task hierarchies in the ASD format from the optimal policies of already-solved instances, and another merges the results from multiple instances to obtain complete information about the dynamics of the Reinforcement Learning environment.
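The sketch below shows how the two steps might fit together, under the simplifying assumption that a task hierarchy can be summarised as a set of precedence pairs (a, b) meaning "subtask a is completed before subtask b". The function names, the trajectory format, and the merging rule (intersection of precedence pairs across instances) are assumptions made for illustration, not the project's actual algorithms.

```python
def extract_precedence(subtask_sequence):
    """Sketch of algorithm 1: record every precedence pair observed in the
    order in which one instance's optimal policy completes its subtasks."""
    return {(a, b) for i, a in enumerate(subtask_sequence)
                   for b in subtask_sequence[i + 1:]}

def merge_hierarchies(per_instance_orders):
    """Sketch of algorithm 2: keep only the precedence pairs that hold in
    every instance, so the merged hierarchy reflects constraints common to
    the environment rather than artefacts of a single instance."""
    merged = None
    for order in per_instance_orders:
        merged = order if merged is None else merged & order
    return merged or set()

# Two solved instances of the same (hypothetical) environment:
inst_a = extract_precedence(["pickup", "navigate", "dropoff"])
inst_b = extract_precedence(["pickup", "refuel", "navigate", "dropoff"])

# Only the orderings shared by both instances survive the merge:
# {('pickup', 'navigate'), ('pickup', 'dropoff'), ('navigate', 'dropoff')}
print(merge_hierarchies([inst_a, inst_b]))
```

Merging by intersection is one plausible way to keep only the ordering constraints that generalise across instances; the project's merging algorithm may combine the per-instance information differently.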
Technologies Used
No technologies were used.