Search Results for author: Praveen K. Pilly

Found 9 papers, 3 papers with code

The configurable tree graph (CT-graph): measurable problems in partially observable and distal reward environments for lifelong reinforcement learning

1 code implementation • 21 Jan 2023 • Andrea Soltoggio, Eseoghene Ben-Iwhiwhu, Christos Peridis, Pawel Ladosz, Jeffery Dick, Praveen K. Pilly, Soheil Kolouri

This paper introduces a set of formally defined and transparent problems for reinforcement learning algorithms with the following characteristics: (1) variable degrees of observability (non-Markov observations), (2) distal and sparse rewards, (3) variable and hierarchical reward structure, (4) multiple-task generation, (5) variable problem complexity.

Tasks: Reinforcement Learning (RL)
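As a rough illustration of the listed characteristics (partial observability, a single distal sparse reward, per-seed multi-task generation, configurable depth and branching), here is a hypothetical toy tree environment; this is an illustrative sketch, not the actual CT-graph API:

```python
import random

class TinyTreeEnv:
    """Hypothetical sketch of a tree-structured decision problem: the agent
    descends a depth-d, branching-b tree and receives a single sparse reward
    only at one designated leaf (a distal reward)."""

    def __init__(self, depth=3, branching=2, seed=0):
        self.depth, self.branching = depth, branching
        rng = random.Random(seed)
        # One leaf path, chosen per seed, carries the reward (multi-task generation).
        self.goal_path = tuple(rng.randrange(branching) for _ in range(depth))
        self.reset()

    def reset(self):
        self.path = []
        return self._observe()

    def _observe(self):
        # Partial observability: only the current depth is exposed,
        # not the branches taken so far (a non-Markov observation).
        return len(self.path)

    def step(self, action):
        self.path.append(action)
        done = len(self.path) == self.depth
        reward = 1.0 if (done and tuple(self.path) == self.goal_path) else 0.0
        return self._observe(), reward, done

env = TinyTreeEnv(depth=3, branching=2, seed=0)
obs = env.reset()
done = False
while not done:
    obs, reward, done = env.step(0)  # a fixed policy: always take branch 0
```

Depth and branching factor give the variable problem complexity the snippet mentions: the chance of reaching the rewarded leaf by uniform exploration shrinks as b^-d.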

Lifelong Reinforcement Learning with Modulating Masks

1 code implementation • 21 Dec 2022 • Eseoghene Ben-Iwhiwhu, Saptarshi Nath, Praveen K. Pilly, Soheil Kolouri, Andrea Soltoggio

The results suggest that RL with modulating masks is a promising approach to lifelong learning, to the composition of knowledge to learn increasingly complex tasks, and to knowledge reuse for efficient and faster learning.

Tasks: Reinforcement Learning (RL)
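Modulating-mask approaches generally gate a shared backbone with small per-task masks, so each task selects its own subnetwork while the backbone weights stay fixed. A minimal NumPy sketch of that idea, where `modulated_forward` and the thresholding scheme are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared, frozen backbone weights, reused across all tasks.
W = rng.normal(size=(4, 4))

def modulated_forward(x, W, score, threshold=0.0):
    """Hypothetical mask modulation: a per-task real-valued 'score' matrix is
    thresholded into a binary mask that gates the shared weights."""
    mask = (score > threshold).astype(W.dtype)
    return np.maximum((W * mask) @ x, 0.0)  # ReLU layer with masked weights

# Each task stores only its own small score matrix, not a full copy of W,
# which is what makes mask reuse and composition cheap in lifelong learning.
task_scores = {t: rng.normal(size=W.shape) for t in ("task_a", "task_b")}
x = rng.normal(size=4)
y_a = modulated_forward(x, W, task_scores["task_a"])
y_b = modulated_forward(x, W, task_scores["task_b"])
```

Composition of knowledge, as described in the snippet, could then amount to combining previously learned masks when a new task arrives.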

Context Meta-Reinforcement Learning via Neuromodulation

1 code implementation • 30 Oct 2021 • Eseoghene Ben-Iwhiwhu, Jeffery Dick, Nicholas A. Ketz, Praveen K. Pilly, Andrea Soltoggio

Meta-reinforcement learning (meta-RL) algorithms enable agents to adapt quickly to tasks from a few samples in dynamic environments.

Tasks: Continuous Control, Meta Reinforcement Learning, +2

Lifelong Learning with Sketched Structural Regularization

no code implementations • 17 Apr 2021 • Haoran Li, Aditya Krishnan, Jingfeng Wu, Soheil Kolouri, Praveen K. Pilly, Vladimir Braverman

In practice, due to computational constraints, most structural regularization (SR) methods crudely approximate the importance matrix by its diagonal.

Tasks: Continual Learning, Permuted-MNIST
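For context on what a diagonal approximation looks like, an SR penalty in the EWC style keeps only the diagonal of the importance matrix F, penalizing each parameter's drift from its old value in proportion to its importance. The helpers below (`diagonal_sr_penalty`, `full_sr_penalty`) are hypothetical names for contrast, not the paper's sketching method:

```python
import numpy as np

def diagonal_sr_penalty(theta, theta_old, importance_diag, lam=1.0):
    """Diagonal SR penalty: (lam/2) * sum_i F_ii * (theta_i - theta_old_i)^2.
    Only a vector of per-parameter importances is stored."""
    d = theta - theta_old
    return 0.5 * lam * float(np.sum(importance_diag * d * d))

def full_sr_penalty(theta, theta_old, F, lam=1.0):
    """Full quadratic form (lam/2) * d^T F d, which the diagonal version
    approximates; storing F exactly is quadratic in the parameter count,
    which is the computational constraint the snippet refers to."""
    d = theta - theta_old
    return 0.5 * lam * float(d @ F @ d)
```

When F happens to be diagonal the two penalties coincide; the gap between them on off-diagonal structure is what a compressed (sketched) representation of F aims to close.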

Sliced Cramer Synaptic Consolidation for Preserving Deeply Learned Representations

no code implementations • ICLR 2020 • Soheil Kolouri, Nicholas A. Ketz, Andrea Soltoggio, Praveen K. Pilly

Deep neural networks suffer from an inability to preserve learned data representations (i.e., catastrophic forgetting) in domains where the input data distribution is non-stationary and changes during training.

Tasks: Incremental Learning
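Sliced distances of this family are typically computed by projecting samples onto random one-dimensional directions and averaging a cheap 1D distance over the projections. A hedged sketch of that mechanism, using a grid-based empirical-CDF approximation of the 1D Cramér-2 distance; this illustrates the sliced-distance idea only, not the paper's consolidation loss:

```python
import numpy as np

def cramer2_1d(a, b, grid_size=512):
    """Approximate 1D Cramer-2 distance: integral of (F_a(t) - F_b(t))^2 dt,
    with empirical CDFs evaluated on a uniform grid."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    t = np.linspace(lo, hi, grid_size)
    Fa = np.searchsorted(np.sort(a), t, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), t, side="right") / len(b)
    dt = (hi - lo) / (grid_size - 1)
    return float(np.sum((Fa - Fb) ** 2) * dt)  # Riemann-sum integral

def sliced_cramer(X, Y, n_projections=64, seed=0):
    """Average 1D Cramer-2 distance over random unit-vector projections of
    two high-dimensional sample sets, e.g. layer representations before and
    after training on a new task."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        total += cramer2_1d(X @ theta, Y @ theta)
    return total / n_projections
```

Slicing sidesteps the cost of comparing distributions directly in high dimensions: each projection reduces the comparison to sorting and a 1D integral.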

Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay

no code implementations • 11 Mar 2019 • Mohammad Rostami, Soheil Kolouri, Praveen K. Pilly

We sample from this distribution and use experience replay to avoid forgetting while simultaneously accumulating new knowledge into the abstract distribution, coupling the current task with past experience.
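One way to picture replay from an abstract distribution: rather than storing raw past samples, keep a running Gaussian summary of embedded experience and draw pseudo-samples from it while training on the current task. The `AbstractReplay` class below is an illustrative assumption (a diagonal Gaussian with Welford updates), not the paper's generative model:

```python
import numpy as np

class AbstractReplay:
    """Hypothetical sketch: a running diagonal-Gaussian summary of past
    experience that can be both updated with new data (accumulating
    knowledge) and sampled for replay (avoiding forgetting)."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)  # running sum of squared deviations (Welford)

    def update(self, batch):
        # Fold a batch of embedded samples into the running summary.
        for x in batch:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

    def sample(self, k, rng=None):
        # Draw k pseudo-samples for interleaved replay with current-task data.
        rng = rng or np.random.default_rng()
        var = self.m2 / max(self.n - 1, 1)
        return rng.normal(self.mean, np.sqrt(var), size=(k, len(self.mean)))
```

The memory cost is constant in the number of past samples, which is the usual appeal of replacing a raw replay buffer with an abstract distribution.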

Neuromodulated Goal-Driven Perception in Uncertain Domains

no code implementations • 16 Feb 2019 • Xinyun Zou, Soheil Kolouri, Praveen K. Pilly, Jeffrey L. Krichmar

In uncertain domains, the goals are often unknown and need to be predicted by the organism or system.

