Search Results for author: Ghada Sokar

Found 13 papers, 12 papers with code

Mixtures of Experts Unlock Parameter Scaling for Deep RL

no code implementations · 13 Feb 2024 · Johan Obando-Ceron, Ghada Sokar, Timon Willi, Clare Lyle, Jesse Farebrother, Jakob Foerster, Gintare Karolina Dziugaite, Doina Precup, Pablo Samuel Castro

The recent rapid progress in (self) supervised learning models is in large part predicted by empirical scaling laws: a model's performance scales proportionally to its size.

reinforcement-learning · Self-Supervised Learning

Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates

1 code implementation · 28 Aug 2023 · Murat Onur Yildirim, Elif Ceren Gok Yildirim, Ghada Sokar, Decebal Constantin Mocanu, Joaquin Vanschoren

Therefore, we perform a comprehensive study investigating various DST components to find the best topology per task on the well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup, since our primary focus is to evaluate the performance of various DST criteria rather than the process of mask selection.

Continual Learning

The Dormant Neuron Phenomenon in Deep Reinforcement Learning

1 code implementation · 24 Feb 2023 · Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci

In this work we identify the dormant neuron phenomenon in deep reinforcement learning, where an agent's network suffers from an increasing number of inactive neurons, thereby affecting network expressivity.

reinforcement-learning · Reinforcement Learning (RL)
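
As a rough illustration of how the dormant neuron phenomenon described above can be quantified, the sketch below counts neurons whose normalized mean absolute activation falls below a small threshold. The NumPy interface, threshold value, and normalization details are assumptions made here for exposition, not the authors' released code.

```python
import numpy as np

def dormant_neuron_fraction(activations, tau=0.025):
    """Estimate the fraction of dormant neurons in one layer.

    `activations` is a (batch, num_neurons) array of post-activation values
    collected from a forward pass. A neuron is flagged as dormant when its
    mean absolute activation, normalized by the layer-wide average, falls
    below the threshold `tau` (the value used here is illustrative).
    """
    mean_act = np.abs(activations).mean(axis=0)        # per-neuron activation score
    normalized = mean_act / (mean_act.mean() + 1e-8)   # normalize by the layer mean
    return float((normalized <= tau).mean())

# Usage: a layer where half the units output zero everywhere
# reports a dormant fraction of roughly 0.5.
acts = np.concatenate([np.zeros((64, 8)), np.random.rand(64, 8)], axis=1)
print(dormant_neuron_fraction(acts))
```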

Where to Pay Attention in Sparse Training for Feature Selection?

1 code implementation · 26 Nov 2022 · Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu

Our proposed approach outperforms the state-of-the-art methods in terms of selecting informative features while reducing training iterations and computational costs substantially.

feature selection

Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks

1 code implementation · 11 Oct 2021 · Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy

To address this challenge, we propose a new CL method, named AFAF, that aims to Avoid Forgetting and Allow Forward transfer in class-IL using fixed-capacity models.

Class Incremental Learning · Incremental Learning · +2

Dynamic Sparse Training for Deep Reinforcement Learning

1 code implementation · 8 Jun 2021 · Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone

In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process.

Continuous Control · Decision Making · +3
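
For readers unfamiliar with the term, the sketch below shows a generic prune-and-regrow step over a weight mask, in the spirit of sparse evolutionary training; it is meant only to illustrate what "dynamic sparse training" refers to. The function name, hyperparameters, and random growth criterion are illustrative assumptions, not the procedure proposed in this paper.

```python
import numpy as np

def prune_and_regrow(weights, mask, prune_frac=0.3, rng=None):
    """One generic dynamic-sparse-training update (illustrative sketch).

    `weights` is a dense array and `mask` a boolean array of the same shape
    marking active connections. The weakest active connections are dropped
    and the same number of inactive positions are regrown at random, so the
    overall sparsity level stays constant throughout training.
    """
    rng = rng or np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_update = int(prune_frac * active.size)

    # Prune: deactivate the smallest-magnitude active connections.
    weakest = active[np.argsort(np.abs(weights.flat[active]))[:n_update]]
    mask.flat[weakest] = False
    weights.flat[weakest] = 0.0

    # Grow: reactivate randomly chosen inactive positions with small weights.
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=n_update, replace=False)
    mask.flat[grown] = True
    weights.flat[grown] = rng.normal(0.0, 0.01, size=n_update)
    return weights, mask
```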

Self-Attention Meta-Learner for Continual Learning

1 code implementation · 28 Jan 2021 · Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy

In this paper, we propose a new method, named Self-Attention Meta-Learner (SAM), which learns prior knowledge for continual learning that permits learning a sequence of tasks while avoiding catastrophic forgetting.

Continual Learning · Split-CIFAR-10 · +1

Learning Invariant Representation for Continual Learning

1 code implementation · 15 Jan 2021 · Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy

Finally, we analyze the role of the shared invariant representation in mitigating the forgetting problem especially when the number of replayed samples for each previous task is small.

Class Incremental Learning · Incremental Learning · +2

SpaceNet: Make Free Space For Continual Learning

1 code implementation · 15 Jul 2020 · Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy

Regularization-based methods maintain a fixed model capacity; however, previous studies showed huge performance degradation for these methods when the task identity is not available during inference (e.g., the class-incremental learning scenario).

Class Incremental Learning · Incremental Learning · +1

Topological Insights into Sparse Neural Networks

3 code implementations · 24 Jun 2020 · Shiwei Liu, Tim Van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu

However, comparing different sparse topologies and determining how sparse topologies evolve during training, especially when sparse structure optimization is involved, remain challenging open questions.
