1 code implementation • 21 Oct 2024 • Camiel Oerlemans, Bram Grooten, Michiel Braat, Alaa Alassi, Emilia Silvas, Decebal Constantin Mocanu
Predicting the behavior of road users accurately is crucial to enable the safe operation of autonomous vehicles in urban or densely populated areas.
no code implementations • 3 Oct 2024 • Boqian Wu, Qiao Xiao, Shunxin Wang, Nicola Strisciuglio, Mykola Pechenizkiy, Maurice van Keulen, Decebal Constantin Mocanu, Elena Mocanu
It is generally perceived that Dynamic Sparse Training opens the door to a new era of scalability and efficiency for artificial neural networks at, perhaps, some costs in accuracy performance for the classification task.
no code implementations • 13 Sep 2024 • Qiao Xiao, Boqian Wu, Lu Yin, Christopher Neil Gadzinski, Tianjin Huang, Mykola Pechenizkiy, Decebal Constantin Mocanu
These hard samples play a crucial role in the optimal performance of deep neural networks.
1 code implementation • 8 Aug 2024 • Zahra Atashgahi, Tennison Liu, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu, Mihaela van der Schaar
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
no code implementations • 26 Jun 2024 • Qiao Xiao, Pingchuan Ma, Adriana Fernandez-Lopez, Boqian Wu, Lu Yin, Stavros Petridis, Mykola Pechenizkiy, Maja Pantic, Decebal Constantin Mocanu, Shiwei Liu
The recent success of Automatic Speech Recognition (ASR) is largely attributed to the ever-growing amount of training data.
no code implementations • 10 Jun 2024 • Calarina Muslimani, Bram Grooten, Deepak Ranganatha Sastry Mamillapalli, Mykola Pechenizkiy, Decebal Constantin Mocanu, Matthew E. Taylor
It becomes essential that agents learn to focus on the subset of task-relevant environment features.
1 code implementation • 13 Mar 2024 • Murat Onur Yildirim, Elif Ceren Gok Yildirim, Decebal Constantin Mocanu, Joaquin Vanschoren
Class incremental learning (CIL) in an online continual learning setting strives to acquire knowledge on a series of novel classes from a data stream, using each data point only once for training.
1 code implementation • 23 Dec 2023 • Bram Grooten, Tristan Tomilin, Gautham Vasan, Matthew E. Taylor, A. Rupam Mahmood, Meng Fang, Mykola Pechenizkiy, Decebal Constantin Mocanu
Our algorithm improves the agent's focus with useful masks, while its efficient Masker network only adds 0.2% more parameters to the original structure, in contrast to previous work.
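A minimal sketch of the masking idea described above, assuming an elementwise gate over the observation; the tiny `masker` function and its sigmoid gating are illustrative stand-ins, not the paper's exact Masker architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def masker(obs, W, b):
    """Hypothetical tiny Masker: produces a per-feature gate in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(obs @ W + b)))  # sigmoid gate

obs = rng.normal(size=(1, 8))           # a noisy observation
W = rng.normal(scale=0.1, size=(8, 8))  # only a handful of extra parameters
b = np.zeros(8)

masked_obs = obs * masker(obs, W, b)    # the policy then sees the masked input
print(masked_obs.shape)
```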
1 code implementation • 7 Dec 2023 • Boqian Wu, Qiao Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, Maurice van Keulen, Elena Mocanu
E2ENet achieves comparable accuracy on the large-scale AMOS-CT challenge, while saving over 68% of the parameter count and 29% of the FLOPs in the inference phase, compared with the previous best-performing method.
1 code implementation • 28 Aug 2023 • Murat Onur Yildirim, Elif Ceren Gok Yildirim, Ghada Sokar, Decebal Constantin Mocanu, Joaquin Vanschoren
Therefore, we perform a comprehensive study investigating various DST components to find the best topology per task on the well-known CIFAR100 and miniImageNet benchmarks in a task-incremental CL setup, since our primary focus is to evaluate the performance of various DST criteria rather than the process of mask selection.
1 code implementation • NeurIPS 2023 • Aleksandra I. Nowak, Bram Grooten, Decebal Constantin Mocanu, Jacek Tabor
The key components of this framework are the pruning and growing criteria, which are repeatedly applied during the training process to adjust the network's sparse connectivity.
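A minimal sketch of one prune-and-grow step, assuming magnitude-based pruning and random regrowth (one common pair of criteria; actual dynamic sparse training methods differ in both choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_and_grow(weights, mask, fraction=0.3):
    """One DST update: drop the weakest active weights, regrow elsewhere."""
    active = np.flatnonzero(mask)
    k = int(fraction * active.size)
    # Prune: remove the k active connections with the smallest magnitude.
    drop = active[np.argsort(np.abs(weights.ravel()[active]))[:k]]
    mask.ravel()[drop] = 0
    weights.ravel()[drop] = 0.0
    # Grow: activate k currently inactive connections (here: chosen at random).
    inactive = np.flatnonzero(mask.ravel() == 0)
    grow = rng.choice(inactive, size=k, replace=False)
    mask.ravel()[grow] = 1
    weights.ravel()[grow] = rng.normal(scale=0.01, size=k)  # fresh initialization
    return weights, mask

W = rng.normal(size=(16, 16))
M = (rng.random((16, 16)) < 0.1).astype(np.int8)  # ~90% sparse layer
W *= M
W, M = prune_and_grow(W, M)  # applied repeatedly between training intervals
```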
1 code implementation • 28 May 2023 • Zahra Atashgahi, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu
Efficient time series forecasting has become critical for real-world applications, particularly with deep neural networks (DNNs).
1 code implementation • 10 Mar 2023 • Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu
Feature selection that selects an informative subset of variables from data not only enhances the model interpretability and performance but also alleviates the resource demands.
1 code implementation • 13 Feb 2023 • Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew E. Taylor, Mykola Pechenizkiy, Decebal Constantin Mocanu
Tomorrow's robots will need to distinguish useful information from noise when performing different tasks.
1 code implementation • 19 Dec 2022 • Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu
The receptive field (RF), which determines the region of the time series that is "seen" and used, is critical to improving performance in time series classification (TSC).
1 code implementation • 28 Nov 2022 • Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that, without any optimization of its weights (i.e., untrained), can match the performance of fully trained dense networks.
1 code implementation • 26 Nov 2022 • Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu
Our proposed approach outperforms the state-of-the-art methods in terms of selecting informative features while reducing training iterations and computational costs substantially.
1 code implementation • 8 Jul 2022 • Zahra Atashgahi, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
We show that ALACPD, on average, ranks first among state-of-the-art CPD algorithms in terms of quality of the time series segmentation, and it is on par with the best performer in terms of the accuracy of the estimated change-points.
no code implementations • 30 May 2022 • Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu
Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch.
1 code implementation • ICLR 2022 • Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy
In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding: random pruning at initialization can be quite powerful for the sparse training of modern neural networks.
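A minimal sketch of random pruning at initialization: draw a random binary mask once at a chosen sparsity and keep it fixed during training. The uniform layer-wise sparsity used below is an assumption, not the paper's only setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(shape, sparsity):
    """Random binary mask drawn once at initialization."""
    mask = np.zeros(int(np.prod(shape)), dtype=np.float32)
    keep = rng.choice(mask.size, size=int((1 - sparsity) * mask.size), replace=False)
    mask[keep] = 1.0
    return mask.reshape(shape)

W = rng.normal(scale=0.1, size=(256, 128)).astype(np.float32)
M = random_mask(W.shape, sparsity=0.9)  # keep only 10% of the connections
W *= M                                  # the mask stays fixed throughout training
```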
1 code implementation • 11 Oct 2021 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
To address this challenge, we propose a new CL method, named AFAF, that aims to Avoid Forgetting and Allow Forward transfer in class-IL using fixed-capacity models.
2 code implementations • ICLR 2022 • Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Our framework, FreeTickets, is defined as the ensemble of these relatively cheap sparse subnetworks.
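As an illustration of the ensemble idea, a minimal sketch that averages the class probabilities of several sparse subnetworks; the number of subnetworks, the random predictions, and averaging as the combination rule are all assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical softmax outputs of 5 cheap sparse subnetworks on 32 samples, 10 classes,
# e.g. collected at different points of a single training run.
subnet_probs = rng.dirichlet(np.ones(10), size=(5, 32))

ensemble_probs = subnet_probs.mean(axis=0)      # average over the subnetworks
ensemble_pred = ensemble_probs.argmax(axis=-1)  # final class decision per sample
```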
2 code implementations • NeurIPS 2021 • Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn much attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization).
Ranked #3 on Sparse Learning on ImageNet
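For contrast with before-training pruning, a minimal sketch of the magnitude criterion behind iterative magnitude pruning: zero out a fraction of the smallest-magnitude weights after training, then retrain and repeat. The per-round pruning ratio is an assumption:

```python
import numpy as np

def magnitude_prune(weights, prune_fraction=0.2):
    """Zero out the prune_fraction smallest-magnitude weights (one IMP round)."""
    threshold = np.quantile(np.abs(weights), prune_fraction)
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W, M = magnitude_prune(W)  # in IMP this is repeated, with retraining in between
```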
1 code implementation • 8 Jun 2021 • Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone
In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process.
no code implementations • 2 Mar 2021 • Decebal Constantin Mocanu, Elena Mocanu, Tiago Pinto, Selima Curci, Phuong H. Nguyen, Madeleine Gibescu, Damien Ernst, Zita A. Vale
A fundamental task for artificial intelligence is learning.
4 code implementations • 4 Feb 2021 • Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy
By starting from a random sparse network and continuously exploring sparse connectivities during training, we can perform an Over-Parameterization in the space-time manifold, closing the gap in the expressibility between sparse training and dense training.
Ranked #4 on Sparse Learning on ImageNet
1 code implementation • 2 Feb 2021 • Selima Curci, Decebal Constantin Mocanu, Mykola Pechenizkiy
Recently, sparse training methods have started to be established as a de facto approach for training and inference efficiency in artificial neural networks.
1 code implementation • 28 Jan 2021 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
In this paper, we propose a new method, named Self-Attention Meta-Learner (SAM), which learns a prior knowledge for continual learning that permits learning a sequence of tasks, while avoiding catastrophic forgetting.
1 code implementation • 22 Jan 2021 • Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy
Sparse neural networks have been widely applied to reduce the computational demands of training and deploying over-parameterized deep neural networks.
1 code implementation • 15 Jan 2021 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
Finally, we analyze the role of the shared invariant representation in mitigating the forgetting problem especially when the number of replayed samples for each previous task is small.
2 code implementations • 1 Dec 2020 • Zahra Atashgahi, Ghada Sokar, Tim Van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
This method, named QuickSelection, introduces neuron strength in sparse neural networks as a criterion to measure feature importance.
Ranked #2 on Dimensionality Reduction on EMNIST
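A minimal sketch of the neuron-strength criterion, assuming (as a simplification) that a feature's importance is the summed magnitude of its surviving outgoing connections in the first sparse layer:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 20, 50
W = rng.normal(size=(n_features, n_hidden))
mask = rng.random(W.shape) < 0.2           # sparse first layer (~80% sparse)
W_sparse = W * mask

strength = np.abs(W_sparse).sum(axis=1)    # neuron strength per input feature
top_k = np.argsort(strength)[::-1][:5]     # indices of the 5 selected features
print(top_k)
```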
1 code implementation • 15 Jul 2020 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
Regularization-based methods maintain a fixed model capacity; however, previous studies have shown that these methods suffer huge performance degradation when the task identity is not available during inference (e.g., the class incremental learning scenario).
2 code implementations • 24 Jun 2020 • Shiwei Liu, Tim Van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu
However, comparing different sparse topologies and determining how sparse topologies evolve during training, especially when sparse structure optimization is involved, remain challenging open questions.
no code implementations • 10 Feb 2020 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, George Fletcher, Mykola Pechenizkiy
A learning process with the plasticity property often requires reinforcement signals to guide the process.
1 code implementation • 27 Jun 2019 • Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy
Large neural networks are very successful in various tasks.
no code implementations • 2 Apr 2019 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, Matt Coler, George Fletcher, Mykola Pechenizkiy
Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons.
no code implementations • 22 Mar 2019 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, George Fletcher, Mykola Pechenizkiy
Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e., rules that update synapses based on neuron activations and reinforcement signals.
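A minimal sketch of a reward-modulated Hebbian update; the specific rule and coefficients are illustrative, not the evolved rules studied in the paper:

```python
import numpy as np

def hebbian_update(w, pre, post, reward, lr=0.01):
    """Reward-modulated Hebbian rule: strengthen co-active synapses when rewarded."""
    return w + lr * reward * np.outer(pre, post)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 3))  # synapses from 4 pre- to 3 post-neurons
pre = rng.random(4)                     # presynaptic activations
post = np.tanh(pre @ w)                 # postsynaptic activations
w = hebbian_update(w, pre, post, reward=+1.0)
```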
2 code implementations • 17 Mar 2019 • Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
Concretely, by exploiting the cosine similarity metric to measure the importance of connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without computing dense gradients in the backward pass.
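A minimal sketch of the cosine-similarity scoring, assuming connection importance is measured between pre- and post-synaptic activations collected over a batch; the batch size, layer sizes, and top-k are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

pre_acts = rng.random((128, 32))   # activations of 32 pre-neurons over a batch
post_acts = rng.random((128, 16))  # activations of 16 post-neurons over the same batch

# Cosine similarity between every (pre, post) neuron pair.
pre_n = pre_acts / np.linalg.norm(pre_acts, axis=0, keepdims=True)
post_n = post_acts / np.linalg.norm(post_acts, axis=0, keepdims=True)
scores = pre_n.T @ post_n          # shape (32, 16)

# Grow the k highest-scoring connections (no dense backward pass needed).
k = 10
flat_top = np.argsort(scores.ravel())[::-1][:k]
new_connections = np.unravel_index(flat_top, scores.shape)  # (pre_idx, post_idx) pairs
```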
no code implementations • 26 Jan 2019 • Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy
However, LSTMs are prone to being memory-bandwidth limited in realistic applications and require prohibitively long training and inference times as model sizes keep increasing.
4 code implementations • 26 Jan 2019 • Shiwei Liu, Decebal Constantin Mocanu, Amarsagar Reddy Ramapuram Matavalam, Yulong Pei, Mykola Pechenizkiy
Despite the success of ANNs, it is challenging to train and deploy modern ANNs on commodity hardware due to the ever-increasing model size and the unprecedented growth in the data volumes.
no code implementations • 19 Apr 2018 • Anil Yaman, Decebal Constantin Mocanu, Giovanni Iacca, George Fletcher, Mykola Pechenizkiy
Many real-world control and classification tasks involve a large number of features.
no code implementations • 18 Apr 2018 • Decebal Constantin Mocanu, Elena Mocanu
In an attempt to solve this problem, the one-shot learning paradigm, which makes use of just one labeled sample per class and prior knowledge, becomes increasingly important.
no code implementations • 18 Jul 2017 • Elena Mocanu, Decebal Constantin Mocanu, Phuong H. Nguyen, Antonio Liotta, Michael E. Webber, Madeleine Gibescu, J. G. Slootweg
Unprecedented high volumes of data are becoming available with the growth of the advanced metering infrastructure.
2 code implementations • 15 Jul 2017 • Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta
Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods.
no code implementations • 18 Oct 2016 • Decebal Constantin Mocanu, Maria Torres Vega, Eric Eaton, Peter Stone, Antonio Liotta
Conceived in the early 1990s, Experience Replay (ER) has been shown to be a successful mechanism to allow online learning algorithms to reuse past experiences.
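A minimal sketch of an experience replay buffer: store past transitions and sample them uniformly for reuse; the capacity, batch size, and toy transitions are illustrative:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay: store past transitions, sample them uniformly."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

buf = ReplayBuffer()
for t in range(100):            # toy transitions
    buf.add(t, 0, 1.0, t + 1, False)
batch = buf.sample(32)          # past experiences reused for a learning update
```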
no code implementations • 25 Apr 2016 • Maria Torres Vega, Decebal Constantin Mocanu, Antonio Liotta
Among the various means to evaluate the quality of video streams, No-Reference (NR) methods have low computational cost and may be executed on thin clients.
no code implementations • 20 Apr 2016 • Decebal Constantin Mocanu, Elena Mocanu, Phuong H. Nguyen, Madeleine Gibescu, Antonio Liotta
Thirdly, we show that, for a fixed number of weights, our proposed sparse models (which by design have a higher number of hidden neurons) achieve better generative capabilities than standard fully connected RBMs and GRBMs (which by design have a smaller number of hidden neurons), at no additional computational costs.
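A short worked example of the fixed-weight-budget argument, with illustrative layer sizes: at the weight count of a dense RBM with 784 visible and 100 hidden units, a sparse RBM that keeps only 10% of the possible connections can afford roughly ten times as many hidden units.

```python
n_visible = 784
budget = n_visible * 100       # weight count of a dense RBM with 100 hidden units

density = 0.10                 # fraction of connections kept in the sparse RBM
n_hidden_sparse = round(budget / (density * n_visible))
print(n_hidden_sparse)         # -> 1000 hidden units at the same weight budget
```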
no code implementations • 20 Apr 2016 • Decebal Constantin Mocanu, Haitham Bou Ammar, Luis Puig, Eric Eaton, Antonio Liotta
Estimation, recognition, and near-future prediction of 3D trajectories from their two-dimensional projections available from a single camera source is an exceptionally difficult problem, due to uncertainty in the trajectories and environment, the high dimensionality of trajectory states, the lack of sufficient labeled data, and so on.