Search Results for author: Nicolas Fourrier

Found 7 papers, 3 papers with code

Adaptive Dependency Learning Graph Neural Networks

1 code implementation • 6 Dec 2023 • Abishek Sriramulu, Nicolas Fourrier, Christoph Bergmeir

In this paper, we propose a hybrid approach that combines neural networks with statistical structure-learning models to self-learn the dependencies and construct a dynamically changing dependency graph from multivariate data, aiming to enable the use of GNNs for multivariate forecasting even when a well-defined graph does not exist.

Time Series
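
The entry above learns the graph itself rather than assuming one. As a rough sketch of that general idea (not the authors' model): estimate a sparse adjacency matrix from correlations in the multivariate series, then let each series aggregate its neighbours' recent history through one graph-convolution step before forecasting. The function names, the correlation-threshold heuristic, and the toy data below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_dependency_graph(X, threshold=0.3):
    """Estimate a data-driven adjacency matrix from a multivariate series.

    X has shape (timesteps, num_series). An absolute-correlation threshold
    stands in for the statistical structure-learning step (illustrative only).
    """
    corr = np.corrcoef(X, rowvar=False)
    A = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)                        # no self-loops
    return A

def graph_conv_forecast(X, A, lag=4):
    """One graph-convolution step over the last `lag` observations.

    Each series aggregates its neighbours' recent history through the
    learned graph; a (random, untrained) weight vector then maps the lag
    window to a one-step-ahead forecast per series.
    """
    deg = A.sum(axis=1, keepdims=True) + 1e-8
    A_norm = A / deg                                # row-normalised adjacency
    window = X[-lag:].T                             # (num_series, lag)
    mixed = A_norm @ window                         # neighbourhood aggregation
    W = rng.normal(size=(lag, 1))                   # placeholder weights
    return (mixed @ W).ravel()                      # one forecast per series

# Toy usage: three coupled random-walk series.
X = np.cumsum(rng.normal(size=(200, 3)), axis=0)
A = learn_dependency_graph(X)
print(A)
print(graph_conv_forecast(X, A))
```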

Learning to Continually Learn Rapidly from Few and Noisy Data

1 code implementation • 6 Mar 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Neural networks suffer from catastrophic forgetting and are unable to sequentially learn new tasks without guaranteed stationarity in the data distribution.

Continual Learning • Meta-Learning

Highway-Connection Classifier Networks for Plastic yet Stable Continual Learning

no code implementations • 1 Jan 2021 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Catastrophic forgetting occurs when a neural network is trained sequentially on multiple tasks: its weights are continuously modified and, as a result, the network loses its ability to solve earlier tasks.

Continual Learning
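
Both continual-learning entries above start from the same failure mode: sequentially training on a new task overwrites the weights that solved the old one. Below is a minimal sketch that reproduces the effect with a plain PyTorch regressor; the two toy "tasks" and all hyperparameters are made up for illustration and are unrelated to the papers' own benchmarks.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two simple regression "tasks" with conflicting targets for the same inputs.
x = torch.linspace(-1, 1, 64).unsqueeze(1)
task_a_y = torch.sin(3 * x)        # task A: sine target
task_b_y = -torch.sin(3 * x)       # task B: the opposite mapping

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def train(y, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

train(task_a_y)
loss_a_before = loss_fn(model(x), task_a_y).item()
train(task_b_y)                    # sequential training on task B
loss_a_after = loss_fn(model(x), task_a_y).item()

# Task A error jumps after training on task B: its weights were overwritten.
print(f"task A loss before B: {loss_a_before:.4f}, after B: {loss_a_after:.4f}")
```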

MTL2L: A Context Aware Neural Optimiser

1 code implementation • 18 Jul 2020 • Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen

Learning to learn (L2L) trains a meta-learner to assist the learning of a task-specific base learner.

Multi-Task Learning
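
The abstract above describes the L2L setting: a meta-learner proposes the updates that train a task-specific base learner. Below is a hedged sketch of that setting only; it uses a generic coordinate-wise learned optimiser, not the MTL2L architecture itself, and omits the outer meta-training loop that would fit the optimiser across many problems.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class LearnedOptimizer(nn.Module):
    """A tiny meta-learner: maps each parameter's gradient to its update.

    Generic L2L sketch (one MLP applied coordinate-wise); MTL2L additionally
    conditions the optimiser on task context, which is not modelled here.
    """
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, grad):
        flat = grad.reshape(-1, 1)                  # one coordinate per row
        return self.net(flat).reshape(grad.shape)   # proposed update, same shape

# Base learner: a linear regression problem.
w_true = torch.tensor([2.0, -1.0])
xs = torch.randn(128, 2)
ys = xs @ w_true

meta = LearnedOptimizer()
w = torch.zeros(2, requires_grad=True)

# Inner loop: the meta-learner writes the base learner's updates.
# (A real L2L setup would also meta-train `meta` across many such problems.)
for _ in range(5):
    loss = ((xs @ w - ys) ** 2).mean()
    grad, = torch.autograd.grad(loss, w)
    with torch.no_grad():
        w -= 0.1 * meta(grad)                       # update proposed by the meta-learner
    print(float(loss))
```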

EINS: Long Short-Term Memory with Extrapolated Input Network Simplification

no code implementations • 25 Sep 2019 • Nicholas I-Hsien Kuo, Mehrtash T. Harandi, Nicolas Fourrier, Gabriela Ferraro, Christian Walder, Hanna Suominen

This paper contrasts the two canonical recurrent neural networks (RNNs), the long short-term memory (LSTM) and the gated recurrent unit (GRU), to propose our novel lightweight RNN, Extrapolated Input for Network Simplification (EINS).

Image Generation • Imputation • +2
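
The EINS abstract starts from the contrast between the two canonical gated cells. No EINS code is listed here, but the size gap that motivates a lighter-weight cell is easy to see with the standard PyTorch modules; the sizes below are arbitrary example values.

```python
import torch.nn as nn

def param_count(module):
    return sum(p.numel() for p in module.parameters())

input_size, hidden_size = 32, 64

# The two canonical gated RNNs the paper contrasts. The LSTM keeps a separate
# cell state and uses four weight blocks per layer; the GRU merges state and
# output and uses three, so it is roughly 25% smaller.
lstm = nn.LSTM(input_size, hidden_size)
gru = nn.GRU(input_size, hidden_size)

print("LSTM parameters:", param_count(lstm))
print("GRU parameters:", param_count(gru))
```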

DecayNet: A Study on the Cell States of Long Short Term Memories

no code implementations • 27 Sep 2018 • Nicholas I.H. Kuo, Mehrtash T. Harandi, Hanna Suominen, Nicolas Fourrier, Christian Walder, Gabriela Ferraro

It is unclear whether the extensively applied long short-term memory (LSTM) is an optimised architecture for recurrent neural networks.
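
The study above concerns the LSTM's cell state. As an illustrative sketch only (not the DecayNet analysis), one can run a standard PyTorch LSTM step by step and record the cell-state magnitude it accumulates over a sequence:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Track how an (untrained) LSTM's cell state evolves over a long input,
# the kind of quantity a study of LSTM cell states would inspect.
lstm = nn.LSTM(input_size=8, hidden_size=16)
x = torch.randn(100, 1, 8)                    # (seq_len, batch, input_size)

h = torch.zeros(1, 1, 16)
c = torch.zeros(1, 1, 16)
norms = []
for t in range(x.size(0)):
    _, (h, c) = lstm(x[t:t + 1], (h, c))
    norms.append(c.norm().item())             # cell-state magnitude at step t

print(norms[:5], "...", norms[-5:])
```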
