Search Results for author: Iuliia Pliushch

Found 7 papers, 4 papers with code

Representation Learning in a Decomposed Encoder Design for Bio-inspired Hebbian Learning

no code implementations · 22 Nov 2023 · Achref Jaziri, Sina Ditzel, Iuliia Pliushch, Visvanathan Ramesh

Our findings indicate that this form of inductive bias can be beneficial in closing the gap between models with local plasticity rules and backpropagation models as well as learning more robust representations in general.

Inductive Bias · Representation Learning
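The local plasticity rules contrasted with backpropagation above can be illustrated with a generic Hebbian-style weight update. This is a minimal sketch of one well-known local rule (Oja's rule), not the decomposed-encoder method from the paper; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))  # weights: 4 output units, 8 inputs
x = rng.normal(size=8)                  # presynaptic activity
y = W @ x                               # postsynaptic activity
eta = 0.01                              # learning rate

# Oja's rule: a Hebbian outer-product term plus a decay term that keeps
# the weight norms bounded. The update uses only locally available
# quantities (x, y, W), with no backpropagated error signal.
W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
```

Because the update depends only on pre- and postsynaptic activity at each layer, it can be applied independently per encoder module, which is what makes such rules "local" in contrast to end-to-end backpropagation.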

A Procedural World Generation Framework for Systematic Evaluation of Continual Learning

2 code implementations · 4 Jun 2021 · Timm Hess, Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh

Several families of continual learning techniques have been proposed to alleviate catastrophic interference in deep neural network training on non-stationary data.

Continual Learning

When Deep Classifiers Agree: Analyzing Correlations between Learning Order and Image Statistics

1 code implementation · 19 May 2021 · Iuliia Pliushch, Martin Mundt, Nicolas Lupp, Visvanathan Ramesh

Although a plethora of architectural variants for deep classification has been introduced over time, recent works have found empirical evidence of similarities in their training process.

A Holistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning

no code implementations · 3 Sep 2020 · Martin Mundt, Yongwon Hong, Iuliia Pliushch, Visvanathan Ramesh

In this work we critically survey the literature and argue that notable lessons from open set recognition (identifying unknown examples outside of the observed set) and the adjacent field of active learning (querying data to maximize the expected performance gain) are frequently overlooked in the deep learning era.

Active Learning · Continual Learning · +1

Open Set Recognition Through Deep Neural Network Uncertainty: Does Out-of-Distribution Detection Require Generative Classifiers?

no code implementations · 26 Aug 2019 · Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Visvanathan Ramesh

We present an analysis of predictive-uncertainty-based out-of-distribution detection for different approaches to estimating various models' epistemic uncertainty, and contrast it with open set recognition based on extreme value theory.

Open Set Learning · Out-of-Distribution Detection
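The predictive-uncertainty-based detection described above can be sketched with one common baseline: scoring inputs by the entropy of the softmax output and flagging high-entropy predictions as out-of-distribution. This is a generic illustration of the idea, not the specific estimators compared in the paper; the logits below are made up.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def predictive_entropy(logits):
    # Entropy of the predictive distribution; higher means more uncertain.
    p = softmax(np.asarray(logits, dtype=float))
    return float(-(p * np.log(p + 1e-12)).sum())

# A confidently classified (in-distribution-like) input yields low entropy,
# while a near-uniform prediction yields high entropy.
score_in = predictive_entropy([8.0, 0.5, 0.2])
score_out = predictive_entropy([1.0, 0.9, 1.1])
```

In practice a threshold on such a score separates inputs treated as known from those rejected as unknown; extreme-value-theory-based open set recognition instead fits a distribution to the tails of class activations.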

Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition

3 code implementations · 28 May 2019 · Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh

Modern deep neural networks are well known to be brittle in the face of unknown data instances, and recognizing such instances remains a challenge.

Audio Classification · Bayesian Inference · +3
