Search Results for author: Visvanathan Ramesh

Found 14 papers, 6 papers with code

Representation Learning in a Decomposed Encoder Design for Bio-inspired Hebbian Learning

no code implementations • 22 Nov 2023 • Achref Jaziri, Sina Ditzel, Iuliia Pliushch, Visvanathan Ramesh

Our findings indicate that this form of inductive bias can be beneficial in closing the gap between models with local plasticity rules and backpropagation models, as well as in learning more robust representations in general.

Inductive Bias · Representation Learning
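
As a rough illustration of the kind of local plasticity rule this paper contrasts with backpropagation, here is a minimal sketch of Oja's Hebbian rule (an illustrative example, not the paper's actual learning scheme):

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One step of Oja's rule: a purely local Hebbian update whose
    decay term keeps the weight norm bounded."""
    y = w @ x                        # post-synaptic response
    return w + lr * y * (x - y * w)  # Hebbian growth minus normalization

rng = np.random.default_rng(0)
w = rng.normal(size=4) * 0.1
for _ in range(500):
    w = oja_update(w, rng.normal(size=4))
```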

A Procedural World Generation Framework for Systematic Evaluation of Continual Learning

2 code implementations • 4 Jun 2021 • Timm Hess, Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh

Several families of continual learning techniques have been proposed to alleviate catastrophic interference in deep neural network training on non-stationary data.

Continual Learning
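
For context on one such family of techniques (rehearsal, not the evaluation framework this paper proposes), a minimal reservoir-sampled replay buffer might look like:

```python
import random

class ReplayBuffer:
    """Reservoir-sampled rehearsal memory, a common remedy for
    catastrophic interference (illustrative sketch only)."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Algorithm R: every example seen so far has equal probability
        # of being retained in the buffer.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```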

When Deep Classifiers Agree: Analyzing Correlations between Learning Order and Image Statistics

1 code implementation • 19 May 2021 • Iuliia Pliushch, Martin Mundt, Nicolas Lupp, Visvanathan Ramesh

Although a plethora of architectural variants for deep classification has been introduced over time, recent works have found empirical evidence of similarities in their training process.
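
One hypothetical way to quantify the learning order that such analyses compare across classifiers (an illustrative metric, not necessarily the paper's exact definition):

```python
import numpy as np

def learning_order(correct):
    """First epoch at which each example is classified correctly.

    correct : (n_epochs, n_examples) boolean matrix of per-epoch
    correctness on a fixed dataset; returns -1 for never-learned
    examples. Agreement between two models can then be scored as the
    rank correlation of their learning-order vectors.
    """
    n_epochs, n_examples = correct.shape
    order = np.full(n_examples, -1)
    for i in range(n_examples):
        hits = np.flatnonzero(correct[:, i])
        if hits.size:
            order[i] = int(hits[0])
    return order
```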

A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning

no code implementations • 3 Sep 2020 • Martin Mundt, Yongwon Hong, Iuliia Pliushch, Visvanathan Ramesh

In this work we critically survey the literature and argue that notable lessons from open set recognition (identifying unknown examples outside of the observed set) and the adjacent field of active learning (querying data to maximize the expected performance gain) are frequently overlooked in the deep learning era.

Active Learning · Continual Learning · +1

Fundamental Issues Regarding Uncertainties in Artificial Neural Networks

no code implementations • 25 Feb 2020 • Neil A. Thacker, Carole J. Twining, Paul D. Tar, Scott Notley, Visvanathan Ramesh

Artificial Neural Networks (ANNs) implement a specific form of multi-variate extrapolation and will generate an output for any input pattern, even when there is no similar training pattern.
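
A toy demonstration of that extrapolation behavior (the weights and input here are random stand-ins; any trained softmax classifier behaves analogously on inputs far from its training data):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))      # stand-in for trained weights
x = rng.normal(size=784) * 100.0    # input unlike any training pattern
print(softmax(W @ x).max())         # still yields a confident probability
```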

Open Set Recognition Through Deep Neural Network Uncertainty: Does Out-of-Distribution Detection Require Generative Classifiers?

no code implementations • 26 Aug 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Visvanathan Ramesh

We present an analysis of predictive-uncertainty-based out-of-distribution detection across different approaches to estimating various models' epistemic uncertainty, and contrast it with open set recognition based on extreme value theory.

Open Set Learning · Out-of-Distribution Detection
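
A minimal sketch of the predictive-uncertainty side of that comparison, assuming Monte Carlo softmax samples (e.g. from dropout at test time); the threshold value is a hypothetical choice:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution.

    probs : (n_samples, n_classes) softmax outputs from stochastic
    forward passes. Higher entropy suggests the input is
    out-of-distribution.
    """
    mean = probs.mean(axis=0)
    return float(-np.sum(mean * np.log(mean + 1e-12)))

def flag_ood(probs, threshold=1.5):
    return predictive_entropy(probs) > threshold
```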

Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition

3 code implementations • 28 May 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh

Modern deep neural networks are well known to be brittle in the face of unknown data instances, and recognition of the latter remains a challenge.

Audio Classification · Bayesian Inference · +3
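
To illustrate the generative-replay half of the title, here is a toy Gaussian stand-in for a generative model that replays synthetic past-task data (an assumption-laden sketch, not the paper's actual probabilistic model):

```python
import numpy as np

class GaussianReplayGenerator:
    """Stores per-class mean/std of past data and replays synthetic
    samples when a new task arrives (toy generative replay)."""

    def __init__(self):
        self.stats = {}  # class label -> (mean, std)

    def fit(self, x, y):
        for c in np.unique(y):
            xc = x[y == c]
            self.stats[c] = (xc.mean(axis=0), xc.std(axis=0) + 1e-6)

    def sample(self, n_per_class):
        xs, ys = [], []
        for c, (mu, sd) in self.stats.items():
            xs.append(np.random.normal(mu, sd, size=(n_per_class, mu.size)))
            ys.append(np.full(n_per_class, c))
        return np.concatenate(xs), np.concatenate(ys)
```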

Building effective deep neural networks one feature at a time

no code implementations • ICLR 2018 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh

Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of large numbers of features.

Feature Importance

Building effective deep neural network architectures one feature at a time

no code implementations • 18 May 2017 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh

Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of large numbers of features.

Feature Importance

Model-driven Simulations for Deep Convolutional Neural Networks

no code implementations • 31 May 2016 • V. S. R. Veeravasarapu, Constantin Rothkopf, Visvanathan Ramesh

Training deep convolutional neural networks (CNNs) in simulated virtual environments is a currently active practice for reducing the real-data hunger of deep CNN models, especially in application domains where large-scale real data and/or ground-truth acquisition is difficult or laborious.

Domain Adaptation
