Search Results for author: Martin Mundt

Found 23 papers, 15 papers with code

BOWLL: A Deceptively Simple Open World Lifelong Learner

1 code implementation • 7 Feb 2024 • Roshni Kamath, Rupert Mitchell, Subarnaduti Paul, Kristian Kersting, Martin Mundt

The quest to improve scalar performance numbers on predetermined benchmarks seems to be deeply ingrained in deep learning.

Novel Concepts

Self-Expanding Neural Networks

1 code implementation • 10 Jul 2023 • Rupert Mitchell, Robin Menzenbach, Kristian Kersting, Martin Mundt

The results of training a neural network depend heavily on the chosen architecture, and even a modification of only its size, however small, typically requires restarting the training process.
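
The expansion this paper targets can be made concrete with a generic recipe that is not the paper's own method (which additionally scores when and where to grow): a hidden layer can be widened mid-training without changing the function the network computes by zero-initializing the new units' outgoing weights. A minimal PyTorch sketch with illustrative names:

```python
import torch
import torch.nn as nn

def widen_hidden_layer(fc1: nn.Linear, fc2: nn.Linear, extra: int):
    """Add `extra` hidden units between two linear layers without
    changing the function the network computes."""
    # Wider first layer: copy old rows, give new rows small random weights.
    new_fc1 = nn.Linear(fc1.in_features, fc1.out_features + extra)
    with torch.no_grad():
        new_fc1.weight[: fc1.out_features] = fc1.weight
        new_fc1.bias[: fc1.out_features] = fc1.bias
        nn.init.normal_(new_fc1.weight[fc1.out_features :], std=1e-3)
        new_fc1.bias[fc1.out_features :] = 0.0

    # Wider second layer: zero-initialize the columns reading the new
    # units, so their contribution to the output starts at exactly zero.
    new_fc2 = nn.Linear(fc2.in_features + extra, fc2.out_features)
    with torch.no_grad():
        new_fc2.weight[:, : fc2.in_features] = fc2.weight
        new_fc2.weight[:, fc2.in_features :] = 0.0
        new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2
```

Because the new outgoing columns start at zero, the loss is unchanged at the moment of expansion and training can continue without a restart.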

Masked Autoencoders are Efficient Continual Federated Learners

1 code implementation • 6 Jun 2023 • Subarnaduti Paul, Lars-Joel Frey, Roshni Kamath, Kristian Kersting, Martin Mundt

In part, federated learning lifts this assumption, as it sets out to solve the real-world challenge of collaboratively learning a shared model from data distributed across clients (a minimal sketch of the standard aggregation step appears below).

Continual Learning • Federated Learning +1
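
For readers unfamiliar with the federated setting the snippet refers to, here is a minimal sketch of the standard FedAvg aggregation step: averaging client weights, weighted by local dataset size. This is the generic protocol, not the paper's masked-autoencoder approach; names are illustrative.

```python
import torch

def fedavg(client_states, client_sizes):
    """FedAvg: average client state_dicts, weighted by local dataset size.

    client_states: list of state_dicts with identical keys and shapes.
    client_sizes:  number of local training examples per client.
    """
    total = float(sum(client_sizes))
    return {
        key: sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, client_sizes)
        )
        for key in client_states[0]
    }

# Each communication round: broadcast the global weights, let every
# client train locally, then aggregate with fedavg(states, sizes).
```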

Probabilistic Circuits That Know What They Don't Know

1 code implementation • 13 Feb 2023 • Fabrizio Ventola, Steven Braun, Zhongjie Yu, Martin Mundt, Kristian Kersting

In contrast to neural networks, they are often assumed to be well-calibrated and robust to out-of-distribution (OOD) data.

Uncertainty Quantification

FEATHERS: Federated Architecture and Hyperparameter Search

no code implementations • 24 Jun 2022 • Jonas Seng, Pooja Prasad, Martin Mundt, Devendra Singh Dhami, Kristian Kersting

Deep neural architectures have a profound impact on the performance achieved in many of today's AI tasks, yet their design still relies heavily on human prior knowledge and experience.

BIG-bench Machine Learning • Federated Learning +3

A Procedural World Generation Framework for Systematic Evaluation of Continual Learning

2 code implementations • 4 Jun 2021 • Timm Hess, Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh

Several families of continual learning techniques have been proposed to alleviate catastrophic interference in deep neural network training on non-stationary data.

Continual Learning

When Deep Classifiers Agree: Analyzing Correlations between Learning Order and Image Statistics

1 code implementation • 19 May 2021 • Iuliia Pliushch, Martin Mundt, Nicolas Lupp, Visvanathan Ramesh

Although a plethora of architectural variants for deep classification has been introduced over time, recent works have found empirical evidence of similarities in their training processes.

Adaptive Rational Activations to Boost Deep Reinforcement Learning

4 code implementations • 18 Feb 2021 • Quentin Delfosse, Patrick Schramowski, Martin Mundt, Alejandro Molina, Kristian Kersting

Recent insights from biology show that intelligence not only emerges from the connections between neurons, but that individual neurons shoulder more computational responsibility than previously anticipated (a minimal sketch of a learnable rational activation appears below).

Ranked #3 on Atari Games on Atari 2600 Skiing (using extra training data)

Atari Games • General Reinforcement Learning +3
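
The rational activations this entry refers to are learnable ratios of polynomials. Below is a minimal sketch of one common "safe" parameterization, R(x) = P(x) / (1 + |Σ_j b_j x^j|), whose denominator cannot vanish; the degrees and initialization here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational function R(x) = P(x) / Q(x) with a 'safe'
    denominator Q(x) = 1 + |b1*x + ... + bm*x^m| that cannot vanish."""

    def __init__(self, num_degree: int = 5, den_degree: int = 4):
        super().__init__()
        # Coefficients are ordinary parameters, trained jointly with
        # the rest of the network by backpropagation.
        self.a = nn.Parameter(0.1 * torch.randn(num_degree + 1))
        self.b = nn.Parameter(0.1 * torch.randn(den_degree))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        numerator = sum(a_i * x**i for i, a_i in enumerate(self.a))
        denominator = 1.0 + torch.abs(
            sum(b_j * x ** (j + 1) for j, b_j in enumerate(self.b))
        )
        return numerator / denominator
```

Used as a drop-in replacement for a fixed nonlinearity such as ReLU, the shape of the activation itself adapts during training.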

A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning

no code implementations • 3 Sep 2020 • Martin Mundt, Yongwon Hong, Iuliia Pliushch, Visvanathan Ramesh

In this work we critically survey the literature and argue that notable lessons from open set recognition (identifying unknown examples outside of the observed set) and the adjacent field of active learning (querying data to maximize the expected performance gain) are frequently overlooked in the deep learning era.

Active Learning • Continual Learning +1

Open Set Recognition Through Deep Neural Network Uncertainty: Does Out-of-Distribution Detection Require Generative Classifiers?

no code implementations • 26 Aug 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Visvanathan Ramesh

We present an analysis of predictive-uncertainty-based out-of-distribution detection for different approaches to estimating models' epistemic uncertainty, and contrast it with open set recognition based on extreme value theory (a minimal entropy-scoring sketch appears below).

Open Set Learning • Out-of-Distribution Detection
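
As a concrete example of the uncertainty side of this comparison, here is a minimal sketch that scores inputs by the predictive entropy of Monte Carlo dropout samples. This is a generic recipe, not necessarily the exact estimator analyzed in the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predictive_entropy(model, x, n_samples: int = 20):
    """OOD score: entropy of the mean softmax over stochastic forward
    passes with dropout kept active (Monte Carlo dropout)."""
    model.train()  # leave dropout on so each pass is a different sample
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)
    # Higher entropy = more uncertain = more likely out-of-distribution.
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```

A threshold fit on in-distribution validation scores then separates inliers from flagged inputs.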

Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition

3 code implementations • 28 May 2019 • Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh

Modern deep neural networks are well known to be brittle in the face of unknown data instances, and recognizing the latter remains a challenge (a minimal generative-replay sketch appears below).

Audio Classification • Bayesian Inference +3
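
The generative replay named in the title can be sketched as interleaving generated samples of past tasks with new data during training. Below is a minimal, generic version; the generator interface (`generator.sample`) and the hard-labeling step are illustrative placeholders, not the paper's unified probabilistic model.

```python
import torch

def train_task_with_replay(model, old_model, generator, loader, opt, loss_fn):
    """Train on a new task while replaying generated samples of past
    tasks, mitigating catastrophic forgetting."""
    for x_new, y_new in loader:
        if generator is not None:
            # Hypothetical generator API: draw pseudo-samples of past
            # data and label them with the frozen previous classifier.
            x_old = generator.sample(len(x_new))
            y_old = old_model(x_old).argmax(dim=-1)
            x = torch.cat([x_new, x_old])
            y = torch.cat([y_new, y_old])
        else:  # first task: nothing to replay yet
            x, y = x_new, y_new

        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```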

Building effective deep neural networks one feature at a time

no code implementations • ICLR 2018 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh

Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of large numbers of features.

Feature Importance

Building effective deep neural network architectures one feature at a time

no code implementations • 18 May 2017 • Martin Mundt, Tobias Weis, Kishore Konda, Visvanathan Ramesh

Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of large numbers of features.

Feature Importance
