
no code implementations • Findings (EMNLP) 2021 • Denis Lukovnikov, Sina Däubener, Asja Fischer

While neural networks are ubiquitous in state-of-the-art semantic parsers, it has been shown that most standard models suffer from dramatic performance losses when faced with compositionally out-of-distribution (OOD) data.

1 code implementation • 26 Oct 2022 • Jonas Ricker, Simon Damm, Thorsten Holz, Asja Fischer

Diffusion models (DMs) have recently emerged as a promising method in image synthesis.

1 code implementation • 21 Jun 2022 • Mike Laszkiewicz, Johannes Lederer, Asja Fischer

Learning the tail behavior of a distribution is a notoriously difficult problem.

no code implementations • 22 Apr 2022 • Sina Däubener, Asja Fischer

Stochastic neural networks (SNNs) are random functions, and predictions are obtained by averaging over multiple realizations of this random function.

no code implementations • 14 Dec 2021 • Matias Pizarro, Dorothea Kolossa, Asja Fischer

We perform an empirical analysis of hybrid ASR models trained on data pre-processed in such a way.

1 code implementation • ICML Workshop INNF 2021 • Mike Laszkiewicz, Johannes Lederer, Asja Fischer

Normalizing flows, which learn a distribution by transforming the data to samples from a Gaussian base distribution, have proven powerful density approximations.
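A minimal sketch of the change-of-variables principle behind normalizing flows, using a single affine transform as the flow (an assumption for illustration; real flows stack many learned invertible layers):

```python
import numpy as np

def affine_flow_logpdf(x, scale, shift):
    """Log-density of x under the flow z = (x - shift) / scale with a
    standard-normal base distribution, via the change of variables
    log p(x) = log N(z; 0, 1) + log |dz/dx|."""
    z = (x - shift) / scale
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))
    log_det = -np.log(np.abs(scale))  # dz/dx = 1/scale
    return log_base + log_det

# Sanity check: this flow with scale=sigma, shift=mu recovers N(mu, sigma^2).
x = 1.3
mu, sigma = 0.5, 2.0
lp = affine_flow_logpdf(x, scale=sigma, shift=mu)
lp_ref = -0.5 * (((x - mu) / sigma)**2 + np.log(2 * np.pi)) - np.log(sigma)
print(np.isclose(lp, lp_ref))  # → True
```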

no code implementations • AABI Symposium 2021 • Joachim Sicking, Maram Akila, Maximilian Pintz, Tim Wirtz, Asja Fischer, Stefan Wrobel

One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice.

no code implementations • 1 Jan 2021 • Denis Lukovnikov, Asja Fischer

Relational Graph Neural Networks (GNN) are a class of GNN that are capable of handling multi-relational graphs.

no code implementations • 1 Jan 2021 • Arne Peter Raulf, Ben Luis Hack, Sina Däubener, Axel Mosig, Asja Fischer

With the extensive use of neural networks in safety-critical domains, the need for understandable explanations of their predictions is rising.

1 code implementation • 23 Dec 2020 • Joachim Sicking, Maram Akila, Maximilian Pintz, Tim Wirtz, Asja Fischer, Stefan Wrobel

Despite its importance for safe machine learning, uncertainty quantification for neural networks is far from being solved.

no code implementations • AABI Symposium 2021 • Sina Däubener, Joel Frank, Thorsten Holz, Asja Fischer

In this paper we propose to efficiently attack Bayesian neural networks with adversarial examples calculated for a deterministic network with parameters given by the mean of the posterior distribution.
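A hedged sketch of the idea: craft an adversarial example on one deterministic network (here a hypothetical logistic-regression "network" whose weights stand in for the posterior mean) via the fast gradient sign method, rather than attacking the stochastic model directly:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast gradient sign attack on a logistic-regression 'network' with
    parameters w, b (standing in for posterior-mean parameters).
    Loss is binary cross-entropy; its gradient w.r.t. x is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.1
x, y = np.array([0.5, 0.2]), 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)
# The attack should lower the predicted probability of the true class.
p_clean = 1 / (1 + np.exp(-(x @ w + b)))
p_adv = 1 / (1 + np.exp(-(x_adv @ w + b)))
print(p_adv < p_clean)  # → True
```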

no code implementations • 28 Oct 2020 • Simon Damm, Dennis Forster, Dmytro Velychko, Zhenwen Dai, Asja Fischer, Jörg Lücke

Here we show that for standard (i.e., Gaussian) VAEs the ELBO converges to a value given by the sum of three entropies: the (negative) entropy of the prior distribution, the expected (negative) entropy of the observable distribution, and the average entropy of the variational distributions (the latter is already part of the ELBO).
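In notation of my choosing (an assumption; the paper's symbols may differ), with prior $p(z)$, decoder $p_\theta(x \mid z)$, encoder $q_\phi(z \mid x)$, and $N$ data points, the claimed limit reads:

```latex
\mathcal{L} \;\longrightarrow\;
-\,\mathcal{H}\!\big[p(z)\big]
\;-\; \mathbb{E}\big[\mathcal{H}\!\big[p_\theta(x \mid z)\big]\big]
\;+\; \frac{1}{N}\sum_{n=1}^{N} \mathcal{H}\!\big[q_\phi(z \mid x_n)\big]
```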

no code implementations • 7 Aug 2020 • Sina Däubener, Asja Fischer

Uncertainty quantification in neural networks has gained a lot of attention in recent years.

no code implementations • 19 Jul 2020 • Denis Lukovnikov, Jens Lehmann, Asja Fischer

Many popular variants of graph neural networks (GNNs) that are capable of handling multi-relational graphs may suffer from vanishing gradients.

no code implementations • 10 Jul 2020 • Joachim Sicking, Maram Akila, Tim Wirtz, Sebastian Houben, Asja Fischer

Monte Carlo (MC) dropout is one of the state-of-the-art approaches for uncertainty estimation in neural networks (NNs).
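A minimal sketch of MC dropout, assuming a hypothetical two-layer network: dropout stays active at test time, several stochastic forward passes are averaged, and their spread serves as the uncertainty estimate:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p_drop=0.5, n_samples=100, seed=0):
    """Monte Carlo dropout: keep dropout active at test time, run several
    stochastic forward passes, and report their mean and spread."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)               # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop      # random dropout mask
        h = h * mask / (1.0 - p_drop)             # inverted dropout scaling
        preds.append(h @ W2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction, uncertainty

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))
mean, std = mc_dropout_predict(np.ones(3), W1, W2)
print(float(std[0]) > 0)  # stochastic passes disagree → nonzero uncertainty
```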

no code implementations • 26 Jun 2020 • Kai Brügge, Asja Fischer, Christian Igel

We propose a modified Metropolis transition operator that behaves almost always identically to the standard Metropolis operator and prove that it ensures irreducibility and convergence to the limiting distribution in the multivariate binary case with fixed-order updates.
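For context, a sketch of the *standard* Metropolis operator this work modifies (the paper's modification itself is not reproduced): a fixed-order sweep over a binary vector, proposing single-bit flips and accepting with probability min(1, p(x')/p(x)). The target distribution here is an assumed toy example:

```python
import numpy as np

def metropolis_sweep(x, log_p, rng):
    """One fixed-order sweep of the standard Metropolis operator on a
    binary vector: flip each bit in turn, accept w.p. min(1, p(x')/p(x))."""
    for i in range(len(x)):
        x_prop = x.copy()
        x_prop[i] = 1 - x_prop[i]
        if np.log(rng.random()) < log_p(x_prop) - log_p(x):
            x = x_prop
    return x

# Toy target (assumed): independent bits, each on with probability 0.8.
def log_p(x):
    return float(np.sum(x * np.log(0.8) + (1 - x) * np.log(0.2)))

rng = np.random.default_rng(0)
x = np.zeros(4, dtype=int)
samples = []
for _ in range(2000):
    x = metropolis_sweep(x, log_p, rng)
    samples.append(x.copy())
freq = np.mean(samples, axis=0)  # per-bit frequency should approach 0.8
```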

2 code implementations • 23 Jun 2020 • Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Mikhail Galkin, Sahand Sharifzadeh, Asja Fischer, Volker Tresp, Jens Lehmann

The heterogeneity in recently published knowledge graph embedding models' implementations, training, and evaluation has made fair and thorough comparisons difficult.
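To illustrate what the benchmarked knowledge graph embedding models compute, a sketch of one common scoring function, TransE (toy 2-d embeddings and entity names are assumptions for illustration):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score for a (head, relation, tail) triple:
    the negative L2 distance ||h + r - t||; higher means more plausible."""
    return -np.linalg.norm(h + r - t)

# Toy embeddings (assumed), chosen so berlin + capital_of = germany exactly.
berlin = np.array([1.0, 0.0])
capital_of = np.array([0.0, 1.0])
germany = np.array([1.0, 1.0])
paris = np.array([3.0, 2.0])

true_score = transe_score(berlin, capital_of, germany)   # exact match: 0.0
false_score = transe_score(paris, capital_of, germany)   # farther away
print(true_score > false_score)  # → True
```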

1 code implementation • 24 May 2020 • Sina Däubener, Lea Schönherr, Asja Fischer, Dorothea Kolossa

The neural networks for uncertainty quantification simultaneously diminish the vulnerability to the attack, which is reflected in a lower recognition accuracy of the malicious target text in comparison to a standard hybrid ASR system.

no code implementations • 15 May 2020 • Mohammad Asif Khan, Fabien Cardinaux, Stefan Uhlich, Marc Ferras, Asja Fischer

This procedure bears the problem that the generated magnitude spectrogram may not be consistent, which is required for finding a phase such that the full spectrogram has a natural-sounding speech waveform.

1 code implementation • 1 May 2020 • Mike Laszkiewicz, Asja Fischer, Johannes Lederer

Many Machine Learning algorithms are formulated as regularized optimization problems, but their performance hinges on a regularization parameter that needs to be calibrated to each application at hand.
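The paper's own calibration method is not reproduced here; as a hedged sketch of the underlying problem, ridge regression whose solution depends on a regularization parameter lambda, calibrated by the common holdout baseline (all data below is simulated):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: solve (X^T X + lam * I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=80)
X_tr, y_tr, X_val, y_val = X[:60], y[:60], X[60:], y[60:]

# Holdout calibration: pick the lambda with the lowest validation error.
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
errs = [np.mean((X_val @ ridge_fit(X_tr, y_tr, l) - y_val)**2) for l in lams]
best = lams[int(np.argmin(errs))]
```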

1 code implementation • ICML 2020 • Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, Thorsten Holz

Based on this analysis, we demonstrate how the frequency representation can be used to identify deep fake images in an automated way, surpassing state-of-the-art methods.

no code implementations • 25 Feb 2020 • Rostislav Nedelchev, Debanjan Chaudhuri, Jens Lehmann, Asja Fischer

Entity linking - connecting entity mentions in a natural language utterance to knowledge graph (KG) entities - is a crucial step for question answering over KGs.

no code implementations • 22 Jul 2019 • Nilesh Chakraborty, Denis Lukovnikov, Gaurav Maheshwari, Priyansh Trivedi, Jens Lehmann, Asja Fischer

Question answering has emerged as an intuitive way of querying structured data sources, and has seen significant advances over the years.

no code implementations • ICLR 2019 • Agustinus Kristiadi, Asja Fischer

Despite the huge success of deep neural networks (NNs), finding good mechanisms for quantifying their prediction uncertainty is still an open problem.

no code implementations • 4 Feb 2019 • Agustinus Kristiadi, Sina Däubener, Asja Fischer

Despite the huge success of deep neural networks (NNs), finding good mechanisms for quantifying their prediction uncertainty is still an open problem.

no code implementations • 13 Nov 2018 • Denis Lukovnikov, Nilesh Chakraborty, Jens Lehmann, Asja Fischer

Translating natural language to SQL queries for table-based question answering is a challenging problem and has received significant attention from the research community.

1 code implementation • 2 Nov 2018 • Gaurav Maheshwari, Priyansh Trivedi, Denis Lukovnikov, Nilesh Chakraborty, Asja Fischer, Jens Lehmann

In this paper, we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs.

1 code implementation • CONLL 2018 • Debanjan Chaudhuri, Agustinus Kristiadi, Jens Lehmann, Asja Fischer

Building systems that can communicate with humans is a core problem in Artificial Intelligence.

1 code implementation • ICLR 2019 • Stanisław Jastrzębski, Zachary Kenton, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey

When studying the SGD dynamics in relation to the sharpest directions in this initial phase, we find that the SGD step is large compared to the curvature and commonly fails to minimize the loss along the sharpest directions.

1 code implementation • 3 Feb 2018 • Agustinus Kristiadi, Mohammad Asif Khan, Denis Lukovnikov, Jens Lehmann, Asja Fischer

Most of the existing work on embedding (or latent feature) based knowledge graph analysis focuses mainly on the relations between entities.

no code implementations • ICLR 2018 • Stanisław Jastrzębski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey

In particular we find that the ratio of learning rate to batch size is a key determinant of SGD dynamics and of the width of the final minima, and that higher values of the ratio lead to wider minima and often better generalization.
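A small numeric check (on assumed toy per-example gradients) of the mechanism behind the learning-rate-to-batch-size ratio: the minibatch gradient's variance falls as 1/B, so the noise injected by an SGD step grows with the learning rate and shrinks with the batch size:

```python
import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(loc=1.0, scale=2.0, size=100000)  # toy per-example grads

def minibatch_grad_var(B, n_batches=5000):
    """Empirical variance of the mean gradient over minibatches of size B."""
    idx = rng.integers(0, len(grads), size=(n_batches, B))
    return grads[idx].mean(axis=1).var()

v8, v32 = minibatch_grad_var(8), minibatch_grad_var(32)
print(round(v8 / v32))  # variance drops ~4x when B grows 4x → 4
```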

2 code implementations • ICLR 2018 • Henning Petzka, Asja Fischer, Denis Lukovnicov

Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data.

2 code implementations • ICML 2017 • Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien

We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness.

no code implementations • 1 Feb 2016 • Björn Weghenkel, Asja Fischer, Laurenz Wiskott

We propose graph-based predictable feature analysis (GPFA), a new method for unsupervised learning of predictable features from high-dimensional time series, where high predictability is understood very generically as low variance in the distribution of the next data point given the previous ones.

no code implementations • 9 Oct 2015 • Yoshua Bengio, Asja Fischer

We show that Langevin MCMC inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similarly to back-propagation.

no code implementations • 6 Oct 2015 • Oswin Krause, Asja Fischer, Christian Igel

Compared to CD, it leads to a consistent estimate and may have a significantly lower bias.

no code implementations • 19 Sep 2015 • Yoshua Bengio, Thomas Mesnard, Asja Fischer, Saizheng Zhang, Yuhuai Wu

We introduce a weight update formula that is expressed only in terms of firing rates and their derivatives and that results in changes consistent with those associated with spike-timing dependent plasticity (STDP) rules and biological observations, even though the explicit timing of spikes is not needed.

1 code implementation • 12 Jun 2015 • Jorg Bornschein, Samira Shabanian, Asja Fischer, Yoshua Bengio

We present a lower-bound for the likelihood of this model and we show that optimizing this bound regularizes the model so that the Bhattacharyya distance between the bottom-up and top-down approximate distributions is minimized.

1 code implementation • 23 Dec 2014 • Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Yoshua Bengio

Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment.

1 code implementation • 6 Nov 2013 • Jan Melchior, Asja Fischer, Laurenz Wiskott

This work analyzes centered binary Restricted Boltzmann Machines (RBMs) and binary Deep Boltzmann Machines (DBMs), where centering is done by subtracting offset values from visible and hidden variables.
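A sketch of the centered energy function, in one common parameterization (an assumption here; the paper's exact parameterization may differ): offsets are subtracted from the visible and hidden variables before they enter the energy, and zero offsets recover the standard RBM:

```python
import numpy as np

def centered_rbm_energy(v, h, W, b, c, mu, lam):
    """Energy of a centered binary RBM with visible offsets mu and hidden
    offsets lam: E(v,h) = -(v-mu)^T W (h-lam) - b^T (v-mu) - c^T (h-lam)."""
    vc, hc = v - mu, h - lam
    return -vc @ W @ hc - b @ vc - c @ hc

rng = np.random.default_rng(0)
nv, nh = 4, 3
W = rng.normal(size=(nv, nh))
b, c = rng.normal(size=nv), rng.normal(size=nh)
v, h = rng.integers(0, 2, nv), rng.integers(0, 2, nh)

# With zero offsets, centering reduces to the standard RBM energy.
e_centered = centered_rbm_energy(v, h, W, b, c, np.zeros(nv), np.zeros(nh))
e_plain = -(v @ W @ h) - b @ v - c @ h
print(np.isclose(e_centered, e_plain))  # → True
```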
