Search Results for author: Bernhard Nessler

Found 10 papers, 4 papers with code

Functional trustworthiness of AI systems by statistically valid testing

no code implementations • 4 Oct 2023 • Bernhard Nessler, Thomas Doms, Sepp Hochreiter

The authors are concerned about the safety, health, and rights of the European citizens due to inadequate measures and procedures required by the current draft of the EU Artificial Intelligence (AI) Act for the conformity assessment of AI systems.


The balancing principle for parameter choice in distance-regularized domain adaptation

1 code implementation • NeurIPS 2021 • Werner Zellinger, Natalia Shepeleva, Marius-Constantin Dinu, Hamid Eghbal-zadeh, Hoan Nguyen, Bernhard Nessler, Sergei Pereverzyev, Bernhard A. Moser

Our approach starts with the observation that the widely-used method of minimizing the source error, penalized by a distance measure between source and target feature representations, shares characteristics with regularized ill-posed inverse problems.

Unsupervised Domain Adaptation
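The penalized objective described in the snippet above can be sketched with an RBF-kernel MMD as the distance measure between source and target feature representations (MMD is one common choice; the feature arrays, the `source_error` value, and the λ grid below are all illustrative):

```python
import numpy as np

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared MMD with an RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
src_feats = rng.normal(0.0, 1.0, size=(200, 2))  # source feature representations
tgt_feats = rng.normal(0.5, 1.0, size=(200, 2))  # shifted target representations

source_error = 0.3  # hypothetical source risk of the current model
for lam in (0.01, 0.1, 1.0):
    # penalized objective: source error + lam * distance between feature distributions
    obj = source_error + lam * mmd2(src_feats, tgt_feats)
    print(f"lambda={lam}: objective={obj:.4f}")
```

The paper's contribution is a principled rule for choosing the trade-off parameter, motivated by the stated analogy to regularized ill-posed inverse problems; the grid loop above only shows how the penalized objective is assembled, not that selection rule.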

Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications

no code implementations • 31 Mar 2021 • Philip Matthias Winter, Sebastian Eder, Johannes Weissenböck, Christoph Schwald, Thomas Doms, Tom Vogt, Sepp Hochreiter, Bernhard Nessler

Artificial Intelligence is one of the fastest growing technologies of the 21st century and accompanies us in our daily lives when interacting with technical applications.

BIG-bench Machine Learning • Ethics

Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields

1 code implementation • ICLR 2018 • Thomas Unterthiner, Bernhard Nessler, Calvin Seward, Günter Klambauer, Martin Heusel, Hubert Ramsauer, Sepp Hochreiter

We prove that Coulomb GANs possess only one Nash equilibrium which is optimal in the sense that the model distribution equals the target distribution.
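As a rough illustration of the potential-field view behind Coulomb GANs: generated samples feel repulsion from other generated samples and attraction toward real samples. The sketch below uses a Plummer-style kernel; the exponent parameter `d` and smoothing `eps` are illustrative, and this is not a reproduction of the paper's training procedure:

```python
import numpy as np

def plummer(a, b, eps=1.0, d=3):
    """Plummer-style kernel: smoothed inverse-distance interaction between point sets."""
    r2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return 1.0 / (r2 + eps ** 2) ** ((d - 2) / 2)

def potential(x, real, fake):
    """Field at points x: generated mass repels (+), real mass attracts (-)."""
    return plummer(x, fake).mean(axis=1) - plummer(x, real).mean(axis=1)
```

A point close to real samples and far from generated ones sits in a negative (attracting) region of the field; gradient descent on such a potential would pull generated samples toward the real distribution.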

Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints

no code implementations • NeurIPS 2012 • Stefan Habenschuss, Johannes Bill, Bernhard Nessler

Recent spiking network models of Bayesian inference and unsupervised learning frequently assume either inputs to arrive in a special format or employ complex computations in neuronal activation functions and synaptic plasticity rules.

Bayesian Inference • Variational Inference

STDP enables spiking neurons to detect hidden causes of their inputs

no code implementations • NeurIPS 2009 • Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass

We show here that STDP, in conjunction with a stochastic soft winner-take-all (WTA) circuit, induces spiking neurons to generate through their synaptic weights implicit internal models for subclasses (or "causes") of the high-dimensional spike patterns of hundreds of pre-synaptic neurons.

Dimensionality Reduction
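The interplay of a stochastic soft WTA circuit with a local plasticity rule can be sketched as follows. Note the weight update below is a simplified Hebbian/competitive-learning rule standing in for the paper's actual STDP rule, and all sizes and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 20, 3
W = rng.normal(0.0, 0.1, size=(n_out, n_in))  # synaptic weights

def step(x, lr=0.1):
    """Present one binary input spike pattern x and apply one plasticity step."""
    u = W @ x                        # membrane potentials
    p = np.exp(u - u.max())
    p /= p.sum()                     # soft WTA: firing probabilities
    k = rng.choice(n_out, p=p)       # stochastic winner emits the output spike
    W[k] += lr * (x - W[k])          # simplified Hebbian update (stand-in for STDP)
    return k

pattern = np.zeros(n_in)
pattern[:10] = 1.0                   # one recurring hidden "cause"
for _ in range(200):
    step(pattern)
```

After repeated presentations, the neuron that wins most often pulls its weight vector toward the recurring input pattern, i.e. its weights become an implicit internal model of that hidden cause.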

Hebbian Learning of Bayes Optimal Decisions

no code implementations • NeurIPS 2008 • Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass

Uncertainty is omnipresent when we perceive or interact with our environment, and the Bayesian framework provides computational methods for dealing with it.

Bayesian Inference • Decision Making • +2
