1 code implementation • 21 Oct 2022 • Sebastian G. Gruber, Florian Buettner
In this work we introduce a general bias-variance decomposition for proper scores, giving rise to the Bregman Information as the variance term.
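The abstract's central object, the Bregman Information, can be illustrated with a small sketch. This is not the paper's implementation, only a Monte Carlo illustration of the general definition: for a strictly convex generator phi, the Bregman Information of a random variable X is E[d_phi(X, E[X])], where d_phi is the Bregman divergence. With phi(x) = x^2 this reduces to the ordinary variance, which is what makes it a natural generalized variance term.

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    """d_phi(x, y) = phi(x) - phi(y) - grad_phi(y) * (x - y)."""
    return phi(x) - phi(y) - grad_phi(y) * (x - y)

def bregman_information(phi, grad_phi, samples):
    """Monte Carlo estimate of E[d_phi(X, E[X])]."""
    mu = np.mean(samples)
    return np.mean(bregman_divergence(phi, grad_phi, samples, mu))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)

# With phi(t) = t^2, d_phi(x, mu) = (x - mu)^2, so the
# Bregman Information coincides with the ordinary variance.
bi = bregman_information(lambda t: t**2, lambda t: 2 * t, x)
print(bi, np.var(x))
```

Other proper scores correspond to other generators (e.g. negative entropy for the log score), each yielding its own variance term under the same definition.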
1 code implementation • 13 Apr 2022 • Arber Qoku, Florian Buettner
Many real-world systems are described not by data from a single source but by multiple data views.

2 code implementations • 15 Mar 2022 • Sebastian G. Gruber, Florian Buettner
With model trustworthiness being crucial for sensitive real-world applications, practitioners are placing increasing focus on improving the uncertainty calibration of deep neural networks.
1 code implementation • 8 Jul 2021 • Arber Qoku, Florian Buettner
Additionally, in settings where partial knowledge of the latent structure of the data is readily available, a statistically sound integration of prior information into current methods is challenging.
1 code implementation • 8 Jun 2021 • Yinchong Yang, Florian Buettner
Many common approaches to the collaborative filtering task are based on learning representations of users and items, including simple matrix factorization, Gaussian process latent variable models, and neural-network-based embeddings.
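The simplest of the representation-learning approaches listed above, matrix factorization, can be sketched in a few lines. This is a generic toy demo, not the paper's method: users and items get k-dimensional embeddings U and V, observed ratings are approximated by U @ V.T, and all hyperparameters here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 6, 5, 2

# Toy ratings in 1..5; a zero under the mask marks an unobserved entry.
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
mask = (rng.random((n_users, n_items)) < 0.7).astype(float)  # ~70% observed
R *= mask

# Small random initializations for user and item embeddings.
U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))

lr, reg = 0.01, 0.01
for _ in range(2000):
    err = mask * (R - U @ V.T)       # residuals on observed entries only
    U += lr * (err @ V - reg * U)    # gradient step on user embeddings
    V += lr * (err.T @ U - reg * V)  # gradient step on item embeddings

# Root-mean-squared error over the observed entries.
rmse = np.sqrt(((mask * (R - U @ V.T)) ** 2).sum() / mask.sum())
```

Predictions for unobserved user-item pairs are then simply the corresponding entries of U @ V.T; the other approaches mentioned (GP-LVMs, neural embeddings) replace the bilinear form with richer mappings.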
1 code implementation • 24 Feb 2021 • Christian Tomani, Daniel Cremers, Florian Buettner
We address the problem of uncertainty calibration and introduce a novel calibration method, Parametrized Temperature Scaling (PTS).
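As background for the entry above, here is a sketch of classic (single-parameter) temperature scaling, which, per the abstract, PTS generalizes by making the temperature a parametrized function of the input. This is not the paper's method; the data is synthetic and the fit uses a simple grid search over T to minimize held-out negative log-likelihood.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of temperature-scaled probabilities."""
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
# Toy "validation" logits: the correct class gets a large margin.
logits = rng.standard_normal((500, 3))
logits[np.arange(500), labels] += 4.0

# Fit the single temperature by grid search (gradient methods work too).
grid = np.linspace(0.1, 5.0, 200)
T_hat = grid[np.argmin([nll(logits, labels, T) for T in grid])]
```

Because T only rescales the logits, temperature scaling is accuracy-preserving: the argmax prediction is unchanged, and only the confidence of each prediction moves.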
no code implementations • 23 Jan 2021 • Xudong Sun, Florian Buettner
We address the task of domain generalization, where the goal is to train a predictive model such that it is able to generalize to a new, previously unseen domain.
1 code implementation • 20 Dec 2020 • Christian Tomani, Florian Buettner
That is, it is crucial for predictive models to be uncertainty-aware and to yield well-calibrated (and thus trustworthy) predictions both for in-domain samples and under domain shift.
1 code implementation • CVPR 2021 • Christian Tomani, Sebastian Gruber, Muhammed Ebrar Erdem, Daniel Cremers, Florian Buettner
First, we show that existing post-hoc calibration methods yield highly over-confident predictions under domain shift.
1 code implementation • 10 Jul 2020 • Yushan Liu, Markus M. Geipel, Christoph Tietz, Florian Buettner
Diagnosing diseases such as leukemia or anemia requires reliable counts of blood cells.
no code implementations • 15 Jan 2020 • Florian Buettner, John Piorkowski, Ian McCulloh, Ulli Waltinger
To facilitate the widespread acceptance of AI systems guiding decision-making in real-world applications, it is key that solutions comprise trustworthy, integrated human-AI systems.
1 code implementation • ICLR 2019 • Pankaj Gupta, Yatin Chaudhary, Florian Buettner, Hinrich Schütze
We address two challenges of probabilistic topic modelling in order to better estimate the probability of a word in a given context, i.e., P(word|context): (1) No Language Structure in Context: probabilistic topic models ignore word order by summarizing a given context as a "bag-of-words", and consequently the semantics of the words in the context are lost.
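The word-order limitation described in challenge (1) is easy to demonstrate: two contexts with different meanings can produce identical bag-of-words summaries. The sentences below are made-up examples for illustration.

```python
from collections import Counter

# Two contexts with different meanings but the exact same word counts:
a = "the network drives the deal".split()
b = "the deal drives the network".split()

# A bag-of-words summary (a multiset of word counts) cannot tell them apart.
same_bag = Counter(a) == Counter(b)
print(same_bag)  # True, even though the sequences differ
```

Any model whose context representation is the bag alone must assign both contexts the same P(word|context), which is what motivates incorporating language structure.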
1 code implementation • 15 Sep 2018 • Pankaj Gupta, Yatin Chaudhary, Florian Buettner, Hinrich Schütze
Here, we extend a neural autoregressive topic model to exploit the full context information around words in a document in a language modeling fashion.
1 code implementation • 11 Aug 2018 • Pankaj Gupta, Florian Buettner, Hinrich Schütze
Context information around words helps in determining their actual meaning, for example "networks" used in contexts of artificial neural networks or biological neural networks.