no code implementations • 19 Jul 2023 • Boris Flach, Dmitrij Schlesinger, Alexander Shekhovtsov
We propose a Nash equilibrium learning approach that relaxes these restrictions and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
1 code implementation • 7 Jul 2022 • Tobias Hänel, Nishant Kumar, Dmitrij Schlesinger, Mengze Li, Erdem Ünal, Abouzar Eslami, Stefan Gumhold
The performance of deep neural networks for image recognition tasks such as predicting a smiling face is known to degrade with under-represented classes of sensitive attributes.
no code implementations • ICLR 2022 • Alexander Shekhovtsov, Dmitrij Schlesinger, Boris Flach
The importance of Variational Autoencoders reaches far beyond standalone generative models -- the approach is also used for learning latent representations and can be generalized to semi-supervised learning.
no code implementations • 30 Oct 2017 • Dmitrij Schlesinger
This paper establishes a new connection between feed-forward networks (FFNs) and graphical models (GMs).
no code implementations • 21 Feb 2017 • Dmitrij Schlesinger, Florian Jug, Gene Myers, Carsten Rother, Dagmar Kainmüller
In an evaluation on a light-microscopy dataset containing more than 5000 membrane-labeled epithelial cells of a fly wing, we show that iaSTAPLE outperforms STAPLE both in segmentation accuracy and in the accuracy of the estimated crowd-worker performance levels. Compared to expert segmentations, iaSTAPLE correctly segments 99% of all cells.
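For context, the classic STAPLE algorithm that iaSTAPLE builds on fuses multiple raters' segmentations by EM: it alternates between estimating the hidden true segmentation and each rater's sensitivity/specificity. The sketch below is a minimal toy version of that classic scheme for binary labels (not the iaSTAPLE extension from this paper); all names and the simple global foreground prior are my own illustrative choices.

```python
import numpy as np

def staple(segmentations, n_iter=50, prior=None):
    """Toy classic-STAPLE: fuse binary segmentations from K raters via EM.

    segmentations: (K, N) array of 0/1 labels for N pixels from K raters.
    Returns the posterior P(true label = 1) per pixel plus per-rater
    sensitivity p = P(D=1|T=1) and specificity q = P(D=0|T=0).
    """
    D = np.asarray(segmentations, dtype=float)
    K, N = D.shape
    W = D.mean(axis=0)                # initialize hidden truth by mean vote
    if prior is None:
        prior = W.mean()              # crude global foreground prior
    p = np.full(K, 0.9)               # initial sensitivities
    q = np.full(K, 0.9)               # initial specificities
    for _ in range(n_iter):
        # E-step: posterior of the true label given all raters' decisions.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance parameters.
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W, p, q
```

With two reliable raters and one adversarial one, the posterior recovers the majority segmentation and the adversarial rater receives a low estimated sensitivity.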
no code implementations • 5 Dec 2016 • Dmitrij Schlesinger, Carsten Rother
We propose a new modeling approach that generalizes both generative and discriminative models.
no code implementations • ICCV 2015 • Alexander Kirillov, Bogdan Savchynskyy, Dmitrij Schlesinger, Dmitry Vetrov, Carsten Rother
We consider the task of finding M-best diverse solutions in a graphical model.
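A common way to make the M-best diverse task concrete is the greedy scheme from this line of work: repeatedly solve a MAP problem whose energy is augmented with a reward for Hamming distance to all previously found labelings. The toy sketch below does this by brute-force enumeration on a tiny chain model; the function name, the cost convention (lower energy is better), and the fixed diversity weight `lam` are my own illustrative assumptions, not the paper's formulation.

```python
import itertools
import numpy as np

def diverse_m_best(unary, pairwise, M, lam=1.0):
    """Greedy diverse-M-best by exhaustive search on a tiny chain model.

    unary:    (n, L) per-node label costs.
    pairwise: (L, L) costs between neighboring nodes on the chain.
    Each new labeling minimizes its energy minus lam times its total
    Hamming distance to all previously found labelings.
    """
    n, L = unary.shape
    found = []
    for _ in range(M):
        best, best_val = None, np.inf
        for x in itertools.product(range(L), repeat=n):
            e = sum(unary[i, x[i]] for i in range(n))
            e += sum(pairwise[x[i], x[i + 1]] for i in range(n - 1))
            # reward (subtract) Hamming distance to each previous solution
            for y in found:
                e -= lam * sum(xi != yi for xi, yi in zip(x, y))
            if e < best_val:
                best, best_val = x, e
        found.append(best)
    return found
```

With unaries that mildly prefer label 0 everywhere and no pairwise cost, the first solution is the all-zeros MAP labeling, and a sufficiently large `lam` pushes the second solution to the maximally different all-ones labeling.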
no code implementations • 16 Nov 2015 • Alexander Kirillov, Dmitrij Schlesinger, Shuai Zheng, Bogdan Savchynskyy, Philip H. S. Torr, Carsten Rother
We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters.
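The core idea of joint learning here is that both the unary ("CNN") parameters and the pairwise CRF parameters receive likelihood gradients of the familiar form "data statistics minus model expectation". The toy below illustrates one such joint step on a tiny binary chain with a linear unary model, computing the model expectation exactly by enumeration rather than by the stochastic sampling the paper uses; the function name, the linear unary score, and the single pairwise parameter `beta` are my own simplifications.

```python
import itertools
import numpy as np

def joint_crf_step(x, y, w, beta, lr=0.1):
    """One joint likelihood-gradient step for a tiny chain CRF.

    x: (n,) input features; y: (n,) binary ground-truth labels.
    Unary score for label 1 at node i is w * x[i]; pairwise score beta
    for equal neighboring labels. Exact gradients via enumeration
    (feasible only for toy n) stand in for sampled estimates.
    """
    n = len(x)

    def score(lbl):
        s = sum(w * x[i] * lbl[i] for i in range(n))
        s += sum(beta * (lbl[i] == lbl[i + 1]) for i in range(n - 1))
        return s

    def stats(lbl):
        # sufficient statistics: d score / d w and d score / d beta
        return (sum(x[i] * lbl[i] for i in range(n)),
                sum(float(lbl[i] == lbl[i + 1]) for i in range(n - 1)))

    labelings = list(itertools.product([0, 1], repeat=n))
    scores = np.array([score(l) for l in labelings])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    data_w, data_b = stats(tuple(y))
    model_w = sum(p * stats(l)[0] for p, l in zip(probs, labelings))
    model_b = sum(p * stats(l)[1] for p, l in zip(probs, labelings))
    # gradient ascent on the log-likelihood of (x, y)
    w += lr * (data_w - model_w)
    beta += lr * (data_b - model_b)
    return w, beta
```

Repeating this step on a single training pair drives the unary weight toward values that make the observed labeling more probable under the joint model.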