no code implementations • 16 Jan 2024 • Tassilo Klein, Moin Nabi
Optimizing the training objective entails aligning text perplexities in a contrastive fashion.
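To make the idea concrete, here is a rough sketch of what contrastively aligning text perplexities could look like; the paper's actual objective and hyperparameters may differ, and `model`/`tok` stand in for any Hugging Face-style causal LM and its tokenizer (an assumed setup, not the paper's code).

```python
import torch

def log_perplexity(model, tok, texts):
    """Mean token-level negative log-likelihood per text (log-perplexity)."""
    enc = tok(texts, return_tensors="pt", padding=True)
    logits = model(**enc).logits[:, :-1].log_softmax(-1)  # predict next token
    tgt = enc.input_ids[:, 1:]
    mask = enc.attention_mask[:, 1:].float()
    nll = -logits.gather(-1, tgt.unsqueeze(-1)).squeeze(-1)
    return (nll * mask).sum(1) / mask.sum(1)

def contrastive_perplexity_loss(model, tok, positives, negatives, tau=1.0):
    """Drive positive texts toward low perplexity relative to negatives."""
    scores = -torch.cat([log_perplexity(model, tok, positives),
                         log_perplexity(model, tok, negatives)]) / tau
    target = torch.zeros_like(scores)
    target[: len(positives)] = 1.0 / len(positives)  # soft label on positives
    return -(target * scores.log_softmax(0)).sum()   # cross-entropy
```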
2 code implementations • 9 Nov 2022 • Tassilo Klein, Moin Nabi
This study opens up avenues for efficient self-supervised learning methods that are more robust than current contrastive methods for sentence embedding.
no code implementations • 11 Apr 2022 • Jannik Wolff, Tassilo Klein, Moin Nabi, Rahul G. Krishnan, Shinichi Nakajima
Machine learning systems are often deployed in domains that involve data from multiple modalities; in healthcare, for example, patients are described by both phenotypic and genotypic characteristics.
1 code implementation • 15 Mar 2022 • Tassilo Klein, Moin Nabi
In this paper, we propose Self-Contrastive Decorrelation (SCD), a self-supervised approach.
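As a hedged illustration of the two ingredients in the name, the sketch below combines a self-contrastive term over two dropout-induced views with a Barlow-Twins-style feature decorrelation penalty; SCD's precise formulation, including how attraction and repulsion between views are balanced, should be taken from the paper itself.

```python
import torch
import torch.nn.functional as F

def scd_style_loss(z1, z2, lam=0.1, eps=1e-6):
    """z1, z2: (batch, dim) embeddings of the same sentences under two
    different dropout masks (the two 'views')."""
    # Alignment term between the two views of each sentence.
    align = -(F.normalize(z1, dim=-1) * F.normalize(z2, dim=-1)).sum(-1).mean()
    # Decorrelation term: penalize off-diagonal cross-correlation
    # between embedding dimensions (Barlow-Twins-style).
    b, d = z1.shape
    z1c = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2c = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    c = (z1c.T @ z2c) / b                         # (dim, dim) correlations
    off_diag = c - torch.diag(torch.diagonal(c))
    return align + lam * (off_diag ** 2).sum() / d
```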
no code implementations • 29 Sep 2021 • Jannik Wolff, Rahul G Krishnan, Lukas Ruff, Jan Nikolas Morshuis, Tassilo Klein, Shinichi Nakajima, Moin Nabi
Humans find structure in natural phenomena by absorbing stimuli from multiple input sources such as vision, text, and speech.
1 code implementation • Findings (EMNLP) 2021 • Tassilo Klein, Moin Nabi
Self-supervised learning has recently attracted considerable attention in the NLP community for its ability to learn discriminative features using a contrastive objective.
1 code implementation • EMNLP 2021 • Tassilo Klein, Moin Nabi
Can we take existing language models and refine them for zero-shot commonsense reasoning?
no code implementations • 1 Jan 2021 • Tassilo Klein, Moin Nabi
Specifically, we propose focal entropy - a variant of entropy embedded in an adversarial representation learning setting to leverage privacy sanitization.
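The paper defines focal entropy precisely; purely as an illustration, the sketch below assumes a focal-style variant that concentrates the entropy objective on the adversary's top-k most probable private classes. The restriction to top-k is my assumption, not the paper's stated definition.

```python
import torch

def focal_entropy(logits, k=5, eps=1e-8):
    """Entropy restricted to the k most probable private classes, so the
    confusion objective focuses where the adversary is most confident."""
    probs = logits.softmax(-1)
    topk, _ = probs.topk(k, dim=-1)
    topk = topk / topk.sum(-1, keepdim=True)   # renormalize over top-k
    return -(topk * (topk + eps).log()).sum(-1).mean()
```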
no code implementations • 17 Nov 2020 • Frederik Pahde, Mihai Puscas, Tassilo Klein, Moin Nabi
Although they provide exceptional results for many computer vision tasks, state-of-the-art deep learning algorithms struggle catastrophically in low-data scenarios.
no code implementations • 22 Oct 2020 • Colin Samplawski, Jannik Wolff, Tassilo Klein, Moin Nabi
The task of zero-shot learning (ZSL) requires correctly predicting the label of samples from classes which were unseen at training time.
3 code implementations • ACL 2020 • Tassilo Klein, Moin Nabi
We achieve such commonsense reasoning by constructing pair-wise contrastive auxiliary predictions.
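One plausible reading of such pair-wise contrastive predictions, sketched below for Winograd-style twin sentences: since the twins differ only in a trigger word, the correct answer candidate must flip between them, which can be enforced as a mutual-exclusivity loss. The exact loss in the paper may differ from this sketch.

```python
import torch

def mutual_exclusivity_loss(p_a, p_b, eps=1e-8):
    """p_a: probability that sentence A resolves to candidate 1;
    p_b: the same probability for its twin sentence B. Because the twins
    differ only in a trigger word, exactly one of them should pick
    candidate 1 -- so reward the exclusive-or of the two decisions."""
    xor = p_a * (1 - p_b) + (1 - p_a) * p_b
    return -(xor + eps).log().mean()
```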
no code implementations • 11 Dec 2019 • Aiham Taleb, Christoph Lippert, Tassilo Klein, Moin Nabi
We introduce the multimodal puzzle task, which facilitates rich representation learning from multiple image modalities.
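A minimal sketch of how such a multimodal puzzle could be constructed, assuming spatially aligned images and a simple patch grid (both illustrative choices, not the paper's exact recipe):

```python
import numpy as np

def make_multimodal_puzzle(modalities, grid=2, rng=np.random):
    """modalities: list of spatially aligned (H, W) arrays
    (e.g. different MRI sequences of the same subject)."""
    h, w = modalities[0].shape
    ph, pw = h // grid, w // grid
    patches = []
    for i in range(grid):
        for j in range(grid):
            m = modalities[rng.randint(len(modalities))]  # random modality
            patches.append(m[i*ph:(i+1)*ph, j*pw:(j+1)*pw])
    perm = rng.permutation(len(patches))     # shuffle the patches
    shuffled = [patches[k] for k in perm]
    return shuffled, perm                    # input and permutation label
```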
no code implementations • 30 Nov 2019 • Abdullah Salama, Oleksiy Ostapenko, Tassilo Klein, Moin Nabi
We demonstrate the viability of our method by producing highly compressed models, namely VGG-16, ResNet-56, and ResNet-110 on CIFAR-10 without any loss of performance relative to the baseline, as well as ResNet-34 and ResNet-50 on ImageNet without a significant loss of accuracy.
no code implementations • 6 Nov 2019 • Tassilo Klein, Moin Nabi
In this work, we propose a variant of the self-attention Transformer architecture to generate meaningful and diverse questions.
no code implementations • 25 Sep 2019 • Tassilo Klein, Moin Nabi
The proposed approach deals with the setting where the private features are not explicit but are instead estimated over the course of learning.
2 code implementations • ACL 2019 • Tassilo Klein, Moin Nabi
The recently introduced BERT model exhibits strong performance on several language understanding benchmarks.
Ranked #7 on Natural Language Understanding on PDP60
no code implementations • ICCV 2019 • Rodrigo Berriel, Stéphane Lathuilière, Moin Nabi, Tassilo Klein, Thiago Oliveira-Santos, Nicu Sebe, Elisa Ricci
To implement this idea, we derive specialized deep models for each domain by adapting a pre-trained architecture; unlike other methods, however, we propose a novel strategy to automatically adjust the computational complexity of the network.
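A hedged sketch of one way such per-domain complexity adjustment could work, via learnable channel gates on frozen backbone features; the paper's actual strategy may differ, and the class below is illustrative.

```python
import torch
import torch.nn as nn

class GatedAdapter(nn.Module):
    """Per-domain channel gates applied to frozen backbone features."""
    def __init__(self, channels):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(channels))

    def forward(self, x):                     # x: (B, C, H, W)
        gates = torch.sigmoid(self.gate_logits)
        # Channels whose gate saturates near zero can be skipped at
        # inference time, lowering the compute budget for this domain.
        return x * gates.view(1, -1, 1, 1)
```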
no code implementations • ICLR 2019 • Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Moin Nabi
Continuously trainable models should be able to learn from a stream of data over an undefined period of time.
2 code implementations • CVPR 2019 • Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jähnichen, Moin Nabi
In order to tackle these challenges, we introduce Dynamic Generative Memory (DGM), a synaptic-plasticity-driven framework for continual learning.
Ranked #4 on Continual Learning on ImageNet-50 (5 tasks)
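As a rough illustration of the generative-replay idea behind such a memory, the sketch below rehearses previously seen classes from a generator while a solver trains on the new task; `generator.sample` is a hypothetical interface, and DGM's synaptic-plasticity masking of the generator is not reproduced here.

```python
import torch

def train_task(solver, generator, new_loader, old_classes, opt, loss_fn):
    """One pass over a new task with generative replay of old classes."""
    for x_new, y_new in new_loader:
        # Rehearse previously seen classes from the generative memory.
        x_old, y_old = generator.sample(labels=old_classes, n=len(x_new))
        x, y = torch.cat([x_new, x_old]), torch.cat([y_new, y_old])
        opt.zero_grad()
        loss_fn(solver(x), y).backward()
        opt.step()
```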
no code implementations • 4 Jan 2019 • Frederik Pahde, Mihai Puscas, Jannik Wolff, Tassilo Klein, Nicu Sebe, Moin Nabi
Since the advent of deep learning, neural networks have demonstrated remarkable results in many visual recognition tasks, constantly pushing the limits.
no code implementations • 22 Nov 2018 • Frederik Pahde, Oleksiy Ostapenko, Patrick Jähnichen, Tassilo Klein, Moin Nabi
State-of-the-art deep learning algorithms yield remarkable results in many visual recognition tasks.
no code implementations • NIPS Workshop CDNNRIA 2018 • Abdullah Salama, Oleksiy Ostapenko, Moin Nabi, Tassilo Klein
The high performance of deep learning models typically comes at the cost of considerable model size and computation time.
no code implementations • 12 Sep 2018 • Shailza Jolly, Sandro Pezzelle, Tassilo Klein, Andreas Dengel, Moin Nabi
We show that our metric is effective in providing a more fine-grained evaluation at both the quantitative and qualitative levels.
no code implementations • 13 Jun 2018 • Frederik Pahde, Patrick Jähnichen, Tassilo Klein, Moin Nabi
State-of-the-art deep learning algorithms generally require large amounts of data for model training.
no code implementations • 4 Apr 2018 • Benjamin Gutierrez Becker, Tassilo Klein, Christian Wachinger
Finally, we illustrate how the disease pattern differs from normal aging, supporting the use of uncertainty as a measure of neuropathology.
5 code implementations • ICLR 2019 • Robin C. Geyer, Tassilo Klein, Moin Nabi
In such an attack, a client's contribution during training and information about their dataset are revealed by analyzing the distributed model.
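A minimal sketch of the client-level defense, assuming the standard recipe of clipping each client's update and adding Gaussian noise to the aggregate (hyperparameter values are illustrative):

```python
import torch

def private_aggregate(client_updates, clip=1.0, noise_mult=1.0):
    """client_updates: list of flattened model deltas, one per client."""
    clipped = []
    for u in client_updates:
        scale = torch.clamp(clip / (u.norm() + 1e-12), max=1.0)
        clipped.append(u * scale)             # bound each contribution
    agg = torch.stack(clipped).mean(0)
    sigma = noise_mult * clip / len(client_updates)
    return agg + torch.randn_like(agg) * sigma  # mask single contributions
```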
no code implementations • 23 May 2017 • Benjamín Gutiérrez, Loïc Peter, Tassilo Klein, Christian Wachinger
With the availability of big medical image data, the selection of an adequate training set is becoming more important to address the heterogeneity of different datasets.
no code implementations • 27 Feb 2017 • Christian Wachinger, Martin Reuter, Tassilo Klein
We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images.
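For orientation only, a toy 3D CNN in the spirit of DeepNAT's patch-wise segmentation; the real network's depth, multi-task heads, and patch handling are not reproduced here.

```python
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """Toy per-voxel classifier over a 3D T1-weighted patch."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, n_classes, 1),      # per-voxel class scores
        )

    def forward(self, x):                     # x: (B, 1, D, H, W)
        return self.net(x)
```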