1 code implementation • ACL 2022 • Tassilo Klein, Moin Nabi
In this paper, we propose Self-Contrastive Decorrelation (SCD), a self-supervised approach.
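A minimal sketch of the kind of decorrelation objective the abstract refers to: two embedding views are standardized and their cross-correlation matrix is pushed toward the identity. The function name, weighting constant, and loss form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def decorrelation_loss(z1, z2, lam=5e-3):
    """Sketch of a decorrelation objective on two embedding views.

    Pushes the cross-correlation matrix of the standardized views
    toward the identity: diagonal terms -> 1 (invariance),
    off-diagonal terms -> 0 (feature decorrelation).
    """
    n, _ = z1.shape
    # Standardize each feature dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / n                       # d x d cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 8))
# Two identical views are already aligned, so the loss is small.
print(decorrelation_loss(z, z))
```

Minimizing such a loss encourages embeddings that are both invariant across views and non-redundant across dimensions.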
1 code implementation • 21 Nov 2024 • Enrico Fini, Mustafa Shukor, Xiujun Li, Philipp Dufter, Michal Klein, David Haldimann, Sai Aitharaju, Victor Guilherme Turrisi da Costa, Louis Béthune, Zhe Gan, Alexander T Toshev, Marcin Eichner, Moin Nabi, Yinfei Yang, Joshua M. Susskind, Alaaeldin El-Nouby
We introduce a novel method for pre-training of large-scale vision encoders.
Ranked #1 on Image Classification on iNaturalist
no code implementations • 4 Nov 2024 • Atoosa Chegini, Hamid Kazemi, Iman Mirzadeh, Dong Yin, Maxwell Horton, Moin Nabi, Mehrdad Farajtabar, Keivan Alizadeh
As a result, policy optimization is often trapped in a narrow region of the parameter space, leading to suboptimal alignment and performance.
no code implementations • 25 Oct 2024 • Saleh Ashkboos, Iman Mirzadeh, Keivan Alizadeh, Mohammad Hossein Sekhavat, Moin Nabi, Mehrdad Farajtabar, Fartash Faghri
While large language models (LLMs) dominate the AI landscape, small-scale large language models (SLMs) are gaining attention due to cost and efficiency demands from consumers.
1 code implementation • 10 Oct 2024 • Maxwell Horton, Qingqing Cao, Chenfan Sun, Yanzi Jin, Sachin Mehta, Mohammad Rastegari, Moin Nabi
In our method, a small auxiliary model is used to process the prompt and produce an approximation of the KV cache used by a base model.
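The mechanism described above can be sketched as follows: a small auxiliary model encodes the prompt, and learned projections map its hidden states into the base model's key/value spaces, giving the base model a ready-made cache to attend over. All dimensions, names, and the use of random weights here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_aux, d_base, seq = 16, 64, 10           # hypothetical sizes

# Hypothetical auxiliary-model hidden states for the prompt tokens.
h_aux = rng.normal(size=(seq, d_aux))

# Learned projections (random placeholders here) mapping the small
# model's states into the base model's key/value spaces.
W_k = rng.normal(size=(d_aux, d_base)) / np.sqrt(d_aux)
W_v = rng.normal(size=(d_aux, d_base)) / np.sqrt(d_aux)

# Approximate KV cache the base model would otherwise compute itself.
k_cache, v_cache = h_aux @ W_k, h_aux @ W_v

# The base model then attends over the approximated cache when
# generating its next token.
q = rng.normal(size=(1, d_base))
att = np.exp(q @ k_cache.T / np.sqrt(d_base))
att /= att.sum()
context = att @ v_cache
print(context.shape)
```

The payoff is that the expensive prompt-processing pass runs through the small model rather than the base model.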
no code implementations • 10 Oct 2024 • Aryo Lotfi, Enrico Fini, Samy Bengio, Moin Nabi, Emmanuel Abbe
Modern vision models have achieved remarkable success in benchmarks where local features provide critical information about the target.
no code implementations • 1 Oct 2024 • Keivan Alizadeh, Iman Mirzadeh, Hooman Shahrokhi, Dmitry Belenko, Frank Sun, Minsik Cho, Mohammad Hossein Sekhavat, Moin Nabi, Mehrdad Farajtabar
Large Language Models (LLMs) typically generate outputs token by token using a fixed compute budget, leading to inefficient resource utilization.
1 code implementation • 19 Sep 2024 • Mohammad Samragh, Iman Mirzadeh, Keivan Alizadeh Vahid, Fartash Faghri, Minsik Cho, Moin Nabi, Devang Naik, Mehrdad Farajtabar
In this paper, we introduce HyperCloning, a method that can expand the parameters of a pre-trained language model to those of a larger model with increased hidden dimensions.
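One way to see how such an expansion can be function-preserving: if hidden activations are duplicated, a block-tiled and rescaled weight matrix makes the doubled layer reproduce the small layer exactly. This is a sketch of the general idea, assuming simple 2x duplication; it is not claimed to be the paper's exact recipe.

```python
import numpy as np

def clone_linear(W):
    """Function-preserving 2x expansion of a linear layer's weights.

    With duplicated activations x_large = [x; x], the tiled matrix
    W_large = [[W, W], [W, W]] / 2 yields y_large = [y; y], so the
    larger model starts out computing exactly the small model's
    function and can then be trained further.
    """
    return np.block([[W, W], [W, W]]) / 2.0

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x = rng.normal(size=4)

y_small = W @ x
y_large = clone_linear(W) @ np.concatenate([x, x])

# The expanded layer reproduces the small layer's output, duplicated.
print(np.allclose(y_large, np.concatenate([y_small, y_small])))
```

Starting the large model at the small model's function is what lets pre-training resume without a loss spike.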
no code implementations • 16 Jan 2024 • Tassilo Klein, Moin Nabi
Optimizing the training objective entails aligning text perplexities in a contrastive fashion.
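A hedged sketch of what "aligning text perplexities in a contrastive fashion" can look like: treat low perplexity as a high score and apply a softmax-style contrast so positive texts end up with lower perplexity than negatives. The function, temperature, and exact form are illustrative assumptions, not the paper's loss.

```python
import math

def contrastive_perplexity_loss(ppl_pos, ppl_neg, tau=1.0):
    """Sketch of a contrastive objective over perplexities.

    Lower perplexity maps to a higher score; the loss maximizes the
    softmax mass assigned to the positive texts relative to the
    negatives.
    """
    all_scores = [-p / tau for p in ppl_pos + ppl_neg]
    log_z = math.log(sum(math.exp(s) for s in all_scores))
    log_p_pos = math.log(sum(math.exp(-p / tau) for p in ppl_pos)) - log_z
    return -log_p_pos

# Positives with lower perplexity than negatives incur a small loss;
# the reversed configuration is penalized heavily.
print(contrastive_perplexity_loss([5.0, 6.0], [20.0, 25.0]))
print(contrastive_perplexity_loss([20.0, 25.0], [5.0, 6.0]))
```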
1 code implementation • CVPR 2023 • Enrico Fini, Pietro Astolfi, Karteek Alahari, Xavier Alameda-Pineda, Julien Mairal, Moin Nabi, Elisa Ricci
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
2 code implementations • ICCV 2023 • Zhiqi Kang, Enrico Fini, Moin Nabi, Elisa Ricci, Karteek Alahari
Despite significant advances, the performance of state-of-the-art continual learning approaches hinges on the unrealistic scenario of fully labeled data.
2 code implementations • 9 Nov 2022 • Tassilo Klein, Moin Nabi
This study opens up avenues for efficient self-supervised learning methods that are more robust than current contrastive methods for sentence embedding.
no code implementations • 11 Apr 2022 • Jannik Wolff, Tassilo Klein, Moin Nabi, Rahul G. Krishnan, Shinichi Nakajima
Machine learning systems are often deployed in domains that entail data from multiple modalities; in healthcare, for example, patients are described by both phenotypic and genotypic characteristics.
1 code implementation • 26 Mar 2022 • Guanglei Yang, Enrico Fini, Dan Xu, Paolo Rota, Mingli Ding, Moin Nabi, Xavier Alameda-Pineda, Elisa Ricci
This problem has been widely investigated in the research community and several Incremental Learning (IL) approaches have been proposed in the past years.
1 code implementation • 15 Mar 2022 • Tassilo Klein, Moin Nabi
In this paper, we propose Self-Contrastive Decorrelation (SCD), a self-supervised approach.
no code implementations • 29 Sep 2021 • Jannik Wolff, Rahul G Krishnan, Lukas Ruff, Jan Nikolas Morshuis, Tassilo Klein, Shinichi Nakajima, Moin Nabi
Humans find structure in natural phenomena by absorbing stimuli from multiple input sources such as vision, text, and speech.
1 code implementation • Findings (EMNLP) 2021 • Tassilo Klein, Moin Nabi
Self-supervised learning has recently attracted considerable attention in the NLP community for its ability to learn discriminative features using a contrastive objective.
1 code implementation • EMNLP 2021 • Tassilo Klein, Moin Nabi
Can we get existing language models and refine them for zero-shot commonsense reasoning?
1 code implementation • ICCV 2021 • Enrico Fini, Enver Sangineto, Stéphane Lathuilière, Zhun Zhong, Moin Nabi, Elisa Ricci
In this paper, we study the problem of Novel Class Discovery (NCD).
Ranked #3 on Novel Object Detection on LVIS v1.0 val
4 code implementations • 3 Aug 2021 • Victor G. Turrisi da Costa, Enrico Fini, Moin Nabi, Nicu Sebe, Elisa Ricci
This paper presents solo-learn, a library of self-supervised methods for visual representation learning.
1 code implementation • NAACL 2021 • Shailza Jolly, Sandro Pezzelle, Moin Nabi
We propose EASE, a simple diagnostic tool for Visual Question Answering (VQA) which quantifies the difficulty of an (image, question) sample.
no code implementations • 1 Jan 2021 • Tassilo Klein, Moin Nabi
Specifically, we propose focal entropy - a variant of entropy embedded in an adversarial representation learning setting to leverage privacy sanitization.
no code implementations • 17 Nov 2020 • Frederik Pahde, Mihai Puscas, Tassilo Klein, Moin Nabi
Although they provide exceptional results for many computer vision tasks, state-of-the-art deep learning algorithms struggle catastrophically in low-data scenarios.
no code implementations • 22 Oct 2020 • Colin Samplawski, Jannik Wolff, Tassilo Klein, Moin Nabi
The task of zero-shot learning (ZSL) requires correctly predicting the label of samples from classes which were unseen at training time.
1 code implementation • ECCV 2020 • Enrico Fini, Stéphane Lathuilière, Enver Sangineto, Moin Nabi, Elisa Ricci
Continual Learning (CL) aims to develop agents emulating the human ability to sequentially learn new tasks while being able to retain knowledge obtained from past experiences.
3 code implementations • ACL 2020 • Tassilo Klein, Moin Nabi
We achieve such commonsense reasoning by constructing pair-wise contrastive auxiliary predictions.
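A minimal sketch of a pair-wise contrastive auxiliary prediction for a Winograd-style twin-sentence pair: because flipping the trigger word flips the correct referent, the model's decisions on the two sentences should be mutually exclusive. The function and loss form are illustrative assumptions, not the paper's exact objective.

```python
import math

def mutual_exclusive_loss(p_sent1, p_sent2):
    """Sketch of a mutual-exclusivity loss for a twin-sentence pair.

    p_sent1 / p_sent2: model probability that candidate A is the
    referent in each twin sentence. Exactly one of the two should
    pick candidate A, so we reward exclusive predictions.
    """
    p_exclusive = p_sent1 * (1 - p_sent2) + (1 - p_sent1) * p_sent2
    return -math.log(p_exclusive + 1e-12)

# Mutually exclusive predictions give a low loss...
print(mutual_exclusive_loss(0.95, 0.05))
# ...while agreeing on the same candidate for both twins is penalized.
print(mutual_exclusive_loss(0.95, 0.95))
```

Crucially, this supervision signal needs no labels: the twin-sentence structure itself provides the constraint.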
no code implementations • 11 Dec 2019 • Aiham Taleb, Christoph Lippert, Tassilo Klein, Moin Nabi
We introduce the multimodal puzzle task, which facilitates rich representation learning from multiple image modalities.
no code implementations • 30 Nov 2019 • Abdullah Salama, Oleksiy Ostapenko, Tassilo Klein, Moin Nabi
We demonstrate the viability of our method by producing highly compressed models (VGG-16, ResNet-56, and ResNet-110 on CIFAR-10) without any loss in performance relative to the baseline, as well as ResNet-34 and ResNet-50 on ImageNet without a significant loss of accuracy.
no code implementations • 6 Nov 2019 • Tassilo Klein, Moin Nabi
In this work, we propose a variant of the self-attention Transformer architecture to generate meaningful and diverse questions.
no code implementations • 25 Sep 2019 • Tassilo Klein, Moin Nabi
The proposed approach addresses the setting where the private features are not explicit and must be estimated over the course of learning.
2 code implementations • ACL 2019 • Tassilo Klein, Moin Nabi
The recently introduced BERT model exhibits strong performance on several language understanding benchmarks.
Ranked #7 on Natural Language Understanding on PDP60
no code implementations • ICCV 2019 • Rodrigo Berriel, Stéphane Lathuilière, Moin Nabi, Tassilo Klein, Thiago Oliveira-Santos, Nicu Sebe, Elisa Ricci
To implement this idea we derive specialized deep models for each domain by adapting a pre-trained architecture but, differently from other methods, we propose a novel strategy to automatically adjust the computational complexity of the network.
no code implementations • ICLR 2019 • Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Moin Nabi
Continuously trainable models should be able to learn from a stream of data over an undefined period of time.
2 code implementations • CVPR 2019 • Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jähnichen, Moin Nabi
In order to tackle these challenges, we introduce Dynamic Generative Memory (DGM) - a synaptic plasticity driven framework for continual learning.
Ranked #4 on Continual Learning on ImageNet-50 (5 tasks)
no code implementations • 4 Jan 2019 • Frederik Pahde, Mihai Puscas, Jannik Wolff, Tassilo Klein, Nicu Sebe, Moin Nabi
Since the advent of deep learning, neural networks have demonstrated remarkable results in many visual recognition tasks, constantly pushing the limits.
no code implementations • 22 Nov 2018 • Frederik Pahde, Oleksiy Ostapenko, Patrick Jähnichen, Tassilo Klein, Moin Nabi
State-of-the-art deep learning algorithms yield remarkable results in many visual recognition tasks.
no code implementations • NIPS Workshop CDNNRIA 2018 • Abdullah Salama, Oleksiy Ostapenko, Moin Nabi, Tassilo Klein
High performance of deep learning models typically comes at cost of considerable model size and computation time.
no code implementations • 12 Sep 2018 • Shailza Jolly, Sandro Pezzelle, Tassilo Klein, Andreas Dengel, Moin Nabi
We show that our metric is effective in providing a more fine-grained evaluation both on the quantitative and qualitative level.
no code implementations • 13 Jun 2018 • Frederik Pahde, Patrick Jähnichen, Tassilo Klein, Moin Nabi
State-of-the-art deep learning algorithms generally require large amounts of data for model training.
6 code implementations • ICLR 2019 • Robin C. Geyer, Tassilo Klein, Moin Nabi
In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model.
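The standard defense in client-level differentially private federated learning, in the spirit of this paper, is to clip each client's update and add calibrated Gaussian noise at the server so no single contribution can be distinguished. The constants and function below are illustrative, not recommended settings from the paper.

```python
import numpy as np

def private_aggregate(client_updates, clip_norm=1.0, noise_mult=0.5,
                      seed=0):
    """Sketch of client-level DP aggregation (clip-and-noise)."""
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Bound each client's influence on the aggregate.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound masks any
    # single client's contribution.
    sigma = noise_mult * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

updates = [np.ones(4) * 10.0, -np.ones(4) * 0.5, np.ones(4) * 0.2]
print(private_aggregate(updates))
```

Clipping bounds the sensitivity of the aggregate; the noise scale then determines the privacy guarantee.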
no code implementations • 31 Aug 2017 • Mahdyar Ravanbakhsh, Moin Nabi, Enver Sangineto, Lucio Marcenaro, Carlo Regazzoni, Nicu Sebe
In this paper we address the abnormality detection problem in crowded scenes.
Ranked #4 on Abnormal Event Detection In Video on UCSD Ped2
no code implementations • ACL 2017 • Azad Abad, Moin Nabi, Alessandro Moschitti
In this paper we introduce a self-training strategy for crowdsourcing.
no code implementations • 23 Jun 2017 • Mahdyar Ravanbakhsh, Enver Sangineto, Moin Nabi, Nicu Sebe
Abnormal crowd behaviour detection attracts a large interest due to its importance in video surveillance scenarios.
no code implementations • ACL 2017 • Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurelie Herbelot, Moin Nabi, Enver Sangineto, Raffaella Bernardi
In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities.
no code implementations • 21 Nov 2016 • Mahdyar Ravanbakhsh, Hossein Mousavi, Moin Nabi, Lucio Marcenaro, Carlo Regazzoni
We use binary encoding of CNN features to overcome the difficulty of the clustering on the high-dimensional CNN feature space.
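One common way to binarize high-dimensional features, sketched here for illustration, is random-hyperplane hashing: each bit records which side of a random hyperplane the feature vector falls on, and clustering then operates on compact Hamming codes. The bit width and function names are assumptions, not the paper's exact encoding.

```python
import numpy as np

def binary_encode(features, n_bits=32, seed=0):
    """Sketch of binarizing CNN features via random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(features.shape[1], n_bits))
    # One bit per hyperplane: which side does the feature fall on?
    return (features @ planes > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.sum(a != b))

feats = np.random.default_rng(1).normal(size=(4, 512))
codes = binary_encode(feats)
print(codes.shape, hamming(codes[0], codes[0]))
```

Nearby features tend to share hyperplane sides, so Hamming distance on the codes approximates similarity in the original space at a fraction of the cost.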
no code implementations • 2 Oct 2016 • Mahdyar Ravanbakhsh, Moin Nabi, Hossein Mousavi, Enver Sangineto, Nicu Sebe
In this paper, we show that keeping track of the changes in the CNN feature across time can facilitate capturing the local abnormality.
no code implementations • 29 Sep 2016 • Mahdyar Ravanbakhsh, Hossein Mousavi, Moin Nabi, Mohammad Rastegari, Carlo Regazzoni
To the best of our knowledge, our method is the first attempt at general semantic image segmentation using a CNN.
no code implementations • 26 Jul 2016 • Hamidreza Rabiee, Javad Haddadnia, Hossein Mousavi, Moin Nabi, Vittorio Murino, Nicu Sebe
We aim to publish the dataset with the article, to serve as a benchmark for the community.
1 code implementation • 24 May 2016 • Enver Sangineto, Moin Nabi, Dubravko Culibrk, Nicu Sebe
The main idea is to iteratively select a subset of images and boxes that are the most reliable, and use them for training.
Ranked #37 on Weakly Supervised Object Detection on PASCAL VOC 2007
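The iterative selection step described above can be sketched as a self-paced filter: keep only the fraction of samples the current model scores as most reliable, retrain on those, then grow the fraction on the next round. The function and threshold schedule are illustrative assumptions.

```python
def self_paced_select(scores, frac):
    """Sketch of one self-paced selection round.

    scores: per-sample reliability scores from the current model.
    frac:   fraction of samples to keep this round (grown over time).
    Returns the indices of the most reliable samples, which the
    caller would then use for the next retraining round.
    """
    k = max(1, int(len(scores) * frac))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i],
                    reverse=True)
    return sorted(ranked[:k])

scores = [0.9, 0.2, 0.75, 0.4, 0.95]
print(self_paced_select(scores, 0.4))  # [0, 4]
```

Starting from easy, high-confidence samples keeps early noisy pseudo-labels from derailing training.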
no code implementations • 23 Dec 2015 • Moin Nabi
We investigate discovering and learning a set of mid-level patches for representing the images of an object category.
no code implementations • CVPR 2015 • Dimitris Stamos, Samuele Martelli, Moin Nabi, Andrew McDonald, Vittorio Murino, Massimiliano Pontil
However, previous work has highlighted the possible danger of simply training a model from the combined datasets, due to the presence of bias.