1 code implementation • 4 Aug 2024 • Mateusz Ochal, Massimiliano Patacchiola, Malik Boudiaf, Sen Wang
In Few-Shot Learning (FSL), models are trained to recognise unseen objects from a query set, given a few labelled examples from a support set.
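To make the support/query protocol concrete, here is a minimal sketch of a few-shot episode, with a nearest-prototype classifier standing in for any FSL model; all names, sizes, and the classifier choice are illustrative, not this paper's method.

```python
import torch

# A 5-way 1-shot episode (illustrative sizes): a small labelled support set
# and an unlabelled query set to classify.
n_way, n_shot, n_query, dim = 5, 1, 15, 512
support_x = torch.randn(n_way * n_shot, dim)   # few labelled examples per class
support_y = torch.arange(n_way).repeat_interleave(n_shot)
query_x = torch.randn(n_way * n_query, dim)    # instances to recognise

# Nearest-prototype classification (a standard stand-in, not the paper's model):
prototypes = torch.stack([support_x[support_y == c].mean(0) for c in range(n_way)])
pred = torch.cdist(query_x, prototypes).argmin(dim=1)
```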
no code implementations • 28 Jul 2024 • Pierre Colombo, Telmo Pires, Malik Boudiaf, Rui Melo, Dominic Culver, Sofia Morgado, Etienne Malaboeuf, Gabriel Hautreux, Johanne Charpentier, Michael Desa
In this paper, we introduce SaulLM-54B and SaulLM-141B, two large language models (LLMs) tailored for the legal sector.
1 code implementation • CVPR 2024 • Yunshi Huang, Fereshteh Shakeri, Jose Dolz, Malik Boudiaf, Houda Bahig, Ismail Ben Ayed
In this work, we propose and examine, from a convex-optimization perspective, a generalization of the standard linear-probing (LP) baseline, in which the linear classifier weights are learnable functions of the text embeddings, with class-wise multipliers blending image and text knowledge.
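A hedged sketch of the classifier structure this describes: per-class weights built from frozen text embeddings, a learnable visual component, and class-wise blending multipliers. Variable names and shapes are assumptions, not the paper's exact formulation.

```python
import torch

num_classes, dim = 10, 512
text_emb = torch.randn(num_classes, dim)               # frozen text embeddings t_c
v = torch.zeros(num_classes, dim, requires_grad=True)  # learnable visual weights v_c
alpha = torch.ones(num_classes, requires_grad=True)    # class-wise multipliers

def logits(image_features):
    # w_c = v_c + alpha_c * t_c : image and text knowledge blended per class
    w = v + alpha.unsqueeze(1) * text_emb
    return image_features @ w.t()
```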
no code implementations • 6 Mar 2024 • Pierre Colombo, Telmo Pessoa Pires, Malik Boudiaf, Dominic Culver, Rui Melo, Caio Corro, Andre F. T. Martins, Fabrizio Esposito, Vera Lúcia Raposo, Sofia Morgado, Michael Desa
In this paper, we introduce SaulLM-7B, a large language model (LLM) tailored for the legal domain.
no code implementations • 21 Oct 2023 • Pierre Colombo, Victor Pellegrain, Malik Boudiaf, Victor Storchan, Myriam Tami, Ismail Ben Ayed, Celine Hudelot, Pablo Piantanida
First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints.
1 code implementation • 3 Oct 2023 • Saypraseuth Mounsaveng, Florent Chiaroni, Malik Boudiaf, Marco Pedersoli, Ismail Ben Ayed
Fully Test-Time Adaptation (TTA), which aims to adapt models to data drift, has recently attracted wide interest.
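The snippet above does not detail the method itself; for context, here is a minimal sketch of the entropy-minimization baseline that fully test-time adaptation work commonly builds on (in the spirit of TENT), not this paper's specific algorithm.

```python
import torch

def entropy_minimization_step(model, optimizer, test_batch):
    # Adapt on an incoming unlabelled test batch by minimizing the entropy
    # of the model's own predictions (typically updating only normalization
    # parameters, left to the optimizer's parameter selection here).
    probs = model(test_batch).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```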
no code implementations • 13 Feb 2023 • Malik Boudiaf, Tom Denton, Bart van Merriënboer, Vincent Dumoulin, Eleni Triantafillou
Source-free domain adaptation (SFDA) is compelling because it allows adapting an off-the-shelf model to a new domain using only unlabelled data.
1 code implementation • CVPR 2023 • Malik Boudiaf, Etienne Bennequin, Myriam Tami, Antoine Toubhans, Pablo Piantanida, Céline Hudelot, Ismail Ben Ayed
We tackle the Few-Shot Open-Set Recognition (FSOSR) problem, i.e., classifying instances among a set of classes for which we only have a few labeled samples, while simultaneously detecting instances that do not belong to any known class.
2 code implementations • CVPR 2023 • Sina Hajimiri, Malik Boudiaf, Ismail Ben Ayed, Jose Dolz
In addition, the terms derived from our MI-based formulation are coupled with a knowledge distillation term to retain the knowledge on base classes.
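A hedged sketch of how such a composite objective can be wired together: a mutual-information surrogate on the predictions plus a distillation term toward the frozen base-class classifier. Weights, tensor names, and the exact surrogate are illustrative, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def adaptation_loss(probs_query, base_probs_old, base_probs_new, lam=1.0):
    # MI surrogate: confident predictions (low conditional entropy)
    # with balanced class usage (high marginal entropy).
    cond_ent = -(probs_query * probs_query.clamp_min(1e-12).log()).sum(1).mean()
    marginal = probs_query.mean(0)
    marg_ent = -(marginal * marginal.clamp_min(1e-12).log()).sum()
    # Knowledge distillation: keep base-class predictions close to the
    # frozen model's original outputs.
    kd = F.kl_div(base_probs_new.clamp_min(1e-12).log(), base_probs_old,
                  reduction="batchmean")
    return cond_ent - marg_ent + lam * kd
```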
1 code implementation • 26 Oct 2022 • Ségolène Martin, Malik Boudiaf, Emilie Chouzenoux, Jean-Christophe Pesquet, Ismail Ben Ayed
We relax these assumptions and extend current benchmarks, so that the query-set classes of a given task are unknown, but just belong to a much larger set of possible classes.
1 code implementation • 30 Jul 2022 • Florent Chiaroni, Malik Boudiaf, Amar Mitiche, Ismail Ben Ayed
We explore clustering the softmax predictions of deep neural networks and introduce a novel probabilistic clustering method, referred to as k-sBetas.
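As a point of reference, the simplest instance of clustering softmax predictions is plain K-means on the probability simplex, sketched below; k-sBetas replaces this Euclidean view with a Beta-distribution-based model better suited to simplex data, which the sketch does not implement.

```python
import numpy as np

def kmeans_on_simplex(probs, k, iters=50, seed=0):
    # probs: (N, C) array of softmax outputs, each row on the simplex.
    rng = np.random.default_rng(seed)
    centers = probs[rng.choice(len(probs), k, replace=False)]
    for _ in range(iters):
        dists = ((probs[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Recompute centers, keeping the old one if a cluster empties out.
        centers = np.stack([probs[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers
```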
1 code implementation • 18 Jun 2022 • Malik Boudiaf, Etienne Bennequin, Myriam Tami, Celine Hudelot, Antoine Toubhans, Pablo Piantanida, Ismail Ben Ayed
Through extensive experiments spanning 5 datasets, we show that OSTIM surpasses both inductive and existing transductive methods in detecting open-set instances while competing with the strongest transductive methods in classifying closed-set instances.
no code implementations • 31 May 2022 • Fereshteh Shakeri, Malik Boudiaf, Sina Mohammadi, Ivaxi Sheth, Mohammad Havaei, Ismail Ben Ayed, Samira Ebrahimi Kahou
We build few-shot tasks and base-training data with various tissue types, different levels of domain shifts stemming from various cancer sites, and different class-granularity levels, thereby reflecting realistic scenarios.
1 code implementation • NeurIPS 2021 • Olivier Veilleux, Malik Boudiaf, Pablo Piantanida, Ismail Ben Ayed
Transductive inference is widely used in few-shot learning, as it leverages the statistics of the unlabeled query set of a few-shot task, typically yielding substantially better performance than its inductive counterpart.
1 code implementation • 14 Feb 2022 • Georg Pichler, Pierre Colombo, Malik Boudiaf, Günther Koliander, Pablo Piantanida
Mutual Information (MI) has been widely used as a loss regularizer for training neural networks.
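A hedged sketch of the usage pattern: mutual information enters the training objective as a regularizer, with the estimate built from (differential) entropies; the weight $\lambda$ and the estimator $\hat{I}$ are placeholders, not this paper's specific estimator.

```latex
% Illustrative objective; \hat{h} denotes an entropy estimate (assumption).
\[
  \mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{task}}(\theta)
  \;+\; \lambda\, \hat{I}(Z; Y),
  \qquad
  \hat{I}(Z; Y) \;=\; \hat{h}(Z) \;-\; \hat{h}(Z \mid Y).
\]
```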
1 code implementation • CVPR 2022 • Malik Boudiaf, Romain Mueller, Ismail Ben Ayed, Luca Bertinetto
An interesting and practical paradigm is online test-time adaptation, in which training data is inaccessible, no labelled data from the test distribution is available, and adaptation can only happen at test time, on a handful of samples.
3 code implementations • 23 Jun 2021 • Malik Boudiaf, Ziko Imtiaz Masud, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Pablo Piantanida
We motivate our transductive loss by deriving a formal relation between the classification accuracy and mutual-information maximization.
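The quantity at the heart of this relation is the empirical mutual information between query features and predicted labels, which splits into a marginal-entropy term (encouraging balanced class usage) minus a conditional-entropy term (encouraging confident predictions):

```latex
% Empirical MI between query features X_Q and predicted labels Y_Q.
\[
  \hat{I}(X_Q; Y_Q) \;=\; \hat{H}(Y_Q) \;-\; \hat{H}(Y_Q \mid X_Q).
\]
```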
1 code implementation • 16 Jun 2021 • Imtiaz Masud Ziko, Malik Boudiaf, Jose Dolz, Eric Granger, Ismail Ben Ayed
Surprisingly, we found that even standard clustering procedures (e.g., K-means), which correspond to particular, non-regularized cases of our general model, already achieve competitive performance compared to the state of the art in few-shot learning.
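A hedged sketch of that simple baseline: K-means on the query features of a few-shot task, with cluster centers initialized at the support-set prototypes so that clusters stay aligned with class labels. Details are illustrative.

```python
import torch

def kmeans_few_shot(support_x, support_y, query_x, n_way, iters=20):
    # Initialize centers at the labelled class prototypes.
    centers = torch.stack([support_x[support_y == c].mean(0)
                           for c in range(n_way)])
    for _ in range(iters):
        labels = torch.cdist(query_x, centers).argmin(1)
        centers = torch.stack([query_x[labels == c].mean(0)
                               if (labels == c).any() else centers[c]
                               for c in range(n_way)])
    return labels
```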
1 code implementation • 12 Jun 2021 • Marine Picot, Francisco Messina, Malik Boudiaf, Fabrice Labeau, Ismail Ben Ayed, Pablo Piantanida
Adversarial robustness has become a topic of growing interest in machine learning since it was observed that neural networks tend to be brittle.
2 code implementations • CVPR 2021 • Malik Boudiaf, Hoel Kervadec, Ziko Imtiaz Masud, Pablo Piantanida, Ismail Ben Ayed, Jose Dolz
We show that the way inference is performed in few-shot segmentation tasks has a substantial effect on performance, an aspect often overlooked in the literature in favor of the meta-learning paradigm (a sketch of such test-time inference follows below).
Ranked #3 on Few-Shot Semantic Segmentation on COCO-20i (10-shot)
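A hedged sketch of the kind of transductive test-time inference at stake in the entry above: fit a small pixel classifier on the labelled support pixels, regularized by statistics of the unlabelled query pixels (prediction confidence and predicted foreground proportion). Names, weights, and the binary fg/bg setup are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def transductive_seg_loss(logits_s, mask_s, logits_q, prior_fg=0.5):
    # logits_s: (N_s, 2) support-pixel logits; mask_s: (N_s,) labels in {0, 1}.
    # logits_q: (N_q, 2) query-pixel logits; class 1 = foreground (assumption).
    ce = F.cross_entropy(logits_s, mask_s)                # support fidelity
    probs_q = logits_q.softmax(dim=1)
    cond_ent = -(probs_q * probs_q.clamp_min(1e-12).log()).sum(1).mean()
    fg_prop = probs_q[:, 1].mean()                        # predicted fg proportion
    # Bernoulli KL between predicted proportion and a foreground prior.
    kl_prior = (fg_prop * (fg_prop / prior_fg).log()
                + (1 - fg_prop) * ((1 - fg_prop) / (1 - prior_fg)).log())
    return ce + cond_ent + kl_prior
```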
1 code implementation • NeurIPS 2020 • Malik Boudiaf, Imtiaz Ziko, Jérôme Rony, Jose Dolz, Pablo Piantanida, Ismail Ben Ayed
We introduce Transductive Information Maximization (TIM) for few-shot learning.
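A hedged sketch of a TIM-style objective: supervised cross-entropy on the support set combined with the mutual-information surrogate on the query set (marginal entropy maximized, conditional entropy minimized). Weightings and optimization details follow the paper only loosely.

```python
import torch
import torch.nn.functional as F

def tim_loss(logits_s, y_s, logits_q, alpha=1.0, beta=1.0):
    ce = F.cross_entropy(logits_s, y_s)                   # support supervision
    probs_q = logits_q.softmax(dim=1)
    cond_ent = -(probs_q * probs_q.clamp_min(1e-12).log()).sum(1).mean()
    marginal = probs_q.mean(0)
    marg_ent = -(marginal * marginal.clamp_min(1e-12).log()).sum()
    # Minimizing this maximizes the MI surrogate marg_ent - cond_ent.
    return ce - alpha * marg_ent + beta * cond_ent
```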
2 code implementations • 25 Aug 2020 • Malik Boudiaf, Ziko Imtiaz Masud, Jérôme Rony, José Dolz, Pablo Piantanida, Ismail Ben Ayed
We introduce Transductive Information Maximization (TIM) for few-shot learning.
1 code implementation • ECCV 2020 • Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Eric Granger, Marco Pedersoli, Pablo Piantanida, Ismail Ben Ayed
Second, we show that, more generally, minimizing the cross-entropy is actually equivalent to maximizing the mutual information, to which we connect several well-known pairwise losses (a sketch of this identity follows below).
Ranked #12 on Metric Learning on CARS196 (using extra training data)
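The equivalence stated above rests on a standard identity: with the label entropy fixed by the data, the cross-entropy upper-bounds the conditional entropy of labels given embeddings, so minimizing it maximizes a lower bound on the mutual information between embeddings and labels.

```latex
% Z: learned embeddings, Y: labels; H(Y) is a constant of the data.
\[
  I(Z; Y) \;=\; H(Y) \;-\; H(Y \mid Z),
  \qquad
  H(Y \mid Z) \;\le\; \mathrm{CE},
\]
```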