2 code implementations • 28 Jun 2020 • Imtiaz Masud Ziko, Jose Dolz, Eric Granger, Ismail Ben Ayed
Our transductive inference does not re-train the base model, and can be viewed as a graph clustering of the query set, subject to supervision constraints from the support set.
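A minimal sketch of what such an inference could look like (illustrative only, not the authors' released code; the function name, the k-NN affinity construction, and the weight `lam` are assumptions): soft labels on the query set are iterated to a fixed point that trades off distance to support-defined class prototypes against smoothness over a query-graph affinity, with no network re-training.

```python
import numpy as np

def _softmax(logits):
    logits = logits - logits.max(1, keepdims=True)
    e = np.exp(logits)
    return e / e.sum(1, keepdims=True)

def transductive_assign(query, prototypes, n_neighbors=3, lam=1.0, n_iter=20):
    """Soft-assign query features to support-defined prototypes, with a
    Laplacian-style smoothness term over a k-NN graph of the query set."""
    # Unary term: squared distance of each query point to each prototype.
    d = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # Pairwise term: binary k-NN affinities among query points.
    dq = ((query[:, None, :] - query[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(dq, np.inf)
    w = np.zeros_like(dq)
    nn = np.argsort(dq, axis=1)[:, :n_neighbors]
    w[np.repeat(np.arange(len(query)), n_neighbors), nn.ravel()] = 1.0
    # Fixed-point updates: neighbors pull each point's soft label toward
    # theirs, while the unary term keeps it close to its nearest prototype.
    y = _softmax(-d)
    for _ in range(n_iter):
        y = _softmax(-d + lam * (w @ y))
    return y
```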
1 code implementation • ECCV 2020 • Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Eric Granger, Marco Pedersoli, Pablo Piantanida, Ismail Ben Ayed
Second, we show that, more generally, minimizing the cross-entropy is actually equivalent to maximizing the mutual information, to which we connect several well-known pairwise losses.
Ranked #12 on Metric Learning on CARS196 (using extra training data)
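One way to unpack the cross-entropy/mutual-information equivalence stated above (a standard information-theoretic reading in our notation, not necessarily the paper's derivation): with $z$ the embedding, $y$ the label, $p$ the true posterior, and $q$ the model's posterior,

```latex
\begin{align}
  \mathcal{I}(z; y) &= \mathcal{H}(y) - \mathcal{H}(y \mid z), \\
  \mathbb{E}\!\left[-\log q(y \mid z)\right]
    &= \mathcal{H}(y \mid z)
       + \mathbb{E}_{z}\!\left[\mathrm{KL}\big(p(\cdot \mid z)\,\|\,q(\cdot \mid z)\big)\right]
    \;\ge\; \mathcal{H}(y \mid z).
\end{align}
```

Since the label entropy $\mathcal{H}(y)$ is fixed by the dataset, driving the cross-entropy, an upper bound on $\mathcal{H}(y \mid z)$, down pushes the mutual information $\mathcal{I}(z; y)$ up.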
1 code implementation • 16 Jun 2021 • Imtiaz Masud Ziko, Malik Boudiaf, Jose Dolz, Eric Granger, Ismail Ben Ayed
Surprisingly, we found that even standard clustering procedures (e.g., K-means), which correspond to particular, non-regularized cases of our general model, already achieve competitive performance compared to the state of the art in few-shot learning.
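A runnable sketch of that non-regularized baseline under plain assumptions (function and argument names are ours, not the paper's): K-means on fixed features, with centroids seeded at, and re-anchored by, the per-class support means.

```python
import numpy as np

def kmeans_few_shot(support, y_support, query, n_iter=10):
    """Plain K-means on fixed features, seeded by per-class support
    means (the prototypes)."""
    classes = np.unique(y_support)
    centroids = np.stack([support[y_support == c].mean(0) for c in classes])
    for _ in range(n_iter):
        # Assign each query point to its nearest centroid ...
        d = ((query[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # ... then refit centroids on labeled support + assigned queries.
        for k, c in enumerate(classes):
            members = np.concatenate([support[y_support == c],
                                      query[assign == k]])
            centroids[k] = members.mean(0)
    return classes[assign]
```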
1 code implementation • NeurIPS 2018 • Imtiaz Masud Ziko, Eric Granger, Ismail Ben Ayed
Furthermore, we show that the density modes can be obtained as byproducts of the assignment variables via simple maximum-value operations whose additional computational cost is linear in the number of data points.
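A hedged illustration of that byproduct (our notation, not the paper's exact update: `w` is the affinity matrix the clustering already computes, `z` the soft assignment variables): each cluster's mode falls out of one maximum-value operation per cluster, so the step beyond the assignments is linear in the number of points.

```python
import numpy as np

def cluster_modes(x, w, z):
    """x: (N, D) points; w: (N, N) precomputed affinities;
    z: (N, K) soft assignments. Returns one mode per cluster."""
    # Within-cluster kernel-density proxy for every point and cluster.
    scores = w @ z
    # The mode of cluster k is simply the member point maximizing its score.
    return {k: x[scores[:, k].argmax()] for k in range(z.shape[1])}
```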
1 code implementation • 19 Jun 2019 • Imtiaz Masud Ziko, Eric Granger, Jing Yuan, Ismail Ben Ayed
We derive a general tight upper bound based on a concave-convex decomposition of our fairness term, its Lipschitz-gradient property, and Pinsker's inequality.
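For reference, the last ingredient in its standard form (our notation, not the paper's): for two distributions $P$ and $Q$ on the same support, Pinsker's inequality gives

```latex
\begin{equation}
  \|P - Q\|_{1} \;\le\; \sqrt{2\,\mathrm{KL}(P \,\|\, Q)},
  \qquad\text{equivalently}\qquad
  \mathrm{TV}(P, Q) \;\le\; \sqrt{\tfrac{1}{2}\,\mathrm{KL}(P \,\|\, Q)},
\end{equation}
```

the standard device for trading an $\ell_1$ discrepancy against a KL divergence term.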
no code implementations • 13 Apr 2023 • Imtiaz Masud Ziko, Freddy Lecue, Ismail Ben Ayed
We introduce a simple non-linear embedding adaptation layer, which is fine-tuned on top of fixed pre-trained features for one-shot tasks, significantly improving transductive entropy-based inference in low-shot regimes.
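A minimal sketch of that recipe (all names, the two-layer architecture, and the entropy weight `lam` are assumptions, not the paper's released code): a small non-linear layer is fine-tuned per task on frozen features, with cross-entropy on the support set plus an entropy penalty that sharpens transductive predictions on the query set.

```python
import torch
import torch.nn.functional as F

def adapt_and_predict(z_s, y_s, z_q, n_classes, dim, steps=50, lam=0.1):
    """z_s / z_q: frozen support / query features; y_s: support labels."""
    layer = torch.nn.Sequential(
        torch.nn.Linear(dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim))
    head = torch.nn.Linear(dim, n_classes)
    opt = torch.optim.Adam(
        list(layer.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        # Supervised term: fit the one-shot support set.
        ce = F.cross_entropy(head(layer(z_s)), y_s)
        # Transductive term: low entropy on (unlabeled) query predictions.
        p_q = F.softmax(head(layer(z_q)), dim=1)
        ent = -(p_q * p_q.clamp_min(1e-12).log()).sum(1).mean()
        (ce + lam * ent).backward()
        opt.step()
    with torch.no_grad():
        return head(layer(z_q)).argmax(1)
```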