1 code implementation • ECCV 2018 • Matthias Müller, Adel Bibi, Silvio Giancola, Salman Al-Subaihi, Bernard Ghanem
In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild.
1 code implementation • 24 Jul 2019 • Adel Bibi, Ali Alqahtani, Bernard Ghanem
Extensive experiments on both synthetic and real data demonstrate that (1) when utilizing a single category of constraints, the proposed model is superior to or competitive with SOTA constrained clustering models, and (2) when utilizing both categories of constraints jointly, the proposed model outperforms the single-category case.
1 code implementation • 2 Feb 2024 • Hasan Abed Al Kader Hammoud, Hani Itani, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem
We present SynthCLIP, a novel framework for training CLIP models with entirely synthetic text-image pairs, significantly departing from previous methods relying on real data.
1 code implementation • ECCV 2020 • Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Adel Bibi, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
We revisit the benefits of merging classical vision concepts with deep learning models.
1 code implementation • 2 Feb 2022 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania
Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
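The single-step attack at the heart of this failure mode can be sketched as follows. This is a minimal, illustrative FGSM on a toy linear classifier f(x) = w·x with margin loss, not the paper's training setup; all names here are hypothetical.

```python
# Minimal FGSM sketch: perturb the input by eps * sign(grad of the loss w.r.t. x).
# For loss = -y * (w . x), the input gradient is simply -y * w.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_linear(x, y, w, eps):
    """Single-step FGSM on a linear model: x_adv = x + eps * sign(-y * w)."""
    grad = [-y * wi for wi in w]
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [0.5, -1.0]          # toy weights
x = [1.0, 1.0]           # clean input with label y = 1
x_adv = fgsm_linear(x, y=1.0, w=w, eps=0.1)
print(x_adv)             # perturbed input, moved against the classifier's margin
```

Adversarial training with only this one-step attack is what can suddenly stop transferring to multi-step attacks, the Catastrophic Overfitting phenomenon the paper studies.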
1 code implementation • 13 Jun 2020 • Motasem Alfarra, Juan C. Pérez, Adel Bibi, Ali Thabet, Pablo Arbeláez, Bernard Ghanem
This paper studies how encouraging semantically-aligned features during deep neural network training can increase network robustness.
1 code implementation • CVPR 2023 • Ameya Prabhu, Hasan Abed Al Kader Hammoud, Puneet Dokania, Philip H. S. Torr, Ser-Nam Lim, Bernard Ghanem, Adel Bibi
Our conclusions are consistent across different numbers of stream time steps, e.g., 20 to 200, and under several computational budgets.
1 code implementation • 7 Feb 2024 • Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Kai Shu, Adel Bibi, Ziniu Hu, Philip Torr, Bernard Ghanem, Guohao Li
In addition, we probe into the biases in agent trust and the differences in agent trust towards agents and humans.
1 code implementation • CVPR 2023 • Yasir Ghunaim, Adel Bibi, Kumail Alhamoud, Motasem Alfarra, Hasan Abed Al Kader Hammoud, Ameya Prabhu, Philip H. S. Torr, Bernard Ghanem
We show that a simple baseline outperforms state-of-the-art CL methods under this evaluation, questioning the applicability of existing methods in realistic settings.
1 code implementation • ICML Workshop AML 2021 • Motasem Alfarra, Juan C. Pérez, Ali Thabet, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
Deep neural networks are vulnerable to small input perturbations known as adversarial attacks.
2 code implementations • 2 Jul 2021 • Motasem Alfarra, Adel Bibi, Naeemullah Khan, Philip H. S. Torr, Bernard Ghanem
Deep neural networks are vulnerable to input deformations in the form of vector fields of pixel displacements and to other parameterized geometric deformations, e.g., translations, rotations, etc.
1 code implementation • 9 Jul 2021 • Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, Adel Bibi
Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale.
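The core certification step of randomized smoothing can be sketched in a few lines. This is a simplified, illustrative version of the standard Gaussian-smoothing certificate (radius sigma * Phi^-1(p_A), where p_A is the estimated probability of the top class under noise); it is not this paper's contribution, and the function name is hypothetical.

```python
# Hedged sketch of the standard randomized-smoothing certificate: if the base
# classifier returns class A with probability p_a under N(0, sigma^2 I) noise,
# the smoothed classifier is provably constant within radius sigma * Phi^{-1}(p_a).
from statistics import NormalDist

def certified_radius(p_a: float, sigma: float) -> float:
    """Certified l2 radius of the smoothed classifier; valid only when p_a > 0.5."""
    if p_a <= 0.5:
        return 0.0  # abstain: no certificate can be issued
    return sigma * NormalDist().inv_cdf(p_a)

# Higher confidence under noise -> larger certified radius.
print(round(certified_radius(0.99, sigma=0.5), 3))
```

In practice p_a is itself a high-confidence lower bound estimated by Monte Carlo sampling, which is where the "at scale" cost comes from.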
1 code implementation • ICCV 2023 • Hasan Abed Al Kader Hammoud, Ameya Prabhu, Ser-Nam Lim, Philip H. S. Torr, Adel Bibi, Bernard Ghanem
We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples.
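The metric being revisited can be made concrete with a small sketch: evaluate each incoming stream sample before the model would train on it, then average. This is an illustrative next-sample version of online accuracy with a stub predictor; all names are hypothetical, and the paper's exact windowing may differ.

```python
# Sketch of online accuracy: each (x, y) in the stream is scored *before* the
# learner would update on it, so the metric measures immediate adaptation.

def online_accuracy(stream, predict):
    """Fraction of stream samples predicted correctly at arrival time."""
    correct = 0
    for x, y in stream:
        correct += int(predict(x) == y)
    return correct / len(stream)

stream = [(0, 0), (1, 1), (2, 0), (3, 1)]   # toy (input, label) stream
predict = lambda x: x % 2                   # toy model: parity of the input
print(online_accuracy(stream, predict))
```

The paper's point is that a model can score highly on this metric by exploiting label correlations in the immediate stream without genuinely learning.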
1 code implementation • 20 Mar 2024 • Hasan Abed Al Kader Hammoud, Tuhin Das, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem
We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget.
1 code implementation • 16 Jun 2022 • Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H. S. Torr
Through extensive experiments we analyze this novel phenomenon and discover that the presence of these easy features induces a learning shortcut that leads to CO. Our findings provide new insights into the mechanisms of CO and improve our understanding of the dynamics of AT.
1 code implementation • 24 Apr 2019 • Modar Alfadly, Adel Bibi, Bernard Ghanem
Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit poorly understood, erratic behaviours.
1 code implementation • 29 Feb 2024 • Ameya Prabhu, Vishaal Udandarao, Philip Torr, Matthias Bethge, Adel Bibi, Samuel Albanie
However, with repeated testing, the risk of overfitting grows as algorithms over-exploit benchmark idiosyncrasies.
2 code implementations • 28 May 2019 • Salman Al-Subaihi, Adel Bibi, Modar Alfadly, Abdullah Hamdi, Bernard Ghanem
In this paper, we closely examine the bounds of a block of layers composed in the form of Affine-ReLU-Affine.
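Interval propagation through such a block can be sketched directly. This is a minimal, illustrative interval-bound propagation through Affine-ReLU-Affine, not the paper's tighter bounds; the function names are hypothetical.

```python
# Coordinate-wise interval arithmetic through Affine -> ReLU -> Affine.
# For an affine map x -> Wx + b, each output bound picks the interval endpoint
# that extremizes the term, depending on the sign of the weight.

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> Wx + b."""
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        out_lo.append(bi + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row)))
        out_hi.append(bi + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row)))
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to interval endpoints."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Affine -> ReLU -> Affine on the input box [-1, 1]^2.
lo, hi = affine_interval([-1.0, -1.0], [1.0, 1.0], [[1.0, -1.0]], [0.0])
lo, hi = relu_interval(lo, hi)
lo, hi = affine_interval(lo, hi, [[2.0]], [0.5])
print(lo, hi)
```

Naive propagation like this is cheap but loose, which is exactly why bounds for the composed block are worth examining more closely.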
1 code implementation • NeurIPS 2023 • Aleksandar Petrov, Emanuele La Malfa, Philip H. S. Torr, Adel Bibi
Recent language models have shown impressive multilingual performance, even when not explicitly trained for it.
1 code implementation • 30 Oct 2023 • Aleksandar Petrov, Philip H. S. Torr, Adel Bibi
Context-based fine-tuning methods, including prompting, in-context learning, soft prompting (also known as prompt tuning), and prefix-tuning, have gained popularity due to their ability to often match the performance of full fine-tuning with a fraction of the parameters.
no code implementations • CVPR 2018 • Adel Bibi, Modar Alfadly, Bernard Ghanem
Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks.
no code implementations • ICLR 2019 • Adel Bibi, Bernard Ghanem, Vladlen Koltun, Rene Ranftl
In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex optimization objective with a single iteration of a $\tau$-nice Proximal Stochastic Gradient method.
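The forward pass in question can be sketched concretely. This is only the dropout -> linear -> nonlinearity computation itself, with made-up toy values; the equivalence to one $\tau$-nice proximal stochastic gradient iteration is the paper's result and is not reproduced here.

```python
# Inverted-dropout forward pass followed by a linear layer and ReLU.
# Kept units are rescaled by 1/(1-p) so the expected activation is unchanged.
import random

def dropout_linear_relu(x, W, b, p, rng):
    keep = [0.0 if rng.random() < p else 1.0 / (1.0 - p) for _ in x]
    h = [xi * ki for xi, ki in zip(x, keep)]                 # dropout
    z = [bi + sum(w * hj for w, hj in zip(row, h))           # linear layer
         for row, bi in zip(W, b)]
    return [max(0.0, zi) for zi in z]                        # ReLU

rng = random.Random(0)  # fixed seed so the dropout mask is reproducible
print(dropout_linear_relu([1.0, 2.0], [[0.5, -0.25]], [0.1], p=0.5, rng=rng))
```

Viewing this stochastic layer as one iteration of a proximal stochastic gradient method on a convex objective is what lets the paper reason about dropout through the lens of optimization.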
no code implementations • CVPR 2016 • Adel Bibi, Tianzhu Zhang, Bernard Ghanem
In this paper, we present a part-based sparse tracker in a particle filter framework where both the motion and appearance model are formulated in 3D.
no code implementations • CVPR 2016 • Tianzhu Zhang, Adel Bibi, Bernard Ghanem
Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework.
no code implementations • CVPR 2017 • Adel Bibi, Hani Itani, Bernard Ghanem
Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g., on a GPU).
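The kind of element-wise update that makes such parallelization trivial can be illustrated with the classic soft-thresholding operator, the per-coordinate proximal step used in Lasso-type solvers. This is an illustrative sketch, not FFTLasso itself.

```python
# Soft-thresholding: the proximal operator of lam * ||.||_1, applied
# independently to every coordinate -- hence trivially parallelizable.

def soft_threshold(v, lam):
    """Shrink each entry toward zero by lam; entries within lam become zero."""
    return [max(abs(x) - lam, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

print(soft_threshold([1.5, -0.2, 0.7], lam=0.5))  # each entry updated independently
```

Because no coordinate depends on any other, the whole update maps directly onto one GPU thread per element.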
no code implementations • ICCV 2017 • Adel Bibi, Bernard Ghanem
Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and classification tool in the computer vision and machine learning communities.
no code implementations • ICLR 2020 • Modar Alfadly, Adel Bibi, Muhammed Kocabas, Bernard Ghanem
In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input.
no code implementations • 20 Feb 2020 • Motasem Alfarra, Adel Bibi, Hasan Hammoud, Mohamed Gaafar, Bernard Ghanem
Our main finding is that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes.
no code implementations • 21 Jun 2020 • Modar Alfadly, Adel Bibi, Emilio Botero, Salman AlSubaihi, Bernard Ghanem
This has incited research on the reaction of DNNs to noisy input, namely developing adversarial input attacks and training strategies that make DNNs robust to these attacks.
no code implementations • 1 Jan 2021 • Motasem Alfarra, Adel Bibi, Hasan Abed Al Kader Hammoud, Mohamed Gaafar, Bernard Ghanem
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linearity activations.
no code implementations • 8 Dec 2020 • Motasem Alfarra, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
In this work, we revisit Gaussian randomized smoothing and show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification radius for the construction of the smooth classifier.
no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania
In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.
no code implementations • 25 Sep 2019 • Motasem Alfarra, Adel Bibi, Hasan Hammoud, Mohamed Gaafar, Bernard Ghanem
We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine).
no code implementations • 25 Sep 2019 • Salman AlSubaihi, Adel Bibi, Modar Alfadly, Abdullah Hamdi, Bernard Ghanem
This work builds on the prior observation that bounded input intervals can be inexpensively propagated from layer to layer through deep networks.
no code implementations • 20 Jul 2022 • Tim Franzmeyer, Stephen Mcaleer, João F. Henriques, Jakob N. Foerster, Philip H. S. Torr, Adel Bibi, Christian Schroeder de Witt
Autonomous agents deployed in the real world need to be robust against adversarial attacks on sensory inputs.
no code implementations • 26 Sep 2022 • Botos Csaba, Adel Bibi, Yanwei Li, Philip Torr, Ser-Nam Lim
Deep learning models for vision tasks are trained on large datasets under the assumption that there exists a universal representation that can be used to make predictions for all samples.
no code implementations • 29 Nov 2022 • Motasem Alfarra, Zhipeng Cai, Adel Bibi, Bernard Ghanem, Matthias Müller
This work explores the problem of Online Domain-Incremental Continual Segmentation (ODICS), where the model is continually trained over batches of densely labeled images from different domains, with limited computation and no information about the task boundaries.
no code implementations • 23 Mar 2023 • Hasan Abed Al Kader Hammoud, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
In this paper we investigate the frequency sensitivity of Deep Neural Networks (DNNs) when presented with clean samples versus poisoned samples.
no code implementations • 25 Apr 2023 • Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi
Improving and guaranteeing the robustness of deep learning models has been a topic of intense research.
no code implementations • 17 May 2023 • Francisco Eiras, Adel Bibi, Rudy Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar
Recent work provides promising evidence that Physics-informed neural networks (PINN) can efficiently solve partial differential equations (PDE).
no code implementations • 20 Oct 2023 • Francisco Eiras, Kemal Oksuz, Adel Bibi, Philip H. S. Torr, Puneet K. Dokania
Referring Image Segmentation (RIS), the problem of identifying objects in images through natural language sentences, is a challenging task currently mostly solved through supervised learning.
no code implementations • 19 Nov 2023 • Ameya Prabhu, Hasan Abed Al Kader Hammoud, Ser-Nam Lim, Bernard Ghanem, Philip H. S. Torr, Adel Bibi
Continual Learning (CL) often relies on the availability of extensive annotated datasets, an assumption that is unrealistically time-consuming and costly in practice.
no code implementations • 1 Dec 2023 • Botos Csaba, Wenxuan Zhang, Matthias Müller, Ser-Nam Lim, Mohamed Elhoseiny, Philip Torr, Adel Bibi
We introduce a new continual learning framework with explicit modeling of the label delay between data and label streams over time steps.
no code implementations • 22 Feb 2024 • Aleksandar Petrov, Philip H. S. Torr, Adel Bibi
Despite the widespread adoption of prompting, prompt tuning and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited.