Search Results for author: Adel Bibi

Found 44 papers, 20 papers with code

On Pretraining Data Diversity for Self-Supervised Learning

1 code implementation • 20 Mar 2024 • Hasan Abed Al Kader Hammoud, Tuhin Das, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem

We explore the impact of training with more diverse datasets, characterized by the number of unique samples, on the performance of self-supervised learning (SSL) under a fixed computational budget.

Self-Supervised Learning

Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress

1 code implementation • 29 Feb 2024 • Ameya Prabhu, Vishaal Udandarao, Philip Torr, Matthias Bethge, Adel Bibi, Samuel Albanie

However, with repeated testing, the risk of overfitting grows as algorithms over-exploit benchmark idiosyncrasies.

Benchmarking

Prompting a Pretrained Transformer Can Be a Universal Approximator

no code implementations • 22 Feb 2024 • Aleksandar Petrov, Philip H. S. Torr, Adel Bibi

Despite the widespread adoption of prompting, prompt tuning and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited.

SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training?

1 code implementation • 2 Feb 2024 • Hasan Abed Al Kader Hammoud, Hani Itani, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem

We present SynthCLIP, a novel framework for training CLIP models with entirely synthetic text-image pairs, significantly departing from previous methods relying on real data.

Label Delay in Continual Learning

no code implementations • 1 Dec 2023 • Botos Csaba, Wenxuan Zhang, Matthias Müller, Ser-Nam Lim, Mohamed Elhoseiny, Philip Torr, Adel Bibi

We introduce a new continual learning framework with explicit modeling of the label delay between data and label streams over time steps.

Continual Learning

From Categories to Classifier: Name-Only Continual Learning by Exploring the Web

no code implementations • 19 Nov 2023 • Ameya Prabhu, Hasan Abed Al Kader Hammoud, Ser-Nam Lim, Bernard Ghanem, Philip H. S. Torr, Adel Bibi

Continual Learning (CL) often relies on the availability of extensive annotated datasets, an assumption that is unrealistic in practice, where annotation is time-consuming and costly.

Continual Learning Image Classification +1

When Do Prompting and Prefix-Tuning Work? A Theory of Capabilities and Limitations

1 code implementation • 30 Oct 2023 • Aleksandar Petrov, Philip H. S. Torr, Adel Bibi

Context-based fine-tuning methods, including prompting, in-context learning, soft prompting (also known as prompt tuning), and prefix-tuning, have gained popularity due to their ability to often match the performance of full fine-tuning with a fraction of the parameters.

In-Context Learning
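The common thread in these context-based methods is that the model weights stay frozen and only a short sequence of continuous "virtual token" embeddings is learned and prepended to the input. A minimal numpy sketch of that prepending step (function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def prepend_soft_prompt(prompt_embeds, token_embeds):
    """Soft prompting / prompt tuning in a nutshell: the frozen model
    attends over a few learnable virtual-token embeddings concatenated
    in front of the real input embeddings."""
    return np.concatenate([prompt_embeds, token_embeds], axis=0)

d = 8
tokens = np.random.default_rng(0).normal(size=(5, d))  # 5 real input tokens
prompt = np.zeros((3, d))                              # 3 learnable virtual tokens
x = prepend_soft_prompt(prompt, tokens)
assert x.shape == (8, d)  # the model now sees prompt + input as one sequence
```

Only the `prompt` array would receive gradients during tuning; everything downstream is unchanged.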

Segment, Select, Correct: A Framework for Weakly-Supervised Referring Segmentation

no code implementations • 20 Oct 2023 • Francisco Eiras, Kemal Oksuz, Adel Bibi, Philip H. S. Torr, Puneet K. Dokania

Referring Image Segmentation (RIS), the problem of identifying objects in images through natural language sentences, is a challenging task currently mostly solved through supervised learning.

Image Segmentation Semantic Segmentation +1

Provably Correct Physics-Informed Neural Networks

no code implementations • 17 May 2023 • Francisco Eiras, Adel Bibi, Rudy Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar

Recent work provides promising evidence that physics-informed neural networks (PINNs) can efficiently solve partial differential equations (PDEs).

Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right?

1 code implementation • ICCV 2023 • Hasan Abed Al Kader Hammoud, Ameya Prabhu, Ser-Nam Lim, Philip H. S. Torr, Adel Bibi, Bernard Ghanem

We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy, which measures the accuracy of the model on the immediate next few samples.

Continual Learning

Certifying Ensembles: A General Certification Theory with S-Lipschitzness

no code implementations • 25 Apr 2023 • Aleksandar Petrov, Francisco Eiras, Amartya Sanyal, Philip H. S. Torr, Adel Bibi

Improving and guaranteeing the robustness of deep learning models has been a topic of intense research.

Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs

no code implementations • 23 Mar 2023 • Hasan Abed Al Kader Hammoud, Adel Bibi, Philip H. S. Torr, Bernard Ghanem

In this paper we investigate the frequency sensitivity of Deep Neural Networks (DNNs) when presented with clean samples versus poisoned samples.

Computationally Budgeted Continual Learning: What Does Matter?

1 code implementation • CVPR 2023 • Ameya Prabhu, Hasan Abed Al Kader Hammoud, Puneet Dokania, Philip H. S. Torr, Ser-Nam Lim, Bernard Ghanem, Adel Bibi

Our conclusions are consistent across different numbers of stream time steps, e.g., 20 to 200, and under several computational budgets.

Continual Learning

Real-Time Evaluation in Online Continual Learning: A New Hope

1 code implementation • CVPR 2023 • Yasir Ghunaim, Adel Bibi, Kumail Alhamoud, Motasem Alfarra, Hasan Abed Al Kader Hammoud, Ameya Prabhu, Philip H. S. Torr, Bernard Ghanem

We show that a simple baseline outperforms state-of-the-art CL methods under this evaluation, questioning the applicability of existing methods in realistic settings.

Continual Learning

SimCS: Simulation for Domain Incremental Online Continual Segmentation

no code implementations • 29 Nov 2022 • Motasem Alfarra, Zhipeng Cai, Adel Bibi, Bernard Ghanem, Matthias Müller

This work explores the problem of Online Domain-Incremental Continual Segmentation (ODICS), where the model is continually trained over batches of densely labeled images from different domains, with limited computation and no information about the task boundaries.

Autonomous Driving Continual Learning +2

Diversified Dynamic Routing for Vision Tasks

no code implementations • 26 Sep 2022 • Botos Csaba, Adel Bibi, Yanwei Li, Philip Torr, Ser-Nam Lim

Deep learning models for vision tasks are trained on large datasets under the assumption that there exists a universal representation that can be used to make predictions for all samples.

Instance Segmentation object-detection +2

Catastrophic overfitting can be induced with discriminative non-robust features

1 code implementation • 16 Jun 2022 • Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H. S. Torr

Through extensive experiments, we analyze this novel phenomenon and discover that the presence of these easy features induces a learning shortcut that leads to CO. Our findings provide new insights into the mechanisms of CO and improve our understanding of the dynamics of AT.

Robust classification

Make Some Noise: Reliable and Efficient Single-Step Adversarial Training

1 code implementation • 2 Feb 2022 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania

Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
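For context, the single-step attack under discussion can be sketched as FGSM with a random start inside the perturbation ball, the ingredient that noise-based variants build on. A minimal numpy illustration (a generic noise-initialized FGSM, with a toy linear "gradient"; not the paper's exact recipe):

```python
import numpy as np

def fgsm_with_noise(x, grad_fn, epsilon, rng):
    """Single-step attack: start from a random point in the epsilon-ball,
    take one signed-gradient step, and project back into the ball."""
    delta = rng.uniform(-epsilon, epsilon, size=x.shape)  # random initialization
    g = grad_fn(x + delta)                                # loss gradient at the perturbed point
    delta = delta + epsilon * np.sign(g)                  # FGSM step
    delta = np.clip(delta, -epsilon, epsilon)             # stay inside the epsilon-ball
    return x + delta

# Toy example: the "loss" is a linear score w·x, so its gradient is w.
w = np.array([0.5, -2.0, 1.0])
x = np.array([1.0, 1.0, 1.0])
adv = fgsm_with_noise(x, lambda z: w, epsilon=0.1, rng=np.random.default_rng(0))
assert np.max(np.abs(adv - x)) <= 0.1 + 1e-12
```

Catastrophic Overfitting is the failure mode where a model trained only against such single-step perturbations suddenly loses robustness to multi-step (e.g. PGD) attacks.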

Towards fast and effective single-step adversarial training

no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania

In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.

ANCER: Anisotropic Certification via Sample-wise Volume Maximization

1 code implementation • 9 Jul 2021 • Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, Adel Bibi

Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale.

DeformRS: Certifying Input Deformations with Randomized Smoothing

2 code implementations • 2 Jul 2021 • Motasem Alfarra, Adel Bibi, Naeemullah Khan, Philip H. S. Torr, Bernard Ghanem

Deep neural networks are vulnerable to input deformations in the form of vector fields of pixel displacements and to other parameterized geometric deformations, e.g., translations, rotations, etc.

On the Decision Boundaries of Neural Networks. A Tropical Geometry Perspective

no code implementations • 1 Jan 2021 • Motasem Alfarra, Adel Bibi, Hasan Abed Al Kader Hammoud, Mohamed Gaafar, Bernard Ghanem

This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linearity activations.

Network Pruning

Data-Dependent Randomized Smoothing

no code implementations • 8 Dec 2020 • Motasem Alfarra, Adel Bibi, Philip H. S. Torr, Bernard Ghanem

In this work, we revisit Gaussian randomized smoothing and show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification radius for the construction of the smooth classifier.
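For context, the smoothed classifier referenced here predicts the class most likely under Gaussian perturbations of the input, g(x) = argmax_c P(f(x + ε) = c) with ε ~ N(0, σ²I), usually estimated by Monte Carlo sampling. A minimal sketch with a fixed σ (the paper's contribution is optimizing σ per input; names and the toy classifier are mine):

```python
import numpy as np

def smoothed_predict(f, x, sigma, n_samples, rng):
    """Monte Carlo estimate of the Gaussian-smoothed classifier:
    classify many noisy copies of x and return the majority vote."""
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = f(noisy)
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)

# Toy base classifier: class 1 iff the first coordinate is positive.
f = lambda z: int(z[0] > 0)
rng = np.random.default_rng(0)
pred = smoothed_predict(f, np.array([2.0, 0.0]), sigma=0.5, n_samples=200, rng=rng)
assert pred == 1  # x[0] = 2 is far from the boundary, so noise rarely flips it
```

A larger σ yields larger certified radii but can hurt clean accuracy, which is why tuning σ per input is attractive.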

Network Moments: Extensions and Sparse-Smooth Attacks

no code implementations • 21 Jun 2020 • Modar Alfadly, Adel Bibi, Emilio Botero, Salman AlSubaihi, Bernard Ghanem

This has incited research on the reaction of DNNs to noisy input, namely developing adversarial input attacks and strategies that lead to robust DNNs to these attacks.

Rethinking Clustering for Robustness

1 code implementation • 13 Jun 2020 • Motasem Alfarra, Juan C. Pérez, Adel Bibi, Ali Thabet, Pablo Arbeláez, Bernard Ghanem

This paper studies how encouraging semantically-aligned features during deep neural network training can increase network robustness.

Clustering

On the Decision Boundaries of Neural Networks: A Tropical Geometry Perspective

no code implementations • 20 Feb 2020 • Motasem Alfarra, Adel Bibi, Hasan Hammoud, Mohamed Gaafar, Bernard Ghanem

Our main finding is that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes.

Network Pruning
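As a sketch of the underlying structure (my notation, relying only on the standard fact that any piecewise-linear function is a difference of two convex piecewise-linear ones): the scalar output of an Affine-ReLU-Affine network can be written as a difference of two tropical (max-plus) polynomials,

```latex
f(x) = H(x) - Q(x), \qquad
H(x) = \max_{i}\,\bigl\{ a_i^{\top} x + b_i \bigr\}, \qquad
Q(x) = \max_{j}\,\bigl\{ c_j^{\top} x + d_j \bigr\},
```

so the decision boundary $\{x : f(x) = 0\} = \{x : H(x) = Q(x)\}$ lies in a tropical hypersurface, which the paper relates to the polytope formed by the convex hull of two zonotopes.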

Analytical Moment Regularizer for Training Robust Networks

no code implementations • ICLR 2020 • Modar Alfadly, Adel Bibi, Muhammed Kocabas, Bernard Ghanem

In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input.

Data Augmentation

On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective

no code implementations • 25 Sep 2019 • Motasem Alfarra, Adel Bibi, Hasan Hammoud, Mohamed Gaafar, Bernard Ghanem

We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine).

Network Pruning

Expected Tight Bounds for Robust Deep Neural Network Training

no code implementations • 25 Sep 2019 • Salman AlSubaihi, Adel Bibi, Modar Alfadly, Abdullah Hamdi, Bernard Ghanem

Prior work showed that bounded input intervals can be inexpensively propagated from layer to layer through deep networks.

Constrained Clustering: General Pairwise and Cardinality Constraints

1 code implementation • 24 Jul 2019 • Adel Bibi, Ali Alqahtani, Bernard Ghanem

Extensive experiments on both synthetic and real data demonstrate when: (1) utilizing a single category of constraint, the proposed model is superior to or competitive with SOTA constrained clustering models, and (2) utilizing both categories of constraints jointly, the proposed model shows better performance than the case of the single category.

Constrained Clustering

Expected Tight Bounds for Robust Training

2 code implementations • 28 May 2019 • Salman Al-Subaihi, Adel Bibi, Modar Alfadly, Abdullah Hamdi, Bernard Ghanem

In this paper, we closely examine the bounds of a block of layers composed in the form of Affine-ReLU-Affine.

Deep Layers as Stochastic Solvers

no code implementations • ICLR 2019 • Adel Bibi, Bernard Ghanem, Vladlen Koltun, Rene Ranftl

In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex optimization objective with a single iteration of a $\tau$-nice Proximal Stochastic Gradient method.

Analytical Moment Regularizer for Gaussian Robust Networks

1 code implementation • 24 Apr 2019 • Modar Alfadly, Adel Bibi, Bernard Ghanem

Despite the impressive performance of deep neural networks (DNNs) on numerous vision tasks, they still exhibit behaviours that remain poorly understood.

Data Augmentation

Analytic Expressions for Probabilistic Moments of PL-DNN With Gaussian Input

no code implementations • CVPR 2018 • Adel Bibi, Modar Alfadly, Bernard Ghanem

Moreover, we show how these expressions can be used to systematically construct targeted and non-targeted adversarial attacks.

Image Classification

High Order Tensor Formulation for Convolutional Sparse Coding

no code implementations • ICCV 2017 • Adel Bibi, Bernard Ghanem

Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community.

Video Reconstruction Vocal Bursts Intensity Prediction

FFTLasso: Large-Scale LASSO in the Fourier Domain

no code implementations • CVPR 2017 • Adel Bibi, Hani Itani, Bernard Ghanem

Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g., on a GPU).

Dimensionality Reduction Face Recognition +2
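The element-wise structure comes from a standard identity: circular convolution is diagonalized by the DFT, so convolution in the signal domain becomes an element-wise product in the Fourier domain. A small numpy check of that identity (illustrative only, not the paper's solver):

```python
import numpy as np

def circ_conv(a, b):
    """Direct circular convolution, O(n^2), for reference."""
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, -1.0, 0.0, 2.0])

# Convolution theorem: FFT, multiply element-wise, inverse FFT.
via_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
assert np.allclose(circ_conv(a, b), via_fft)
```

Because the product is element-wise per frequency, a LASSO subproblem expressed in this domain decouples across frequencies, which is what makes the massive parallelism possible.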

3D Part-Based Sparse Tracker With Automatic Synchronization and Registration

no code implementations • CVPR 2016 • Adel Bibi, Tianzhu Zhang, Bernard Ghanem

In this paper, we present a part-based sparse tracker in a particle filter framework where both the motion and appearance model are formulated in 3D.

Occlusion Handling

In Defense of Sparse Tracking: Circulant Sparse Tracker

no code implementations • CVPR 2016 • Tianzhu Zhang, Adel Bibi, Bernard Ghanem

Sparse representation has been introduced to visual tracking by finding the best target candidate with minimal reconstruction error within the particle filter framework.

Visual Tracking
