1 code implementation • 19 Dec 2023 • Angela Castillo, Jonas Kohler, Juan C. Pérez, Juan Pablo Pérez, Albert Pumarola, Bernard Ghanem, Pablo Arbeláez, Ali Thabet
Our findings provide insights into the efficiency of the conditional denoising process, contributing to the more practical and rapid deployment of text-conditioned diffusion models.
no code implementations • 15 Jun 2023 • Juan C. Pérez, Sara Rojas, Jesus Zarzar, Bernard Ghanem
We found that introducing image augmentations during training presents challenges, such as geometric and photometric inconsistencies, for learning NRMs from images.
1 code implementation • 10 Apr 2023 • Motasem Alfarra, Hani Itani, Alejandro Pardo, Shyma Alhuwaider, Merey Ramazanova, Juan C. Pérez, Zhipeng Cai, Matthias Müller, Bernard Ghanem
To address this issue, we propose a more realistic evaluation protocol for TTA methods, where data is received in an online fashion from a constant-speed data stream, thereby accounting for the method's adaptation speed.
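The protocol above can be sketched as a simple evaluation loop: a method slower than the stream misses the batches that arrive while it is still adapting and must label them with the frozen model. This is a minimal illustrative sketch, not the paper's benchmark code; the function names and the speed ratio `r` are assumptions.

```python
import math

def evaluate_online(stream, adapt_and_predict, predict_only, r):
    """Constant-speed online evaluation of a TTA method.

    r = (method's per-batch processing time) / (stream interval).
    If r > 1, each adapted batch costs r ticks, so the method skips the
    next ceil(r) - 1 incoming batches and labels them without adapting.
    """
    correct, total, skip = 0, 0, 0
    for x, y in stream:
        if skip > 0:
            pred = predict_only(x)   # arrived while the method was busy
            skip -= 1
        else:
            pred = adapt_and_predict(x)
            skip = math.ceil(r) - 1  # batches missed during adaptation
        correct += int(pred == y)
        total += 1
    return correct / total
```

With `r = 1` the method keeps up with the stream and is scored on every batch; with `r = 2` half the batches are labeled by the non-adapted predictor, so a slow-but-accurate method can score worse than a fast one.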
1 code implementation • 6 Jun 2022 • Motasem Alfarra, Juan C. Pérez, Egor Shulgin, Peter Richtárik, Bernard Ghanem
However, as in the single-node supervised learning setup, models trained in federated learning are vulnerable to imperceptible input transformations known as adversarial attacks, which calls their deployment in security-related applications into question.
1 code implementation • CVPR 2022 • Gabriel Pérez S., Juan C. Pérez, Motasem Alfarra, Silvio Giancola, Bernard Ghanem
In this work, we propose 3DeformRS, a method to certify the robustness of point cloud Deep Neural Networks (DNNs) against real-world deformations.
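3DeformRS builds on randomized smoothing, which certifies a classifier by voting over Gaussian noise, here applied in the space of deformation parameters (e.g. rotation angles) rather than raw pixels. Below is a minimal Monte-Carlo sketch of that idea under a Cohen et al.-style bound; the classifier `f` and parameterization are illustrative assumptions, and the confidence correction is omitted for brevity.

```python
import math
from statistics import NormalDist

def certified_radius(f, params, sigma, n_samples, rng):
    """Estimate the smoothed prediction and a certified L2 radius in
    deformation-parameter space: radius = sigma * Phi^{-1}(p_top)."""
    votes = {}
    for _ in range(n_samples):
        # Classify under Gaussian noise on the deformation parameters.
        noisy = [p + rng.gauss(0.0, sigma) for p in params]
        c = f(noisy)
        votes[c] = votes.get(c, 0) + 1
    top = max(votes, key=votes.get)
    p_top = votes[top] / n_samples
    if p_top <= 0.5:
        return top, 0.0  # abstain: no certificate
    # Clamp away from 1.0 so the inverse CDF stays finite.
    p_top = min(p_top, 1.0 - 1.0 / n_samples)
    return top, sigma * NormalDist().inv_cdf(p_top)
```

The returned radius guarantees the smoothed prediction is constant for any deformation whose parameters lie within that L2 ball of the clean ones.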
no code implementations • 10 Feb 2022 • Juan C. Pérez, Motasem Alfarra, Ali Thabet, Pablo Arbeláez, Bernard Ghanem
We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input.
1 code implementation • 31 Jan 2022 • Motasem Alfarra, Juan C. Pérez, Anna Frühstück, Philip H. S. Torr, Peter Wonka, Bernard Ghanem
Finally, we show that the FID can be robustified by simply replacing the standard Inception with a robust Inception.
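The FID itself is a Fréchet distance between two Gaussians fitted to embedded features; robustifying it only swaps the standard Inception embedder for an adversarially robust one, leaving the distance unchanged. A minimal sketch for 1-D features, where the matrix square root reduces to a scalar (names and the 1-D simplification are illustrative, not the paper's implementation):

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Fréchet distance between N(mu1, var1) and N(mu2, var2):
    (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2)."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

def fid_1d(feats_real, feats_fake):
    """FID over scalar embeddings: fit a Gaussian to each set, compare."""
    def stats(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, v
    return frechet_distance_1d(*stats(feats_real), *stats(feats_fake))
```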
1 code implementation • 25 Aug 2021 • Angela Castillo, María Escobar, Juan C. Pérez, Andrés Romero, Radu Timofte, Luc van Gool, Pablo Arbeláez
Instead of learning a dataset-specific degradation, we employ adversarial attacks to create difficult examples that target the model's weaknesses.
1 code implementation • 29 Jul 2021 • Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks.
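A canonical way to craft such imperceptible perturbations is the fast gradient sign method (FGSM), which steps each input coordinate by epsilon in the sign of the loss gradient. A hedged sketch on a linear classifier, where the gradient is analytic (the model and epsilon are illustrative, not the paper's attack):

```python
def score(x, w, b):
    """Linear classifier score s = w.x + b; sign(s) is the prediction."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def _sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def fgsm_linear(x, w, y, eps):
    """FGSM against a linear model: the margin loss -y*(w.x + b) has
    gradient -y*w w.r.t. x, so step eps in its sign direction to push
    the score away from the true label y in {-1, +1}."""
    return [xi + eps * _sign(-y * wi) for xi, wi in zip(x, w)]
```

Even a small per-coordinate budget can flip the prediction once the step aligns with the model's weight vector.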
2 code implementations • 9 Jul 2021 • Laura Daza, Juan C. Pérez, Pablo Arbeláez
The reliability of Deep Learning systems depends not only on their accuracy but also on their robustness against adversarial perturbations to the input data.
1 code implementation • ICML Workshop AML 2021 • Motasem Alfarra, Juan C. Pérez, Ali Thabet, Adel Bibi, Philip H. S. Torr, Bernard Ghanem
Deep neural networks are vulnerable to small input perturbations known as adversarial attacks.
1 code implementation • 13 Jun 2020 • Motasem Alfarra, Juan C. Pérez, Adel Bibi, Ali Thabet, Pablo Arbeláez, Bernard Ghanem
This paper studies how encouraging semantically-aligned features during deep neural network training can increase network robustness.
1 code implementation • ECCV 2020 • Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Adel Bibi, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
We revisit the benefits of merging classical vision concepts with deep learning models.
2 code implementations • ECCV 2018 • Edgar Margffoy-Tuay, Juan C. Pérez, Emilio Botero, Pablo Arbeláez
We address the problem of segmenting an object given a natural language expression that describes it.