Search Results for author: Guillermo Ortiz-Jiménez

Found 8 papers, 7 papers with code

Catastrophic overfitting is a bug but also a feature

1 code implementation · 16 Jun 2022 · Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Grégory Rogez, Philip H. S. Torr

Despite clear computational advantages in building robust neural networks, adversarial training (AT) using single-step methods is unstable as it suffers from catastrophic overfitting (CO): Networks gain non-trivial robustness during the first stages of adversarial training, but suddenly reach a breaking point where they quickly lose all robustness in just a few iterations.
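The single-step attacks in question perturb each input once along the sign of the loss gradient (FGSM-style). A minimal sketch of that step, using a toy linear model whose input gradient is known in closed form (the model and values are illustrative, not from the paper):

```python
import numpy as np

def fgsm_perturbation(grad_x, eps):
    """Single-step (FGSM-style) perturbation: move each input
    coordinate by eps in the direction that increases the loss."""
    return eps * np.sign(grad_x)

# Toy linear model: loss(x) = -y * (w @ x), so grad_x loss = -y * w.
w = np.array([0.5, -2.0, 1.0])
x = np.array([1.0, 1.0, 1.0])
y = 1.0
grad_x = -y * w
x_adv = x + fgsm_perturbation(grad_x, eps=0.1)
```

Adversarial training repeats this cheap one-step attack at every iteration, which is where the computational advantage over multi-step methods comes from.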

On the benefits of knowledge distillation for adversarial robustness

no code implementations · 14 Mar 2022 · Javier Maroto, Guillermo Ortiz-Jiménez, Pascal Frossard

To that end, we present Adversarial Knowledge Distillation (AKD), a new framework to improve a model's robust performance, which consists of adversarially training a student on a mixture of the original labels and the teacher outputs.
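The "mixture of the original labels and the teacher outputs" can be sketched as an interpolated training target. A minimal illustration (the mixing weight `lam` and all values are assumptions; in AKD this loss would be evaluated on adversarial examples):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(target_probs, logits):
    """Cross-entropy of the student's prediction against a soft target."""
    return -np.sum(target_probs * np.log(softmax(logits) + 1e-12))

# Interpolate between the one-hot label and the teacher's soft output.
lam = 0.5
one_hot = np.array([0.0, 1.0, 0.0])
teacher_probs = softmax(np.array([1.0, 2.0, 0.5]))
mixed_target = lam * one_hot + (1 - lam) * teacher_probs

student_logits = np.array([0.2, 1.5, 0.1])
loss = cross_entropy(mixed_target, student_logits)
```

Because both components are probability distributions, the mixed target remains a valid distribution for any `lam` in [0, 1].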

Adversarial Robustness · Knowledge Distillation

PRIME: A few primitives can boost robustness to common corruptions

1 code implementation · 27 Dec 2021 · Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Despite their impressive performance on image classification tasks, deep networks have a hard time generalizing to unforeseen corruptions of their data.
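One family of primitives the title alludes to operates in the frequency domain. A hedged sketch of such a primitive, perturbing an image with a random spectral filter (the function name and filter family here are illustrative, not the paper's exact construction):

```python
import numpy as np

def random_spectral_primitive(img, strength, rng):
    """Illustrative augmentation primitive: rescale each frequency
    component of the image by a random gain around 1."""
    F = np.fft.fft2(img)
    gain = 1.0 + strength * rng.standard_normal(F.shape)
    return np.real(np.fft.ifft2(F * gain))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
aug = random_spectral_primitive(img, strength=0.1, rng=rng)
```

With `strength=0` the primitive is the identity; increasing it widens the distribution of corruptions the network sees during training.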

Data Augmentation · Domain Generalization +1

A Structured Dictionary Perspective on Implicit Neural Representations

1 code implementation · CVPR 2022 · Gizem Yüce, Guillermo Ortiz-Jiménez, Beril Besbinar, Pascal Frossard

Leveraging results from harmonic analysis and deep learning theory, we show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of the set of initial mapping frequencies.
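The "integer harmonics of the set of initial mapping frequencies" can be made concrete: the atoms live at frequencies expressible as integer combinations of the initial frequencies. A small sketch of that combinatorial structure (the bound `max_order` and the exact set are illustrative assumptions, not the paper's precise characterization):

```python
import itertools

def integer_harmonics(init_freqs, max_order):
    """Frequencies reachable as |sum_i k_i * w_i| with integer k_i
    and total order sum_i |k_i| <= max_order."""
    freqs = set()
    ranges = [range(-max_order, max_order + 1)] * len(init_freqs)
    for ks in itertools.product(*ranges):
        if sum(abs(k) for k in ks) <= max_order:
            freqs.add(abs(sum(k * w for k, w in zip(ks, init_freqs))))
    return sorted(freqs)

# Initial mapping frequencies {1, 3}; second-order combinations.
harmonics = integer_harmonics([1, 3], max_order=2)
```

Note that frequencies absent from this set (here, e.g., 5) cannot appear in the representation, which is the inductive-bias claim in dictionary form.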

Dictionary Learning · Inductive Bias +2

What can linearized neural networks actually say about generalization?

1 code implementation · NeurIPS 2021 · Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

For certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization, but for the networks used in practice, the empirical NTK only provides a rough first-order approximation.
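The linearized network behind the empirical NTK is just a first-order Taylor expansion of the model in its parameters around initialization, f_lin(x; θ) = f(x; θ0) + ∇θ f(x; θ0)·(θ − θ0). A toy model that is nonlinear in its parameters shows why this is only a rough approximation (the model and numbers are illustrative assumptions):

```python
import numpy as np

def f(x, theta):
    # Toy model, nonlinear in its second parameter.
    return theta[0] * x + theta[1] ** 2 * x

def f_lin(x, theta, theta0):
    """First-order Taylor expansion of f in theta around theta0."""
    grad = np.array([x, 2 * theta0[1] * x])  # gradient of f w.r.t. theta at theta0
    return f(x, theta0) + grad @ (theta - theta0)

theta0 = np.array([1.0, 1.0])
theta = np.array([1.0, 2.0])
x = 1.0
exact = f(x, theta)
linear = f_lin(x, theta, theta0)
```

Close to initialization the two agree; far from it they diverge, which is exactly the gap between the empirical NTK picture and the networks used in practice.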

On the choice of graph neural network architectures

2 code implementations · 13 Nov 2019 · Clément Vignac, Guillermo Ortiz-Jiménez, Pascal Frossard

Seminal works on graph neural networks have primarily targeted semi-supervised node classification problems with few observed labels and high-dimensional signals.
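A single propagation step of the graph-convolution architecture commonly used in that semi-supervised setting mixes each node's features with its neighbors' via a normalized adjacency: H' = D^{-1/2}(A + I)D^{-1/2} H W. A minimal sketch on a 3-node path graph (toy graph and weights are assumptions):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # 3-node path graph
A_hat = A + np.eye(3)                    # add self-loops
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d)) # symmetric normalization
H = np.eye(3)                            # one-hot node features
W = np.ones((3, 2))                      # toy weight matrix
H_next = A_norm @ H @ W                  # one message-passing layer
```

Stacking a few such layers spreads the few observed labels across the graph, which is why this family works with sparse supervision.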

Node Classification

Sampling and Reconstruction of Signals on Product Graphs

2 code implementations · 30 Jun 2018 · Guillermo Ortiz-Jiménez, Mario Coutino, Sundeep Prabhakar Chepuri, Geert Leus

In this paper, we consider the problem of subsampling and reconstruction of signals that reside on the vertices of a product graph, such as sensor network time series, genomic signals, or product ratings in a social network.
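The key structural fact such methods exploit is that a Cartesian product graph (e.g. a sensor network graph times a time-axis path graph) has a Laplacian that factorizes, L = L1 ⊗ I + I ⊗ L2, so its eigenvalues are all pairwise sums of the factor eigenvalues. A sketch verifying this on two small path graphs (the graphs are illustrative):

```python
import numpy as np

L1 = np.array([[1., -1.],
               [-1., 1.]])               # path graph on 2 nodes
L2 = np.array([[1., -1., 0.],
               [-1., 2., -1.],
               [0., -1., 1.]])           # path graph on 3 nodes

# Cartesian product Laplacian via Kronecker structure.
L = np.kron(L1, np.eye(3)) + np.kron(np.eye(2), L2)

ev = np.sort(np.linalg.eigvalsh(L))
ev_sum = np.sort(np.add.outer(np.linalg.eigvalsh(L1),
                              np.linalg.eigvalsh(L2)).ravel())
```

Because the spectrum separates per factor, sampling and reconstruction can reason about each small factor graph instead of the full product.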

Active Learning Recommendation Systems +1

Sparse Sampling for Inverse Problems with Tensors

2 code implementations · 28 Jun 2018 · Guillermo Ortiz-Jiménez, Mario Coutino, Sundeep Prabhakar Chepuri, Geert Leus

We consider the problem of designing sparse sampling strategies for multidomain signals, which can be represented using tensors that admit a known multilinear decomposition.
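When the signal has per-domain structure, a sampler can likewise be designed per domain and combined as a Kronecker product, shrinking the design space from n1·n2 joint candidates to n1 + n2 per-domain choices. A hedged sketch (the helper and the chosen indices are illustrative, not the paper's design algorithm):

```python
import numpy as np

def selection_matrix(n, keep):
    """Row-selection matrix keeping the listed indices."""
    return np.eye(n)[list(keep)]

S1 = selection_matrix(4, [0, 2])      # keep rows 0 and 2 in domain 1
S2 = selection_matrix(3, [1])         # keep row 1 in domain 2
S = np.kron(S1, S2)                   # joint sampler over the product

x = np.arange(12.0)                   # vectorized 4x3 multidomain signal
samples = S @ x                       # picks entries (0,1) and (2,1)
```

The joint sampler never has to be stored explicitly; applying S1 and S2 along their own modes of the tensor gives the same samples.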

Information Theory · Signal Processing
