Search Results for author: Fabian Latorre

Found 9 papers, 1 paper with code

Adversarial Training Should Be Cast as a Non-Zero-Sum Game

no code implementations • 19 Jun 2023 • Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher

One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of data.

OTW: Optimal Transport Warping for Time Series

no code implementations • 1 Jun 2023 • Fabian Latorre, Chenghao Liu, Doyen Sahoo, Steven C. H. Hoi

Dynamic Time Warping (DTW) has become the pragmatic choice for measuring distance between time series.
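As background for the DTW baseline mentioned above, a minimal sketch of the classic dynamic-programming DTW distance between two 1-D sequences; the function and variable names are our own, not from the paper:

```python
def dtw_distance(a, b):
    """Return the classic DTW distance between 1-D sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal alignment cost of a[:i] against b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]
```

The quadratic cost of this recursion is one practical drawback of DTW that motivates alternative distances such as the optimal-transport warping proposed in the paper.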

Tasks: Clustering, Dynamic Time Warping

Controlling the Complexity and Lipschitz Constant improves polynomial nets

no code implementations ICLR 2022 Zhenyu Zhu, Fabian Latorre, Grigorios G Chrysos, Volkan Cevher

While the class of Polynomial Nets demonstrates comparable performance to neural networks (NN), it currently has neither theoretical generalization characterization nor robustness guarantees.

The Effect of the Intrinsic Dimension on the Generalization of Quadratic Classifiers

no code implementations NeurIPS 2021 Fabian Latorre, Leello Tadesse Dadi, Paul Rolland, Volkan Cevher

We demonstrate this by deriving an upper bound on the Rademacher Complexity that depends on two key quantities: (i) the intrinsic dimension, which is a measure of isotropy, and (ii) the largest eigenvalue of the second moment (covariance) matrix of the distribution.

Linear Convergence of SGD on Overparametrized Shallow Neural Networks

no code implementations • 29 Sep 2021 • Paul Rolland, Ali Ramezani-Kebrya, ChaeHwan Song, Fabian Latorre, Volkan Cevher

Despite the non-convex landscape, first-order methods can be shown to reach global minima when training overparameterized neural networks, where the number of parameters far exceeds the number of training data.

Efficient Proximal Mapping of the 1-path-norm of Shallow Networks

no code implementations • 2 Jul 2020 • Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher

We demonstrate two new important properties of the 1-path-norm of shallow neural networks.

Lipschitz constant estimation of Neural Networks via sparse polynomial optimization

no code implementations ICLR 2020 Fabian Latorre, Paul Rolland, Volkan Cevher

We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks.
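For context on what bounds like LiPopt's improve upon, a minimal sketch of the standard norm-product upper bound on the Lipschitz constant of a feed-forward network with 1-Lipschitz activations (e.g. ReLU). This is the well-known naive baseline that tighter methods aim to beat, not LiPopt itself, and the weight matrices are hypothetical:

```python
import numpy as np

def norm_product_bound(weights):
    """Naive Lipschitz upper bound: L <= prod_k ||W_k||_2.

    Valid for networks x -> W_L s(... s(W_1 x)) whose activations s
    are 1-Lipschitz (ReLU, tanh, ...). Generally loose in practice.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # spectral norm = largest singular value
    return bound
```

The looseness of this product bound, which compounds across layers, is what motivates polynomial-optimization approaches that account for interactions between layers.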
