no code implementations • ICML 2020 • Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher
We demonstrate two new important properties of the path-norm regularizer for shallow neural networks.
no code implementations • 19 Jun 2023 • Alexander Robey, Fabian Latorre, George J. Pappas, Hamed Hassani, Volkan Cevher
One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of data.
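As a concrete illustration of the two-player zero-sum view (a minimal sketch, not the method proposed in the paper): the inner player perturbs each input within an L-infinity ball, and the outer player takes a gradient step on the loss at the perturbed points. The linear-model setting, FGSM-style inner step, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(w, x, y, eps):
    # For a linear model with logistic loss, the worst-case perturbation in an
    # L-infinity ball of radius eps is eps * sign(dL/dx) = eps * sign(-y * w).
    return x + eps * np.sign(-y * w)

def adversarial_train(X, y, eps=0.1, lr=0.5, steps=300):
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    for _ in range(steps):
        # Inner player: adversarially perturb each training point.
        X_adv = np.stack([fgsm_perturb(w, x, yi, eps) for x, yi in zip(X, y)])
        # Outer player: gradient step on the logistic loss at the perturbed points.
        margins = y * (X_adv @ w)
        grad = -(X_adv * (y * sigmoid(-margins))[:, None]).mean(axis=0)
        w -= lr * grad
    return w
```

For a linear model the inner maximization has this closed form; for deep networks it is typically approximated with several projected gradient steps.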
no code implementations • 1 Jun 2023 • Fabian Latorre, Chenghao Liu, Doyen Sahoo, Steven C. H. Hoi
Dynamic Time Warping (DTW) has become the pragmatic choice for measuring distance between time series.
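For readers unfamiliar with DTW, the standard dynamic program behind it can be sketched in a few lines (this is the textbook algorithm, not the paper's contribution): D[i, j] holds the minimal cumulative cost of aligning the first i points of one series with the first j of the other.

```python
import numpy as np

def dtw(a, b):
    # Classic O(len(a) * len(b)) dynamic program for Dynamic Time Warping.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # A warping path may advance in either series or both at once.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Unlike the Euclidean distance, DTW tolerates local stretching: `dtw([1, 2, 3], [1, 2, 2, 3])` is 0 because the repeated 2 is absorbed by the alignment.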
no code implementations • ICLR 2022 • Zhenyu Zhu, Fabian Latorre, Grigorios G Chrysos, Volkan Cevher
While the class of Polynomial Nets achieves performance comparable to neural networks (NNs), it currently lacks both a theoretical characterization of its generalization and robustness guarantees.
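To fix ideas, a polynomial net expresses its output as a polynomial of the input, e.g. via Hadamard (elementwise) products of linear maps. The degree-2 form below is a hedged sketch of this general idea; the specific parameterization, names, and shapes are illustrative assumptions, not the architecture studied in the paper.

```python
import numpy as np

def poly_net_degree2(x, A, B, C, c):
    # Degree-2 polynomial model of the input: a Hadamard product of two
    # linear maps, recombined linearly, plus a degree-1 (linear) term.
    return C @ ((A @ x) * (B @ x)) + c @ x
```

With `A = B = I`, `C = [1, 1]`, and `c = 0`, this computes the quadratic `x1**2 + x2**2`, illustrating how such models capture feature interactions without elementwise nonlinear activations.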
no code implementations • NeurIPS 2021 • Fabian Latorre, Leello Tadesse Dadi, Paul Rolland, Volkan Cevher
We demonstrate this by deriving an upper bound on the Rademacher Complexity that depends on two key quantities: (i) the intrinsic dimension, which is a measure of isotropy, and (ii) the largest eigenvalue of the second moment (covariance) matrix of the distribution.
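Both quantities in the bound are cheap to estimate from samples. A common way to measure intrinsic dimension is the stable-rank-style ratio trace(Sigma) / lambda_max(Sigma) of the second-moment matrix; whether this matches the paper's exact definition is an assumption, so treat the sketch below as illustrative.

```python
import numpy as np

def intrinsic_dimension(X):
    # Second-moment matrix of the row-wise samples in X.
    Sigma = (X.T @ X) / len(X)
    eig = np.linalg.eigvalsh(Sigma)
    # Ratio of trace to largest eigenvalue: d for isotropic data,
    # close to 1 when the mass concentrates along one direction.
    return eig.sum() / eig.max(), eig.max()
```

For perfectly isotropic data in d dimensions the ratio equals d, and it shrinks toward 1 as the distribution becomes more anisotropic.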
no code implementations • 29 Sep 2021 • Paul Rolland, Ali Ramezani-Kebrya, ChaeHwan Song, Fabian Latorre, Volkan Cevher
Despite the non-convex landscape, first-order methods can be shown to reach global minima when training overparameterized neural networks, where the number of parameters far exceeds the number of training data points.
1 code implementation • 3 Nov 2020 • Zhaodong Sun, Thomas Sanchez, Fabian Latorre, Volkan Cevher
When the noise level is small, this does not substantially mitigate the overfitting problem.
no code implementations • 2 Jul 2020 • Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher
We demonstrate two new important properties of the 1-path-norm of shallow neural networks.
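For a one-hidden-layer network f(x) = v^T sigma(W x), the 1-path-norm sums |v_j| * |W_{j,i}| over every input-to-output path. The sketch below illustrates this quantity under that standard shallow parameterization (an assumption on notation, not code from the paper); it exploits the factorization over paths to avoid enumerating them.

```python
import numpy as np

def one_path_norm(W, v):
    # Sum over all paths i -> j -> output of |v_j| * |W[j, i]|,
    # which factorizes as sum_j |v_j| * ||W[j, :]||_1.
    return float(np.abs(v) @ np.abs(W).sum(axis=1))
```

Note the contrast with layerwise norm products: the 1-path-norm couples the two layers per hidden unit, which is what makes its proximal mapping nontrivial.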
no code implementations • ICLR 2020 • Fabian Latorre, Paul Rolland, Volkan Cevher
We introduce LiPopt, a polynomial optimization framework for computing increasingly tighter upper bounds on the Lipschitz constant of neural networks.
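For context, the baseline that LiPopt improves upon is easy to state: for a network with 1-Lipschitz activations, the product of layerwise spectral norms upper-bounds the Lipschitz constant. The sketch below shows this coarse bound only; it is not the LiPopt polynomial-optimization method itself, whose bounds are tighter.

```python
import numpy as np

def naive_lipschitz_bound(weights):
    # Product of spectral norms: a valid but typically loose upper bound on
    # the Lipschitz constant of a network with 1-Lipschitz activations.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))
```

The looseness comes from ignoring how activations gate the layers jointly; polynomial-optimization formulations like LiPopt account for such interactions.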