no code implementations • 7 Feb 2023 • Hao Liang, Josue Ortega Caro, Vikram Maheshri, Ankit B. Patel, Guha Balakrishnan
Our framework is experimental, in that we train several versions of a network, each with an intervention on a specific hyperparameter, and measure the resulting causal effect of that choice on performance bias when a particular out-of-distribution image perturbation is applied.
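The experimental logic described in the abstract can be sketched roughly as follows — a hedged toy illustration, not the paper's code. The function names, the "width" hyperparameter, and the stand-in training routine are all invented for illustration; the real framework trains actual networks and measures bias under a real perturbation.

```python
import numpy as np

def train_and_measure_bias(width, seed):
    # Stand-in for: train a network with the given hyperparameter value,
    # then measure its performance bias under a fixed out-of-distribution
    # perturbation. Here it is a noisy deterministic function of width.
    r = np.random.default_rng(seed)
    return 0.3 - 0.02 * width + 0.01 * r.standard_normal()

def causal_effect(width_a, width_b, n_seeds=20):
    # Intervene on one hyperparameter, holding everything else (seeds)
    # fixed, and estimate the causal effect as a difference in means.
    a = [train_and_measure_bias(width_a, s) for s in range(n_seeds)]
    b = [train_and_measure_bias(width_b, s) for s in range(n_seeds)]
    return np.mean(b) - np.mean(a)

effect = causal_effect(width_a=4, width_b=8)
```

Because the same seeds are reused across both interventions, the seed-level noise cancels and the difference in means isolates the effect of the hyperparameter change — the same matched-comparison idea the abstract describes.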
1 code implementation • 31 Jan 2023 • Antonio H. de O. Fonseca, Emanuele Zappala, Josue Ortega Caro, David van Dijk
Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning.
no code implementations • 17 Oct 2022 • Syed Asad Rizvi, Nhi Nguyen, Haoran Lyu, Benjamin Christensen, Josue Ortega Caro, Antonio H. O. Fonseca, Emanuele Zappala, Maryam Bagherian, Christopher Averill, Chadi G. Abdallah, Amin Karbasi, Rex Ying, Maria Brbic, Rahul Madhav Dhodapkar, David van Dijk
Foundation models have revolutionized the landscape of Deep Learning (DL), serving as a versatile platform which can be adapted to a wide range of downstream tasks.
1 code implementation • 30 Sep 2022 • Emanuele Zappala, Antonio Henrique de Oliveira Fonseca, Josue Ortega Caro, David van Dijk
In this paper, we introduce Neural Integral Equations (NIE), a method that learns an unknown integral operator from data through an IE solver.
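The core numerical idea behind an IE solver — solving an integral equation whose kernel is, in NIE, a learned neural network — can be sketched with a fixed toy kernel. This is a minimal hedged illustration of Picard (fixed-point) iteration on a discretized Fredholm equation of the second kind, not the paper's method; the kernel and source term here are arbitrary choices.

```python
import numpy as np

# Solve y(t) = f(t) + \int_0^1 K(t, s) y(s) ds on a uniform grid.
n = 100
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

K = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]))  # toy kernel K(t, s)
f = np.sin(2 * np.pi * t)                            # toy source term f(t)

# Picard iteration: repeatedly apply the discretized integral operator.
y = f.copy()
for _ in range(100):
    y = f + (K * dt) @ y

# The fixed point satisfies the discretized equation to high accuracy.
residual = np.max(np.abs(y - (f + (K * dt) @ y)))
```

In NIE the kernel would be parameterized by a network and trained so that solutions of the learned equation match observed data; the iteration above is only the inner solve.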
1 code implementation • 16 Mar 2022 • Nikos Karantzas, Emma Besier, Josue Ortega Caro, Xaq Pitkow, Andreas S. Tolias, Ankit B. Patel, Fabio Anselmi
Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place.
no code implementations • 4 Aug 2020 • Justin Sahs, Ryan Pyle, Aneel Damaraju, Josue Ortega Caro, Onur Tavaslioglu, Andy Lu, Ankit Patel
Our implicit regularization results are complementary to recent work arXiv:1906.07842, done independently, which showed that initialization scale critically controls implicit regularization via a kernel-based argument.
no code implementations • 19 Jun 2020 • Josue Ortega Caro, Yilong Ju, Ryan Pyle, Sourav Dey, Wieland Brendel, Fabio Anselmi, Ankit Patel
Inspired by theoretical work on linear full-width convolutional models, we hypothesize that the local (i.e. bounded-width) convolutional operations commonly used in current neural networks are implicitly biased to learn high-frequency features, and that this is one of the root causes of high-frequency adversarial examples.
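A quick way to build intuition for the frequency content of bounded-width filters is to inspect their frequency responses. This hedged illustration (not the paper's analysis) contrasts a width-3 difference filter, whose energy concentrates at high spatial frequencies, with a width-3 averaging filter, whose energy concentrates at low frequencies.

```python
import numpy as np

def response(kernel, n=256):
    # Magnitude of the DFT of the zero-padded kernel: the filter's
    # frequency response sampled at n points.
    return np.abs(np.fft.rfft(kernel, n))

hi = response(np.array([1.0, -2.0, 1.0]))   # Laplacian-like difference filter
lo = response(np.array([0.25, 0.5, 0.25]))  # smoothing (averaging) filter

# The difference filter peaks at the Nyquist frequency (last rfft bin),
# the smoothing filter at DC (bin 0).
peak_hi, peak_lo = int(np.argmax(hi)), int(np.argmax(lo))
```

Which of these extremes a trained network's learned filters resemble is an empirical question; the abstract's hypothesis is that local convolutions are biased toward the high-frequency end.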
no code implementations • 25 Sep 2019 • Justin Sahs, Aneel Damaraju, Ryan Pyle, Onur Tavaslioglu, Josue Ortega Caro, Hao Yang Lu, Ankit Patel
Despite their popularity and successes, deep neural networks are poorly understood theoretically and treated as 'black box' systems.
1 code implementation • 7 Jun 2017 • Hanlin Tang, Martin Schrimpf, Bill Lotter, Charlotte Moerman, Ana Paredes, Josue Ortega Caro, Walter Hardesty, David Cox, Gabriel Kreiman
First, subjects robustly recognized objects even when rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking.