Search Results for author: Roy Frostig

Found 15 papers, 3 papers with code

Learning from many trajectories

no code implementations 31 Mar 2022 Stephen Tu, Roy Frostig, Mahdi Soltanolkotabi

Specifically, we establish that the worst-case error rate of this problem is $\Theta(n / (mT))$ whenever $m \gtrsim n$.

Learning Theory

Efficient and Modular Implicit Differentiation

1 code implementation NeurIPS 2021 Mathieu Blondel, Quentin Berthet, Marco Cuturi, Roy Frostig, Stephan Hoyer, Felipe Llinares-López, Fabian Pedregosa, Jean-Philippe Vert

In this paper, we propose automatic implicit differentiation, an efficient and modular approach for implicit differentiation of optimization problems.
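The core idea can be illustrated with the implicit function theorem: if $x^\star(\theta)$ is defined by an optimality condition $F(x, \theta) = 0$, then $\mathrm{d}x^\star/\mathrm{d}\theta = -(\partial F/\partial x)^{-1}\,\partial F/\partial \theta$, so the solver itself is never differentiated through. A minimal pure-Python sketch on a hypothetical scalar problem (the function names and the toy objective are illustrative, not the paper's API):

```python
def solve(theta, lam):
    # Hypothetical inner problem: minimize (x - theta)**2 + lam * x**2.
    # Closed-form here; in general this could be any black-box solver.
    return theta / (1.0 + lam)

def implicit_grad(theta, lam):
    # Optimality condition F(x, theta) = 2*(x - theta) + 2*lam*x = 0
    # holds at x = solve(theta, lam). By the implicit function theorem,
    # dx*/dtheta = -(dF/dx)^(-1) * dF/dtheta.
    dF_dx = 2.0 + 2.0 * lam
    dF_dtheta = -2.0
    return -dF_dtheta / dF_dx
```

The gradient comes out of a small linear solve at the solution, independent of how the solution was computed.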


Decomposing reverse-mode automatic differentiation

no code implementations 20 May 2021 Roy Frostig, Matthew J. Johnson, Dougal Maclaurin, Adam Paszke, Alexey Radul

We decompose reverse-mode automatic differentiation into (forward-mode) linearization followed by transposition.
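This decomposition is directly visible in JAX, which exposes both halves as `jax.linearize` (forward-mode linearization) and `jax.linear_transpose`. A minimal sketch:

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x

x = 1.0
# Forward-mode linearization: y = f(x) together with the JVP,
# a linear map on tangents at x.
y, f_jvp = jax.linearize(f, x)
# Transposing that linear map yields the VJP -- this is reverse mode.
f_vjp = jax.linear_transpose(f_jvp, x)
# Pulling back the cotangent 1.0 gives df/dx.
(grad_x,) = f_vjp(1.0)
```

Composing the two stages recovers the same value as `jax.grad(f)(x)`.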

The advantages of multiple classes for reducing overfitting from test set reuse

no code implementations 24 May 2019 Vitaly Feldman, Roy Frostig, Moritz Hardt

We show a new upper bound of $\tilde O(\max\{\sqrt{k\log(n)/(mn)}, k/n\})$ on the worst-case bias that any attack can achieve in a prediction problem with $m$ classes.

Binary Classification

Measuring the Effects of Data Parallelism on Neural Network Training

no code implementations 8 Nov 2018 Christopher J. Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, George E. Dahl

Along the way, we show that disagreements in the literature on how batch size affects model quality can largely be explained by differences in metaparameter tuning and compute budgets at different batch sizes.

Random Features for Compositional Kernels

no code implementations 22 Mar 2017 Amit Daniely, Roy Frostig, Vineet Gupta, Yoram Singer

We describe and analyze a simple random feature scheme (RFS) from prescribed compositional kernels.
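As background, the classic instance of this idea is random Fourier features for the Gaussian kernel (Rahimi and Recht); the paper's RFS generalizes such recipes to compositional kernels. A NumPy sketch of the classic construction, not the paper's exact scheme:

```python
import numpy as np

def rff(X, num_features, gamma, seed=0):
    # Random Fourier features approximating the Gaussian kernel
    # k(x, y) = exp(-gamma * ||x - y||^2). Frequencies are drawn from
    # the kernel's spectral density, here N(0, 2*gamma*I).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```

Inner products of the feature vectors concentrate around the exact kernel values as the number of features grows.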

Estimation from Indirect Supervision with Linear Moments

1 code implementation 10 Aug 2016 Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang

In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation.

Structured Prediction

Principal Component Projection Without Principal Component Analysis

no code implementations 22 Feb 2016 Roy Frostig, Cameron Musco, Christopher Musco, Aaron Sidford

To achieve our results, we first observe that ridge regression can be used to obtain a "smooth projection" onto the top principal components.
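The observation is that applying ridge regression to $Ax$, i.e. $(A^\top A + \lambda I)^{-1} A^\top A x$, scales each singular direction of $A$ with squared singular value $s$ by $s/(s+\lambda)$: near 1 when $s \gg \lambda$, near 0 when $s \ll \lambda$. A small NumPy illustration of that observation (a direct linear solve standing in for whatever regression routine one would actually use):

```python
import numpy as np

def smooth_projection(A, x, lam):
    # Ridge-regression map (A^T A + lam*I)^{-1} A^T A x: a soft step
    # function on A's spectrum that approximately projects x onto the
    # principal components with squared singular value above lam.
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ (A @ x))
```

For example, with singular values well above and well below the threshold, the corresponding components of `x` are passed through or damped, respectively.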


Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity

no code implementations NeurIPS 2016 Amit Daniely, Roy Frostig, Yoram Singer

We develop a general duality between neural networks and compositional kernels, striving towards a better understanding of deep learning.

Competing with the Empirical Risk Minimizer in a Single Pass

no code implementations 20 Dec 2014 Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford

In the absence of computational constraints, the minimizer of a sample average of observed data -- commonly referred to as either the empirical risk minimizer (ERM) or the $M$-estimator -- is widely regarded as the estimation strategy of choice due to its desirable statistical convergence properties.

Simple MAP Inference via Low-Rank Relaxations

1 code implementation NeurIPS 2014 Roy Frostig, Sida Wang, Percy S. Liang, Christopher D. Manning

We focus on the problem of maximum a posteriori (MAP) inference in Markov random fields with binary variables and pairwise interactions.

Relaxations for inference in restricted Boltzmann machines

no code implementations 21 Dec 2013 Sida I. Wang, Roy Frostig, Percy Liang, Christopher D. Manning

We propose a relaxation-based approximate inference algorithm that samples near-MAP configurations of a binary pairwise Markov random field.
