1 code implementation • 20 Oct 2022 • Pavel Izmailov, Polina Kirichenko, Nate Gruver, Andrew Gordon Wilson
Deep classifiers are known to rely on spurious features – patterns which are correlated with the target on the training data but not inherently relevant to the learning problem, such as image backgrounds when classifying foregrounds.
1 code implementation • 6 Oct 2022 • Nate Gruver, Marc Finzi, Micah Goldblum, Andrew Gordon Wilson
In order to better understand the role of equivariance in recent vision models, we introduce the Lie derivative, a method for measuring equivariance with strong mathematical foundations and minimal hyperparameters.
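The Lie derivative is the infinitesimal version of a simple discrete check: apply a group transformation before and after the model and compare the results. A minimal sketch of that discrete equivariance gap (the function names and the toy translation-equivariant map below are illustrative, not from the paper):

```python
import numpy as np

def equivariance_gap(f, x, shift=1):
    """Max deviation between f(shift(x)) and shift(f(x)).

    Zero exactly when f commutes with the shift, i.e. f is
    translation-equivariant; the Lie derivative generalizes this
    to the infinitesimal limit of a continuous group action.
    """
    tx = np.roll(x, shift)
    return np.max(np.abs(f(tx) - np.roll(f(x), shift)))

x = np.random.randn(16)
# A toy map built from shifts and scaling, hence translation-equivariant.
conv = lambda s: np.roll(s, 2) + 0.5 * s
gap = equivariance_gap(conv, x)   # ~0 for this equivariant map
```

A non-equivariant map (e.g. one that multiplies by a position-dependent mask) would produce a strictly positive gap under the same check.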
1 code implementation • 23 Mar 2022 • Samuel Stanton, Wesley Maddox, Nate Gruver, Phillip Maffettone, Emily Delaney, Peyton Greenside, Andrew Gordon Wilson
Bayesian optimization (BayesOpt) is a gold standard for query-efficient continuous optimization.
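The BayesOpt loop alternates between fitting a probabilistic surrogate to the queries so far and maximizing an acquisition function to pick the next query. A self-contained toy sketch with a Gaussian-process surrogate and an upper-confidence-bound acquisition (all names, the kernel length scale, and the 1-D objective are our own illustrative choices, not the paper's setup):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between 1-D input vectors."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Exact GP posterior mean and variance at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 0.0)

f = lambda x: -(x - 0.6) ** 2          # toy objective to maximize
grid = np.linspace(0, 1, 201)           # candidate query locations
X = np.array([0.1, 0.9]); y = f(X)      # initial observations

for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * np.sqrt(var)       # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]       # query where the optimistic bound peaks
    X = np.append(X, x_next); y = np.append(y, f(x_next))

best = X[np.argmax(y)]                  # converges toward the optimum at 0.6
```

The "query-efficient" claim corresponds to the loop needing only a handful of objective evaluations, because the surrogate and acquisition decide where each evaluation is most informative.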
1 code implementation • ICLR 2022 • Nate Gruver, Marc Finzi, Samuel Stanton, Andrew Gordon Wilson
Physics-inspired neural networks (NNs), such as Hamiltonian or Lagrangian NNs, dramatically outperform other learned dynamics models by leveraging strong inductive biases.
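The inductive bias in a Hamiltonian NN is that dynamics are generated from a single scalar function H(q, p) via Hamilton's equations, which makes (near-)energy conservation structural rather than learned. A toy sketch of that mechanism, with an analytic quadratic Hamiltonian standing in for the learned network and finite-difference gradients standing in for autograd:

```python
import numpy as np

def hamiltonian_step(H, q, p, dt=0.01, eps=1e-5):
    """One symplectic-Euler step driven by finite-difference gradients of H."""
    dHdp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    q = q + dt * dHdp                       # dq/dt =  dH/dp
    dHdq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    p = p - dt * dHdq                       # dp/dt = -dH/dq
    return q, p

# Harmonic oscillator; a Hamiltonian NN would put a network here instead.
H = lambda q, p: 0.5 * p ** 2 + 0.5 * q ** 2
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = hamiltonian_step(H, q, p)
energy_drift = abs(H(q, p) - H(1.0, 0.0))   # stays small over long rollouts
```

Because every rollout follows Hamilton's equations by construction, the model cannot produce trajectories that steadily gain or lose energy, which is one reason these models outperform unconstrained learned dynamics.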
no code implementations • 21 Mar 2020 • Shushman Choudhury, Nate Gruver, Mykel J. Kochenderfer
Adaptive informative path planning with multimodal sensing (AIPPMS) requires reasoning jointly about the effects of sensing and movement in terms of both energy expended and information gained.
no code implementations • 31 May 2019 • Nate Gruver, Ali Malik, Brahm Capoor, Chris Piech, Mitchell L. Stevens, Andreas Paepcke
Understanding large-scale patterns in student course enrollment is a problem of great interest to university administrators and educational researchers.
no code implementations • 29 Jun 2018 • Thomas Dean, Maurice Chiang, Marcus Gomez, Nate Gruver, Yousef Hindy, Michelle Lam, Peter Lu, Sophia Sanchez, Rohun Saxena, Michael Smith, Lucy Wang, Catherine Wong
This document provides an overview of the material covered in a course taught at Stanford in the spring quarter of 2018.