1 code implementation • NeurIPS 2021 • Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung
Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems.
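For readers unfamiliar with how such adversarial examples are typically constructed, below is a minimal sketch using the fast gradient sign method (FGSM); the model, epsilon value, and loss choice are illustrative assumptions, not the setup used in this paper.

```python
# Minimal FGSM sketch: perturb the input in the direction that most
# increases the classification loss. Epsilon and the loss are assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Per-pixel step along the sign of the input gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```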
no code implementations • 21 Dec 2020 • Jianwei Yang, Jiayuan Mao, Jiajun Wu, Devi Parikh, David D. Cox, Joshua B. Tenenbaum, Chuang Gan
In contrast, symbolic and modular models have relatively better grounding and robustness, though at the cost of accuracy.
no code implementations • 25 Oct 2020 • Akash Srivastava, Yamini Bansal, Yukun Ding, Cole Hurwitz, Kai Xu, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
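As a concrete illustration of this family of penalties, here is a minimal beta-VAE-style loss in PyTorch, where the KL term on the posterior is upweighted to encourage independent latent factors; methods that target the aggregate posterior (e.g., total-correlation penalties) refine the same idea. The diagonal-Gaussian posterior and the value of beta are illustrative assumptions.

```python
# Sketch of a VAE objective with an upweighted KL penalty (beta > 1),
# the simplest instance of the independence-encouraging penalties above.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```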
no code implementations • 9 Sep 2020 • Seungwook Han, Akash Srivastava, Cole Hurwitz, Prasanna Sattigeri, David D. Cox
First, we generate images in low-frequency bands by training a sampler in the wavelet domain.
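A small sketch of the wavelet decomposition this approach builds on, using the PyWavelets library: a 2-D discrete wavelet transform splits an image into a low-frequency approximation band and high-frequency detail bands. The 'haar' wavelet and image size are arbitrary choices, not the paper's.

```python
# 2-D DWT: split an image into low- and high-frequency bands.
import numpy as np
import pywt

image = np.random.rand(64, 64)            # stand-in for a real image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
# cA is the low-frequency band a sampler could be trained on;
# cH / cV / cD hold horizontal, vertical, and diagonal detail.
```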
no code implementations • ICLR 2020 • Akash Srivastava, Yamini Bansal, Yukun Ding, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund
In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate.
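A toy illustration of that generative setup, with a made-up linear mixing function standing in for the true conditional distribution:

```python
# Observations x drawn conditionally on a known control variate c
# and an unobserved noise variate z; the mixing function is illustrative.
import numpy as np

rng = np.random.default_rng(0)
c = rng.uniform(-1, 1, size=(1000, 1))        # known control variate
z = rng.normal(size=(1000, 2))                # latent noise variate
x = c @ np.array([[1.0, -0.5]]) + 0.3 * z     # x ~ p(x | c, z)
```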
no code implementations • 19 Nov 2019 • Akash Srivastava, Jessie Rosenberg, Dan Gutfreund, David D. Cox
Then an inference network (encoder) is trained to invert the decoder.
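A minimal sketch of that training scheme: hold the decoder fixed, sample latents, decode them, and regress the encoder's output back onto the sampled latents. The architectures and dimensions below are illustrative assumptions.

```python
# Train an encoder to invert a fixed decoder by latent-space regression.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for _ in range(1000):
    z = torch.randn(128, 8)
    with torch.no_grad():              # the decoder stays fixed
        x = decoder(z)
    loss = ((encoder(x) - z) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```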
no code implementations • 3 Jun 2018 • Yamini Bansal, Madhu Advani, David D. Cox, Andrew M. Saxe
To solve this constrained optimization problem, our method employs Lagrange multipliers that act as integrators of error over training and identify 'support vector'-like examples.
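A hedged sketch of that primal-dual pattern: each example carries a multiplier that accumulates (integrates) its constraint violation over training, so persistently violated examples end up with large multipliers. The margin constraint and quadratic regularizer below are illustrative, not the paper's exact formulation.

```python
# One primal-dual step on the Lagrangian
#   L(w, lam) = 0.5 * ||w||^2 + mean_i lam_i * (margin - y_i * <w, x_i>):
# gradient descent on w, projected gradient ascent on the multipliers.
import numpy as np

def primal_dual_step(w, lam, X, y, eta_w=0.01, eta_lam=0.1, margin=1.0):
    violation = margin - y * (X @ w)             # per-example constraint slack
    grad_w = w - (lam * y) @ X / len(y)          # regularizer + weighted constraints
    w = w - eta_w * grad_w
    # Dual ascent: multipliers integrate violations, clipped to lam >= 0.
    lam = np.maximum(lam + eta_lam * violation, 0.0)
    return w, lam
```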
no code implementations • 17 Feb 2015 • Chuan-Yung Tsai, David D. Cox
A central challenge in sensory neuroscience is describing how the activity of populations of neurons can represent useful features of the external environment.
1 code implementation • SCIPY 2013 • James Bergstra, Dan Yamins, David D. Cox
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization.
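This SciPy 2013 paper presents the Hyperopt library; below is a minimal usage example of its Tree-structured Parzen Estimator on a toy objective, standing in for a real hyperparameter search.

```python
# Sequential model-based optimization with hyperopt's TPE algorithm.
from hyperopt import fmin, tpe, hp

best = fmin(
    fn=lambda x: (x - 2.0) ** 2,            # objective to minimize
    space=hp.uniform("x", -10.0, 10.0),     # search space for hyperparameter x
    algo=tpe.suggest,                       # SMBO via the TPE surrogate
    max_evals=100,
)
print(best)  # best hyperparameter found, near x = 2
```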
no code implementations • 14 Jun 2013 • James Bergstra, David D. Cox
This paper also introduces a new ensemble construction variant that combines hyperparameter optimization with the construction of ensembles.
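One common pattern such a variant could follow is greedy ensemble selection over the trials produced by hyperparameter optimization: repeatedly add the model that most improves the averaged ensemble's held-out accuracy. This sketch shows the generic pattern, not necessarily the paper's exact construction.

```python
# Greedy ensembling of hyperparameter-optimization trials (with replacement).
import numpy as np

def greedy_ensemble(val_preds, y_val, n_rounds=10):
    """val_preds: list of (n_examples, n_classes) probability arrays,
    one per hyperparameter-optimization trial."""
    chosen, ensemble = [], np.zeros_like(val_preds[0])
    for _ in range(n_rounds):
        # Held-out accuracy of the ensemble if each candidate were added.
        scores = [np.mean((ensemble + p).argmax(1) == y_val) for p in val_preds]
        best = int(np.argmax(scores))
        chosen.append(best)
        ensemble += val_preds[best]
    return chosen
```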