Search Results for author: David D. Cox

Found 11 papers, 2 papers with code

Hyperopt: A Python Library for Optimizing the Hyperparameters of Machine Learning Algorithms

1 code implementation SCIPY 2013 James Bergstra, Dan Yamins, David D. Cox

Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization.

Bayesian Optimization · BIG-bench Machine Learning +2
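
The library exposes this style of sequential model-based optimization through a small fmin interface. A minimal usage sketch follows; the toy objective, search space, and parameter names are illustrative stand-ins, not taken from the paper.

    from hyperopt import fmin, tpe, hp

    # Hypothetical search space: a log-scale learning rate and a choice of regularizer.
    space = {
        "lr": hp.loguniform("lr", -10, 0),
        "reg": hp.choice("reg", ["l1", "l2"]),
    }

    # Stand-in objective; in practice this would train a model and return a
    # validation loss for the sampled hyperparameters.
    def objective(params):
        return (params["lr"] - 0.01) ** 2

    # Tree-structured Parzen Estimators (TPE) is one of the SMBO algorithms Hyperopt provides.
    best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
    print(best)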

Measuring and Understanding Sensory Representations within Deep Networks Using a Numerical Optimization Framework

no code implementations 17 Feb 2015 Chuan-Yung Tsai, David D. Cox

A central challenge in sensory neuroscience is describing how the activity of populations of neurons can represent useful features of the external environment.

Object Recognition

Minnorm training: an algorithm for training over-parameterized deep neural networks

no code implementations 3 Jun 2018 Yamini Bansal, Madhu Advani, David D. Cox, Andrew M. Saxe

To solve this constrained optimization problem, our method employs Lagrange multipliers that act as integrators of error over training and identify 'support vector'-like examples.

Generalization Bounds
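
As a rough illustration of that mechanism, here is a toy primal-dual sketch for the simplest over-parameterized case, linear regression with more features than examples: each example's multiplier accumulates (integrates) its own error over training, and gradient descent on the resulting Lagrangian drives the weights toward a minimum-norm interpolating solution. This is only a generic sketch under those simplifying assumptions (a linear model, made-up data, hand-picked step sizes), not the paper's Minnorm algorithm for deep networks.

    import numpy as np

    # Toy primal-dual sketch: minimum-norm interpolation for over-parameterized
    # linear regression. Each example gets a Lagrange multiplier that integrates
    # its own error over training.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 50))        # 20 examples, 50 features (over-parameterized)
    y = rng.normal(size=20)

    w = np.zeros(50)                     # model weights
    lam = np.zeros(20)                   # one multiplier per training example
    lr_w, lr_lam = 0.1, 0.001

    for _ in range(5000):
        err = X @ w - y                  # per-example constraint violation
        w -= lr_w * (w + X.T @ lam)      # descend 0.5*||w||^2 + lam.(Xw - y) in w
        lam += lr_lam * err              # multipliers accumulate (integrate) the error

    w_star = X.T @ np.linalg.solve(X @ X.T, y)   # exact minimum-norm interpolant
    print(np.linalg.norm(X @ w - y), np.linalg.norm(w - w_star))

Examples whose multipliers remain non-zero at convergence are the ones actively constraining the solution, which is the sense in which they behave like support vectors.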

CZ-GEM: A Framework for Disentangled Representation Learning

no code implementations ICLR 2020 Akash Srivastava, Yamini Bansal, Yukun Ding, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund

In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate.

Disentanglement
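
For concreteness, here is a toy version of that data-generating assumption: each observation x is drawn conditionally on a known control variate c (for example, a class label or pose) together with an unobserved noise variate z. The generator, dimensions, and variable names below are made up for illustration and are not taken from the paper.

    import numpy as np

    # Toy generative process: x ~ p(x | c, z) with an observed control variate c
    # and a latent noise variate z. The generator is an arbitrary stand-in.
    rng = np.random.default_rng(0)
    d_c, d_z, d_x = 3, 5, 10
    W_c = rng.normal(size=(d_c, d_x))
    W_z = rng.normal(size=(d_z, d_x))

    def sample(n):
        c = rng.integers(0, 2, size=(n, d_c)).astype(float)   # known control variate
        z = rng.normal(size=(n, d_z))                          # latent noise variate
        x = np.tanh(c @ W_c + z @ W_z) + 0.1 * rng.normal(size=(n, d_x))
        return x, c

    x, c = sample(100)   # a disentangling model observes x and c, but never z
    print(x.shape, c.shape)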

Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modelling

no code implementations25 Oct 2020 Akash Srivastava, Yamini Bansal, Yukun Ding, Cole Hurwitz, Kai Xu, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund

Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.

Disentanglement
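
The penalty being described can be made concrete with a beta-VAE-style objective, in which the KL term of the evidence lower bound is up-weighted to push the latent factors toward independence (methods such as FactorVAE and beta-TCVAE instead penalize the total correlation of the aggregate posterior). This is a background sketch of the methods the sentence refers to, not the multi-stage approach proposed in the paper; the toy inputs are made up.

    import numpy as np

    # Background sketch: a beta-VAE-style objective in which the KL term of the
    # ELBO is up-weighted; related methods penalize the total correlation of the
    # aggregate posterior to the same end.
    def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
        # Reconstruction term: per-example squared error.
        recon = np.sum((x - x_recon) ** 2, axis=1)
        # KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior.
        kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)
        # beta > 1 strengthens the independence pressure on the latent factors,
        # which is what tends to degrade reconstruction quality.
        return np.mean(recon + beta * kl)

    # Toy usage with random stand-ins for encoder/decoder outputs.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 16))
    x_recon = x + 0.1 * rng.normal(size=(8, 16))
    mu = rng.normal(size=(8, 4))
    logvar = 0.1 * rng.normal(size=(8, 4))
    print(beta_vae_loss(x, x_recon, mu, logvar))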

Object-Centric Diagnosis of Visual Reasoning

no code implementations 21 Dec 2020 Jianwei Yang, Jiayuan Mao, Jiajun Wu, Devi Parikh, David D. Cox, Joshua B. Tenenbaum, Chuang Gan

In contrast, symbolic and modular models have relatively better grounding and robustness, though at the cost of accuracy.

Object · Question Answering +2

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception

1 code implementation NeurIPS 2021 Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung

Adversarial examples are often cited by neuroscientists and machine learning researchers as an example of how computational models diverge from biological sensory systems.

Adversarial Robustness

LAB: Large-Scale Alignment for ChatBots

no code implementations 2 Mar 2024 Shivchander Sudalairaj, Abhishek Bhandwaldar, Aldo Pareja, Kai Xu, David D. Cox, Akash Srivastava

This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training.

Instruction Following · Language Modelling +2
