1 code implementation • NeurIPS 2021 • Stefani Karp, Ezra Winston, Yuanzhi Li, Aarti Singh
We therefore propose the "local signal adaptivity" (LSA) phenomenon as one explanation for the superiority of neural networks over kernel methods.
no code implementations • 29 Sep 2021 • Zhili Feng, Ezra Winston, J. Zico Kolter
In this paper, we propose a class of models that allows for exact, efficient mean-field inference and learning in general deep Boltzmann machines.
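The excerpt does not spell out the proposed model class, but the classical mean-field fixed-point equations for a Boltzmann machine that this line of work starts from are easy to sketch. A minimal NumPy sketch, assuming binary units with symmetric pairwise weights W and biases b (function and variable names are illustrative, not the paper's API):

```python
import numpy as np

def mean_field_inference(W, b, n_iters=100, tol=1e-6):
    """Classical mean-field updates for a Boltzmann machine with
    symmetric pairwise weights W (zero diagonal) and biases b:
    iterate q_i <- sigmoid(sum_j W_ij q_j + b_i) to a fixed point."""
    q = np.full(b.shape, 0.5)  # uniform initialization
    for _ in range(n_iters):
        q_new = 1.0 / (1.0 + np.exp(-(W @ q + b)))
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# Tiny example: a 4-unit machine with random symmetric couplings.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
print(mean_field_inference(W, rng.normal(size=4)))
```

In general these updates converge only to a local fixed point; the paper's contribution is a model class in which the mean-field solution can be computed exactly and efficiently.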
no code implementations • ICLR 2021 • Chirag Pabbaraju, Ezra Winston, J. Zico Kolter
Several methods have been proposed in recent years to bound the Lipschitz constants of deep networks; such bounds can be used to provide robustness guarantees and generalization bounds, and to characterize the smoothness of decision boundaries.
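For context, the simplest such bound for a feedforward network with 1-Lipschitz activations (ReLU, tanh) is the product of the layers' spectral norms. A baseline sketch in PyTorch, far looser than the methods the abstract refers to:

```python
import torch
import torch.nn as nn

def spectral_norm_product_bound(model):
    """Crude Lipschitz upper bound for a feedforward net with
    1-Lipschitz activations: multiply the spectral norms (largest
    singular values) of the linear layers' weight matrices."""
    bound = 1.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

mlp = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
print(spectral_norm_product_bound(mlp))
```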
1 code implementation • NeurIPS 2020 • Ezra Winston, J. Zico Kolter
We then develop a parameterization of the network that ensures all operators remain monotone, guaranteeing the existence of a unique equilibrium point.
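A minimal sketch of such a parameterization, assuming the form W = (1 - m)I - A^T A + B - B^T familiar from the monotone operator equilibrium network literature (the excerpt does not give the exact form, so treat it as an assumption):

```python
import torch

def monotone_W(A, B, m=0.5):
    """Build W = (1 - m) I - A^T A + B - B^T. For any A, B the
    symmetric part of I - W equals m I + A^T A, which is positive
    definite, so I - W is strongly monotone and the equilibrium
    z* = sigma(W z* + U x + b) exists and is unique."""
    n = A.shape[1]
    I = torch.eye(n)
    return (1 - m) * I - A.T @ A + B - B.T

# Sanity check: smallest eigenvalue of sym(I - W) should be >= m.
A, B = torch.randn(8, 8), torch.randn(8, 8)
W = monotone_W(A, B)
S = 0.5 * ((torch.eye(8) - W) + (torch.eye(8) - W).T)
print(torch.linalg.eigvalsh(S).min())  # >= 0.5 up to round-off
```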
no code implementations • ICML 2020 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade the performance of the resulting classifier.
no code implementations • 25 Sep 2019 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier.
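To make the smoothing idea concrete: train an ensemble on independently flipped copies of the labels and majority-vote. This is only a heuristic illustration (the paper derives an exact certificate); `smoothed_predict`, `train_fn`, and the chosen flip rate are all hypothetical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def smoothed_predict(train_fn, X, y, x_test, n_models=100, flip_p=0.2, seed=0):
    """Train n_models base classifiers, each on a copy of the binary
    (0/1) labels flipped independently with probability flip_p, then
    majority-vote their predictions at x_test. A large vote margin
    suggests stability under a few adversarial label flips."""
    rng = np.random.default_rng(seed)
    votes = 0
    for _ in range(n_models):
        flips = rng.random(len(y)) < flip_p
        y_noisy = np.where(flips, 1 - y, y)
        votes += train_fn(X, y_noisy).predict(x_test.reshape(1, -1))[0]
    return int(votes > n_models / 2), votes / n_models

X, y = make_classification(n_samples=200, random_state=0)
pred, vote_frac = smoothed_predict(
    lambda Xs, ys: LogisticRegression().fit(Xs, ys), X, y, X[0])
print(pred, vote_frac)
```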
1 code implementation • ICLR Workshop LLD 2019 • Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton
Domain adaptation addresses the common setting in which the target distribution generating our test data drifts from the source (training) distribution.
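A standard building block for such adaptation (plain, symmetric distribution alignment, i.e. the baseline this paper relaxes) is a penalty that pulls source and target feature distributions together; a linear-kernel MMD term is the simplest version:

```python
import torch

def mmd_penalty(h_src, h_tgt):
    """Linear-kernel MMD between source and target feature batches:
    squared distance between their means. Added to the source
    classification loss, it encourages domain-invariant features."""
    return (h_src.mean(dim=0) - h_tgt.mean(dim=0)).pow(2).sum()

# Per-step objective sketch:
#   loss = cross_entropy(clf(h_src), y_src) + lam * mmd_penalty(h_src, h_tgt)
```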