Search Results for author: Ezra Winston

Found 8 papers, 3 papers with code

Monotone deep Boltzmann machines

no code implementations · 11 Jul 2023 · Zhili Feng, Ezra Winston, J. Zico Kolter

Deep Boltzmann machines (DBMs), one of the first "deep" learning methods ever studied, are multi-layered probabilistic models governed by a pairwise energy function that describes the likelihood of all variables/nodes in the network.
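For intuition, here is a minimal NumPy sketch of such a pairwise energy function (the weights, biases, and fully connected structure are illustrative assumptions; a DBM restricts interactions to a layered bipartite pattern):

```python
import numpy as np

def pairwise_energy(x, W, b):
    """Energy of a binary configuration x under a pairwise model:
    E(x) = -0.5 * x^T W x - b^T x, with W symmetric and zero-diagonal
    so only distinct node pairs interact. The model assigns probability
    proportional to exp(-E(x))."""
    return -0.5 * x @ W @ x - b @ x

rng = np.random.default_rng(0)
n = 6
W = rng.normal(size=(n, n))
W = (W + W.T) / 2            # symmetrize the pairwise weights
np.fill_diagonal(W, 0.0)     # no self-interactions
b = rng.normal(size=n)
x = rng.integers(0, 2, size=n).astype(float)   # binary node states

print("E(x) =", pairwise_energy(x, W, b))
```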

Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels

1 code implementation · NeurIPS 2021 · Stefani Karp, Ezra Winston, Yuanzhi Li, Aarti Singh

We therefore propose the "local signal adaptivity" (LSA) phenomenon as one explanation for the superiority of neural networks over kernel methods.

Image Classification

Estimating Lipschitz constants of monotone deep equilibrium models

no code implementations · ICLR 2021 · Chirag Pabbaraju, Ezra Winston, J. Zico Kolter

Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries.

Generalization Bounds
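For context on the kind of bound the abstract above refers to: the simplest valid Lipschitz upper bound for a feedforward ReLU network is the product of per-layer spectral norms. The sketch below shows that standard, typically loose baseline; it is not the monDEQ-specific estimator the paper proposes, and the small network is an illustrative assumption:

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Upper bound on the Lipschitz constant of
    f(x) = W_k relu(W_{k-1} ... relu(W_1 x)):
    since relu is 1-Lipschitz, Lip(f) <= prod_i ||W_i||_2.
    Valid, but usually very loose compared to tighter estimators."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)   # spectral norm = largest singular value
    return bound

rng = np.random.default_rng(0)
layers = [rng.normal(size=(16, 8)),    # W_1: R^8  -> R^16
          rng.normal(size=(16, 16)),   # W_2: R^16 -> R^16
          rng.normal(size=(1, 16))]    # W_3: R^16 -> R
print("Lipschitz upper bound:", naive_lipschitz_bound(layers))
```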

Monotone operator equilibrium networks

1 code implementation · NeurIPS 2020 · Ezra Winston, J. Zico Kolter

We then develop a parameterization of the network that ensures all operators remain monotone, guaranteeing the existence of a unique equilibrium point.
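As I understand the construction, the paper parameterizes W = (1 - m)I - AᵀA + B - Bᵀ for free matrices A, B, which forces the symmetric part of I - W to be at least mI and hence makes the fixed-point operator strongly monotone. A small NumPy sketch (the dimensions and the value of m are illustrative):

```python
import numpy as np

def monotone_weight(A, B, m=0.1):
    """Parameterize W = (1 - m) I - A^T A + B - B^T. Then
    I - W = m I + A^T A - (B - B^T), whose symmetric part is
    m I + A^T A >= m I (B - B^T is skew-symmetric), so the
    equilibrium z* = sigma(W z* + U x + b) exists and is unique."""
    n = A.shape[1]
    return (1 - m) * np.eye(n) - A.T @ A + B - B.T

rng = np.random.default_rng(0)
n = 5
W = monotone_weight(rng.normal(size=(n, n)), rng.normal(size=(n, n)), m=0.1)

# Verify the monotonicity condition: sym(I - W) has eigenvalues >= m.
sym = ((np.eye(n) - W) + (np.eye(n) - W).T) / 2
print("min eigenvalue of sym(I - W):", np.linalg.eigvalsh(sym).min())
```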

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

no code implementations · ICML 2020 · Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter

Machine learning algorithms are known to be susceptible to data poisoning attacks, in which an adversary manipulates the training data to degrade the performance of the resulting classifier.

Data Poisoning · General Classification · +1
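To illustrate the smoothing idea at a high level: train many copies of a classifier on independently label-noised data and take a majority vote, so that a prediction stable under random flips is also stable under a bounded adversarial flip budget. This Monte-Carlo sketch is a generic illustration, not the paper's (more efficient, analytically certified) procedure; `train_classifier` and the nearest-centroid trainer are hypothetical stand-ins:

```python
import numpy as np

def smoothed_predict(train_classifier, X, y, x_test,
                     flip_prob=0.1, n_samples=100, seed=0):
    """Randomized smoothing over labels: train many classifiers on
    independently noised copies of the binary labels and return the
    majority vote. `train_classifier(X, y) -> predict_fn` is a
    hypothetical stand-in for any training routine."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_samples):
        flips = rng.random(len(y)) < flip_prob
        y_noisy = np.where(flips, 1 - y, y)   # flip each label w.p. flip_prob
        predict = train_classifier(X, y_noisy)
        votes.append(predict(x_test))
    return int(np.round(np.mean(votes)))      # majority vote over runs

# Example usage with a trivial nearest-centroid "trainer" (illustrative only).
def train_classifier(X, y):
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return lambda x: int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(smoothed_predict(train_classifier, X, y, np.array([0.8, 0.8])))
```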

Certified Robustness to Adversarial Label-Flipping Attacks via Randomized Smoothing

no code implementations · 25 Sep 2019 · Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter

This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier.

Binary Classification · Data Poisoning

Domain Adaptation with Asymmetrically-Relaxed Distribution Alignment

1 code implementation · ICLR Workshop LLD 2019 · Yifan Wu, Ezra Winston, Divyansh Kaushik, Zachary Lipton

Domain adaptation addresses the common problem in which the target distribution generating our test data drifts from the source (training) distribution.

Domain Adaptation
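The "asymmetric relaxation" in the title can be illustrated with a toy penalty that is zero whenever the target density is covered by (1 + β) times the source density, but not vice versa. This is a minimal sketch assuming densities are given on a shared discrete grid; the paper itself works with relaxed adversarially-estimated distances, so treat this only as intuition for the asymmetry:

```python
import numpy as np

def relaxed_support_penalty(p_source, p_target, beta=1.0):
    """Penalize only the mass where the target density exceeds
    (1 + beta) times the source density. Unlike exact distribution
    alignment, the source may cover extra regions at no cost."""
    excess = np.maximum(p_target - (1 + beta) * p_source, 0.0)
    return excess.sum()

p_s = np.array([0.25, 0.25, 0.25, 0.25])  # source spreads over 4 bins
p_t = np.array([0.50, 0.50, 0.00, 0.00])  # target concentrates on 2 bins
print(relaxed_support_penalty(p_s, p_t, beta=1.0))  # 0.0: target is covered
print(relaxed_support_penalty(p_t, p_s, beta=0.0))  # 0.5: asymmetric in reverse
```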
