Search Results for author: Ruth Urner

Found 15 papers, 0 papers with code

Simplifying Adversarially Robust PAC Learning with Tolerance

no code implementations · 11 Feb 2025 · Hassan Ashtiani, Vinayak Pathak, Ruth Urner

Adversarially robust PAC learning has proved to be challenging, with the currently best known learners [Montasser et al., 2021a] relying on improper methods based on intricate compression schemes, resulting in sample complexity exponential in the VC-dimension.

PAC learning

On the Computability of Multiclass PAC Learning

no code implementations · 10 Feb 2025 · Pascale Gourdeau, Tosca Lechner, Ruth Urner

We focus on the case of finite label space and start by proposing a computable version of the Natarajan dimension and showing that it characterizes CPAC learnability in this setting.

PAC learning

Calibration through the Lens of Interpretability

no code implementations · 1 Dec 2024 · Alireza Torabian, Ruth Urner

Calibration is a frequently invoked concept when useful label probability estimates are required on top of classification accuracy.
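To make the notion of calibration concrete, here is a minimal sketch (not taken from the paper; all names and toy data are hypothetical) of the standard binned expected calibration error, which measures the gap between predicted confidence and observed accuracy:

```python
# Illustrative sketch only: expected calibration error (ECE) via equal-width
# confidence binning. Data and function names are hypothetical, not the paper's.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average |confidence - accuracy| gap over confidence bins."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Points whose confidence falls in this bin (left-closed for the first bin).
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(avg_conf - acc)
    return ece

# A perfectly calibrated toy example: 80%-confidence predictions, right 80% of the time.
confs = [0.8] * 10
hits = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
print(expected_calibration_error(confs, hits))  # -> 0.0
```

A well-calibrated classifier drives this quantity toward zero; the paper examines such probability estimates through the lens of interpretability rather than proposing this particular metric.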

On the Computability of Robust PAC Learning

no code implementations · 14 Jun 2024 · Pascale Gourdeau, Tosca Lechner, Ruth Urner

We initiate the study of computability requirements for adversarially robust learning.

PAC learning

Learning Losses for Strategic Classification

no code implementations · 25 Mar 2022 · Tosca Lechner, Ruth Urner

We analyse the sample complexity for a known graph of possible manipulations in terms of the complexity of the function class and the manipulation graph.

Classification · Learning Theory · +1

Adversarially Robust Learning with Tolerance

no code implementations · 2 Mar 2022 · Hassan Ashtiani, Vinayak Pathak, Ruth Urner

In the tolerant version, the error of the learner is compared with the best achievable error with respect to a slightly larger perturbation radius $(1+\gamma)r$.
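The tolerant comparison above can be illustrated with a toy computation (a hedged sketch, not the paper's algorithm: the threshold classifier and data below are hypothetical). The learner's robust error is evaluated at radius $r$, while the benchmark is the best robust error achievable at the inflated radius $(1+\gamma)r$:

```python
# Illustrative sketch only: empirical robust error of a 1-D threshold classifier
# under perturbations of radius r. In the tolerant setting, the learner's robust
# error at radius r is compared against the best achievable at (1 + gamma) * r.

def robust_error(threshold, xs, ys, r):
    """Fraction of points misclassified by some perturbation within radius r."""
    errs = 0
    for x, y in zip(xs, ys):
        # Prediction is 1 iff x >= threshold; a point is robustly correct only
        # if the entire interval [x - r, x + r] receives the true label.
        if y == 1:
            ok = (x - r) >= threshold
        else:
            ok = (x + r) < threshold
        errs += 0 if ok else 1
    return errs / len(xs)

xs = [-2.0, -1.0, 1.0, 2.0]   # hypothetical toy sample
ys = [0, 0, 1, 1]
r, gamma = 0.5, 0.5
learner_err = robust_error(0.0, xs, ys, r)                 # judged at radius r
benchmark_err = robust_error(0.0, xs, ys, (1 + gamma) * r)  # benchmark at (1+gamma)*r
print(learner_err, benchmark_err)  # -> 0.0 0.0
```

Because the benchmark must withstand the larger radius, it can only be harder to satisfy, which is exactly the slack that makes tolerant learners simpler to construct.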

PAC learning

On the (Un-)Avoidability of Adversarial Examples

no code implementations · 24 Jun 2021 · Sadia Chowdhury, Ruth Urner

The phenomenon of adversarial examples in deep learning models has caused substantial concern over their reliability.

Adversarial Robustness · Data Augmentation

Black-box Certification and Learning under Adversarial Perturbations

no code implementations · ICML 2020 · Hassan Ashtiani, Vinayak Pathak, Ruth Urner

We formally study the problem of classification under adversarial perturbations from a learner's perspective as well as a third-party who aims at certifying the robustness of a given black-box classifier.

When can unlabeled data improve the learning rate?

no code implementations · 28 May 2019 · Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner

In semi-supervised classification, one is given access both to labeled and unlabeled data.

Lifelong Learning with Weighted Majority Votes

no code implementations · NeurIPS 2016 · Anastasia Pentina, Ruth Urner

Better understanding of the potential benefits of information transfer and representation learning is an important step towards the goal of building intelligent systems that are able to persist in the world and learn over time.

Diversity · Representation Learning

Active Nearest-Neighbor Learning in Metric Spaces

no code implementations · NeurIPS 2016 · Aryeh Kontorovich, Sivan Sabato, Ruth Urner

We propose a pool-based non-parametric active learning algorithm for general metric spaces, called MArgin Regularized Metric Active Nearest Neighbor (MARMANN), which outputs a nearest-neighbor classifier.
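The classifier MARMANN outputs is of the nearest-neighbor type; the following is a hedged sketch of plain 1-nearest-neighbor prediction in a general metric space (the margin-regularized active sample selection that defines MARMANN is not reproduced here, and the toy data is hypothetical):

```python
# Minimal 1-nearest-neighbor classifier over a general metric space.
# This is an illustrative sketch only; MARMANN additionally chooses which
# labels to query actively, under margin regularization, which is omitted.
from math import dist as euclidean  # math.dist: Euclidean distance in R^d

def nn_classify(query, points, labels, metric):
    """Return the label of the point nearest to `query` under `metric`."""
    best = min(range(len(points)), key=lambda i: metric(query, points[i]))
    return labels[best]

# Hypothetical toy pool in (R^2, Euclidean).
pts = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
lbl = ["a", "a", "b"]
print(nn_classify((4.0, 4.5), pts, lbl, euclidean))  # -> b
```

Because the classifier only needs a metric, the same code works for any distance function satisfying the metric axioms, which is the generality the paper's setting requires.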

Active Learning · Model Selection

Efficient Learning of Linear Separators under Bounded Noise

no code implementations · 12 Mar 2015 · Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner

We provide the first polynomial time algorithm that can learn linear separators to arbitrarily small excess error in this noise model under the uniform distribution over the unit ball in $\Re^d$, for some constant value of $\eta$.

Active Learning · Learning Theory

Learning Economic Parameters from Revealed Preferences

no code implementations · 30 Jul 2014 · Maria-Florina Balcan, Amit Daniely, Ruta Mehta, Ruth Urner, Vijay V. Vazirani

In this work we advance this line of work by providing sample complexity guarantees and efficient algorithms for a number of important classes.

Open-Ended Question Answering

Generative Multiple-Instance Learning Models For Quantitative Electromyography

no code implementations · 26 Sep 2013 · Tameem Adel, Benn Smith, Ruth Urner, Daniel Stashuk, Daniel J. Lizotte

We present a comprehensive study of the use of generative modeling approaches for Multiple-Instance Learning (MIL) problems.

Multiple Instance Learning
