no code implementations • 11 Feb 2025 • Hassan Ashtiani, Vinayak Pathak, Ruth Urner
Adversarially robust PAC learning has proved challenging: the best currently known learners [Montasser et al., 2021a] rely on improper methods based on intricate compression schemes, resulting in sample complexity exponential in the VC dimension.
10 Feb 2025 • Pascale Gourdeau, Tosca Lechner, Ruth Urner
We focus on the case of finite label space and start by proposing a computable version of the Natarajan dimension and showing that it characterizes CPAC learnability in this setting.
1 Dec 2024 • Alireza Torabian, Ruth Urner
Calibration is a frequently invoked concept when useful label probability estimates are required on top of classification accuracy.
14 Jun 2024 • Pascale Gourdeau, Tosca Lechner, Ruth Urner
We initiate the study of computability requirements for adversarially robust learning.
25 Mar 2022 • Tosca Lechner, Ruth Urner
We analyse the sample complexity for a known graph of possible manipulations in terms of the complexity of the function class and the manipulation graph.
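The snippet above studies learning when each example can be manipulated along the edges of a known graph. As a minimal illustrative sketch (the toy classifier, examples, and graph are hypothetical, not the paper's construction), the induced robust loss counts an example as an error whenever any point reachable from it by one manipulation is classified differently from its label:

```python
# Robust loss under a known manipulation graph: (x, y) is an error if any
# point reachable from x via a manipulation edge (including x itself) is
# classified differently from y.
def graph_robust_error(h, examples, manip_graph):
    errors = 0
    for x, y in examples:
        reachable = [x] + manip_graph.get(x, [])
        if any(h(z) != y for z in reachable):
            errors += 1
    return errors / len(examples)

# Toy instance: integer domain, threshold classifier, each point can be
# manipulated to one neighboring point.
h = lambda z: 1 if z >= 3 else 0
examples = [(0, 0), (2, 0), (3, 1), (5, 1)]
manip_graph = {0: [1], 2: [3], 3: [4], 5: [6]}
err = graph_robust_error(h, examples, manip_graph)  # only (2, 0) can be pushed across the threshold
```

Here the sample complexity question the paper studies is how many such examples are needed to minimize this loss over a function class, as a function of both the class and the graph.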
2 Mar 2022 • Hassan Ashtiani, Vinayak Pathak, Ruth Urner
In the tolerant version, the error of the learner is compared with the best achievable error with respect to a slightly larger perturbation radius $(1+\gamma)r$.
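The tolerant benchmark is weaker because robust error is monotone in the perturbation radius: defending a larger ball can only make more points robustly wrong, so the best achievable error at radius $(1+\gamma)r$ is at least the best at radius $r$. A one-dimensional sketch (threshold classifier and data are illustrative only, not the paper's construction):

```python
import numpy as np

def robust_error(t, X, y, r):
    # h(x) = +1 if x >= t else -1; (x, y) is robustly correct iff the whole
    # interval [x - r, x + r] lies on the correct side of the threshold t.
    correct = np.where(y == 1, X - r >= t, X + r < t)
    return 1.0 - correct.mean()

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = np.where(X >= 0, 1, -1)

r, gamma = 0.05, 0.5
e_r = robust_error(0.0, X, y, r)                  # learner's benchmark radius
e_infl = robust_error(0.0, X, y, (1 + gamma) * r)  # inflated comparison radius
```

For every classifier, `e_r <= e_infl`, which is why a guarantee of the form "learner's error at $r$ is close to the best error at $(1+\gamma)r$" is easier to achieve than the non-tolerant one.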
24 Jun 2021 • Sadia Chowdhury, Ruth Urner
The phenomenon of adversarial examples in deep learning models has caused substantial concern over their reliability.
ICML 2020 • Hassan Ashtiani, Vinayak Pathak, Ruth Urner
We formally study the problem of classification under adversarial perturbations, both from a learner's perspective and from that of a third party who aims to certify the robustness of a given black-box classifier.
31st International Conference on Algorithmic Learning Theory 2020 • Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, Ruth Urner
We initiate a study of learning with computable learners and computable output predictors.
28 May 2019 • Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner
In semi-supervised classification, one is given access both to labeled and unlabeled data.
NeurIPS 2016 • Anastasia Pentina, Ruth Urner
Better understanding of the potential benefits of information transfer and representation learning is an important step towards the goal of building intelligent systems that are able to persist in the world and learn over time.
NeurIPS 2016 • Aryeh Kontorovich, Sivan Sabato, Ruth Urner
We propose a pool-based non-parametric active learning algorithm for general metric spaces, called MArgin Regularized Metric Active Nearest Neighbor (MARMANN), which outputs a nearest-neighbor classifier.
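This is not MARMANN's selection rule (which combines margin regularization with nearest-neighbor sample compression); the following is only a generic pool-based sketch of the setting, with all names and the farthest-first query strategy being assumptions for illustration: select a small set of pool points to label so the queries cover the metric space, then output a 1-nearest-neighbor classifier over the queried points.

```python
import numpy as np

def farthest_first_queries(X, budget, seed=0):
    """Pick points to query for labels: repeatedly take the pool point
    farthest from everything chosen so far, so queries cover the pool."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[chosen[0]], axis=1)
    while len(chosen) < budget:
        j = int(np.argmax(dist))
        chosen.append(j)
        dist = np.minimum(dist, np.linalg.norm(X - X[j], axis=1))
    return chosen

def one_nn_predict(X_labeled, y_labeled, X):
    # Nearest-neighbor classifier over the (few) labeled query points
    d = np.linalg.norm(X[:, None, :] - X_labeled[None, :, :], axis=2)
    return y_labeled[d.argmin(axis=1)]

# Toy pool: two well-separated clusters; 4 label queries suffice for 1-NN.
rng = np.random.default_rng(1)
pool = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(10, 0.3, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

q = farthest_first_queries(pool, budget=4)
preds = one_nn_predict(pool[q], labels[q], pool)
```

The point of the active setting is visible even in this sketch: a handful of well-placed label queries can determine the nearest-neighbor rule on the whole pool.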
12 Mar 2015 • Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Ruth Urner
We provide the first polynomial-time algorithm that can learn linear separators to arbitrarily small excess error in this noise model under the uniform distribution over the unit ball in $\mathbb{R}^d$, for some constant value of $\eta$.
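As a hedged illustration of the setting only: the sketch below uses constant-rate random label noise (simpler than the bounded-noise model the paper handles) and the simple averaging estimator (not the paper's algorithm). Under the uniform distribution on the unit ball, $\mathbb{E}[y\,x]$ is proportional to the target direction, so averaging noisy examples recovers the halfspace:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 5, 20000, 0.2

# Uniform sample from the unit ball in R^d: random direction times U^(1/d)
g = rng.normal(size=(n, d))
x = g / np.linalg.norm(g, axis=1, keepdims=True)
x *= rng.uniform(size=(n, 1)) ** (1.0 / d)

w_star = np.zeros(d)
w_star[0] = 1.0                                 # hidden target halfspace
y_clean = np.sign(x @ w_star)
flip = rng.random(n) < eta                      # constant-rate label noise
y = np.where(flip, -y_clean, y_clean)

# Averaging estimator: E[y x] = (1 - 2*eta) * c * w_star under this model
w_hat = (y[:, None] * x).mean(axis=0)
acc = (np.sign(x @ w_hat) == y_clean).mean()    # agreement with clean labels
```

The harder bounded-noise model of the paper, where the flip probability may vary with $x$ up to $\eta$, breaks this simple estimator; that gap is what the paper's localization-based algorithm addresses.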
30 Jul 2014 • Maria-Florina Balcan, Amit Daniely, Ruta Mehta, Ruth Urner, Vijay V. Vazirani
We advance this line of work by providing sample complexity guarantees and efficient algorithms for a number of important classes.
26 Sep 2013 • Tameem Adel, Benn Smith, Ruth Urner, Daniel Stashuk, Daniel J. Lizotte
We present a comprehensive study of the use of generative modeling approaches for Multiple-Instance Learning (MIL) problems.