Search Results for author: Sivan Sabato

Found 34 papers, 9 papers with code

Bounding the fairness and accuracy of classifiers from population statistics

1 code implementation ICML 2020 Sivan Sabato, Elad Yom-Tov

We study classification models whose properties cannot be estimated using a validation set, either because no such set is available or because the classifier cannot be accessed, even as a black box.

Fairness

Adaptive Combinatorial Maximization: Beyond Approximate Greedy Policies

no code implementations 2 Apr 2024 Shlomi Weitzman, Sivan Sabato

Our approximation guarantees simultaneously support the maximal gain ratio as well as near-submodular utility functions, and include both maximization under a cardinality constraint and a minimum cost coverage guarantee.

Active Learning
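
As a point of reference for the entry above, the sketch below shows only the classical greedy policy for maximizing a monotone submodular function, here a simple set-coverage utility, under a cardinality constraint. The paper's contributions (adaptive policies, the maximal gain ratio, near-submodular utilities, minimum cost coverage) are not reproduced; the coverage utility and all names are illustrative choices of mine.

```python
# Minimal sketch: greedy maximization of a set-coverage utility under a
# cardinality constraint. Coverage is monotone and submodular, so the
# classical greedy policy achieves a (1 - 1/e) approximation; the paper
# analyzes a much broader family of adaptive policies.

def coverage(selected, sets):
    """Utility: number of distinct elements covered by the chosen sets."""
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy_max_coverage(sets, budget):
    selected = []
    for _ in range(budget):
        base = coverage(selected, sets)
        # Pick the set with the largest marginal gain.
        gains = {i: coverage(selected + [i], sets) - base
                 for i in range(len(sets)) if i not in selected}
        best = max(gains, key=gains.get)
        if gains[best] == 0:          # no further gain possible
            break
        selected.append(best)
    return selected

if __name__ == "__main__":
    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
    print(greedy_max_coverage(sets, budget=2))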

On the Capacity Limits of Privileged ERM

no code implementations 5 Mar 2023 Michal Sharoni, Sivan Sabato

We provide a counterexample to a claim made in that work regarding the VC dimension of the loss class induced by this problem, and conclude that the claim is incorrect.

Improved Robust Algorithms for Learning with Discriminative Feature Feedback

no code implementations 8 Sep 2022 Sivan Sabato

In this work, we provide new robust interactive learning algorithms for the Discriminative Feature Feedback model, with mistake bounds that are significantly lower than those of previous robust algorithms for this setting.

Fairness and Unfairness in Binary and Multiclass Classification: Quantifying, Calculating, and Bounding

1 code implementation 7 Jun 2022 Sivan Sabato, Eran Treister, Elad Yom-Tov

We propose a new interpretable measure of unfairness that enables a quantitative analysis of classifier fairness, beyond a dichotomous fair/unfair distinction.

Fairness
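
The paper's own unfairness measure is not reproduced here. Purely as an illustration of what a quantitative, non-dichotomous fairness report can look like, the hedged sketch below computes two standard group-gap statistics, the demographic-parity gap and the true-positive-rate gap, from predictions and protected-group membership; the function and variable names are my own.

```python
import numpy as np

def group_gaps(y_true, y_pred, group):
    """Two standard quantitative fairness statistics (not the paper's measure):
    demographic-parity gap and true-positive-rate (equal-opportunity) gap."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates, tprs = [], []
    for g in np.unique(group):
        mask = group == g
        rates.append(y_pred[mask].mean())          # P(pred = 1 | group = g)
        pos = mask & (y_true == 1)
        tprs.append(y_pred[pos].mean() if pos.any() else np.nan)
    return {"demographic_parity_gap": max(rates) - min(rates),
            "tpr_gap": np.nanmax(tprs) - np.nanmin(tprs)}

# Toy example with two protected groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(group_gaps(y_true, y_pred, group))
```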

Fast Distributed k-Means with a Small Number of Rounds

1 code implementation 31 Jan 2022 Tom Hess, Ron Visbord, Sivan Sabato

Our algorithm guarantees a cost approximation factor and a number of communication rounds that depend only on the computational capacity of the coordinator.

Clustering
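
As a rough illustration of the coordinator communication model discussed above (and emphatically not the paper's algorithm, which is what achieves the stated guarantees), the sketch below shows a common two-round baseline: each machine clusters its local data and sends k weighted centers, and the coordinator runs weighted k-means on the union of these summaries. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def distributed_kmeans(partitions, k, seed=0):
    """Two-round baseline: local k-means summaries -> weighted k-means at the
    coordinator. Not the paper's algorithm; an illustration of the
    machines/coordinator communication pattern only."""
    summaries, weights = [], []
    for X in partitions:                        # round 1: each machine locally
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        summaries.append(km.cluster_centers_)
        weights.append(np.bincount(km.labels_, minlength=k))
    centers = np.vstack(summaries)              # round 2: coordinator side
    w = np.concatenate(weights).astype(float)
    coord = KMeans(n_clusters=k, n_init=10, random_state=seed)
    coord.fit(centers, sample_weight=w)
    return coord.cluster_centers_

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    parts = [rng.normal(loc=c, size=(200, 2)) for c in (0.0, 5.0, 10.0)]
    print(distributed_kmeans(parts, k=3))
```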

A Fast Algorithm for PAC Combinatorial Pure Exploration

1 code implementation 8 Dec 2021 Noa Ben-David, Sivan Sabato

We provide sample complexity guarantees for our algorithm and demonstrate its usefulness in experiments on large problems, whereas previous algorithms are impractical to run even on problems with a few dozen arms.

Active Structure Learning of Bayesian Networks in an Observational Setting

1 code implementation 25 Mar 2021 Noa Ben-David, Sivan Sabato

We show that for a class of distributions that we term stable, a sample complexity reduction of up to a factor of $\widetilde{\Omega}(d^3)$ can be obtained, where $d$ is the number of network variables.

Active Learning

A Constant Approximation Algorithm for Sequential Random-Order No-Substitution k-Median Clustering

no code implementations NeurIPS 2021 Tom Hess, Michal Moshkovitz, Sivan Sabato

We give the first algorithm for this setting that obtains a constant approximation factor on the optimal risk under a random arrival order, an exponential improvement over previous work.

Clustering

Active Feature Selection for the Mutual Information Criterion

1 code implementation 13 Dec 2020 Shachar Schnapp, Sivan Sabato

We study active feature selection, a novel feature selection setting in which unlabeled data is available, but the budget for labels is limited, and the examples to label can be actively selected by the algorithm.

Feature Selection, Multi-Armed Bandits
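
A minimal sketch of the setting described above: features are ranked by estimated mutual information with the label, but only a small, actively chosen subset of examples may be labeled. The paper's selection strategy (the task tags suggest a bandit-style approach) is not reproduced; here the labeling budget is spent on a uniformly random subset, which should be read as a placeholder.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def select_features_with_budget(X, oracle_labels, budget, top_k, seed=0):
    """Spend `budget` label queries (here: a random subset, a placeholder for an
    active strategy), then rank features by estimated mutual information."""
    rng = np.random.default_rng(seed)
    queried = rng.choice(len(X), size=budget, replace=False)
    y_small = oracle_labels[queried]            # the only labels we ever observe
    mi = mutual_info_classif(X[queried], y_small, random_state=seed)
    return np.argsort(mi)[::-1][:top_k]

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=4, random_state=0)
print(select_features_with_budget(X, y, budget=150, top_k=4))
```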

Approximating a Target Distribution using Weight Queries

no code implementations 24 Jun 2020 Nadav Barak, Sivan Sabato

The algorithm finds a reweighting of the data set that approximates the weights according to the target distribution, using a limited number of weight queries.

Domain Adaptation, Multi-Armed Bandits
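
The sketch below is a naive illustration of the weight-query interface described above: the data set is partitioned into clusters, one importance weight is queried per cluster, and that weight is propagated to all points in the cluster before normalization. The paper's scheme for deciding where to spend the query budget is not reproduced; the `query_weight` oracle and all names are my own.

```python
import numpy as np
from sklearn.cluster import KMeans

def reweight_with_queries(X, query_weight, n_queries, seed=0):
    """Naive sketch: one weight query per cluster, propagated to all members.
    `query_weight(x)` is an assumed oracle returning the target-distribution
    importance weight of a single point."""
    km = KMeans(n_clusters=n_queries, n_init=10, random_state=seed).fit(X)
    w = np.empty(len(X))
    for c in range(n_queries):
        members = np.where(km.labels_ == c)[0]
        rep = members[0]                        # query one representative point
        w[members] = query_weight(X[rep])
    return w / w.sum()                          # normalized approximate weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    # Hypothetical oracle: the target distribution up-weights the right half-plane.
    oracle = lambda x: 3.0 if x[0] > 0 else 1.0
    print(reweight_with_queries(X, oracle, n_queries=10)[:5])
```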

Robust Learning from Discriminative Feature Feedback

no code implementations 9 Mar 2020 Sanjoy Dasgupta, Sivan Sabato

We show how such errors can be handled algorithmically, in both an adversarial and a stochastic setting.

Epsilon-Best-Arm Identification in Pay-Per-Reward Multi-Armed Bandits

no code implementations NeurIPS 2019 Sivan Sabato

We study epsilon-best-arm identification, in a setting where during the exploration phase, the cost of each arm pull is proportional to the expected future reward of that arm.

Multi-Armed Bandits
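
To make the cost model above concrete, the toy sketch below simulates uniform exploration over Bernoulli arms and charges each pull a cost proportional to the pulled arm's (unknown) mean reward, then reports the empirically best arm and the total cost paid. This only illustrates the pay-per-reward accounting; it is not the paper's identification algorithm.

```python
import numpy as np

def uniform_exploration_cost(means, pulls_per_arm, seed=0):
    """Pay-per-reward accounting: each pull of arm i costs (proportionally)
    the arm's expected reward means[i]. Uniform exploration is a placeholder
    strategy, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means)
    rewards = rng.binomial(1, means, size=(pulls_per_arm, len(means)))
    empirical = rewards.mean(axis=0)
    total_cost = pulls_per_arm * means.sum()    # cost accrued during exploration
    return int(np.argmax(empirical)), total_cost

print(uniform_exploration_cost([0.1, 0.5, 0.55, 0.2], pulls_per_arm=500))
```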

Universal Bayes consistency in metric spaces

no code implementations 24 Jun 2019 Steve Hanneke, Aryeh Kontorovich, Sivan Sabato, Roi Weiss

This is the first learning algorithm known to enjoy this property; by comparison, the $k$-NN classifier and its variants are not generally universally Bayes-consistent, except under additional structural assumptions, such as an inner product, a norm, finite dimension, or a Besicovitch-type property.

Sequential no-Substitution k-Median-Clustering

1 code implementation 30 May 2019 Tom Hess, Sivan Sabato

We provide an efficient algorithm for this setting, and show that its multiplicative approximation factor is twice the approximation factor of an efficient offline algorithm.

Clustering

Learning from discriminative feature feedback

no code implementations NeurIPS 2018 Sanjoy Dasgupta, Akansha Dey, Nicholas Roberts, Sivan Sabato

We consider the problem of learning a multi-class classifier from labels as well as simple explanations that we call "discriminative features".

The Principle of Logit Separation

no code implementations ICLR 2018 Gil Keren, Sivan Sabato, Björn Schuller

In contrast, there are known loss functions, as well as novel batch loss functions that we propose, which are aligned with this principle.

Image Retrieval
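
The numpy demonstration below illustrates the core tension behind the principle, under my reading of it: softmax cross-entropy is invariant to adding a per-example constant to all logits, so on its own it cannot make logits comparable across examples, whereas a per-logit sigmoid (binary cross-entropy) loss is not shift-invariant. The batch losses proposed in the paper are not reproduced here.

```python
import numpy as np

def softmax_ce(logits, label):
    z = logits - logits.max()                   # numerically stable softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def sigmoid_bce(logits, label):
    y = np.zeros_like(logits); y[label] = 1.0   # one-vs-all targets
    return np.sum(np.log1p(np.exp(-logits)) * y + np.log1p(np.exp(logits)) * (1 - y))

logits = np.array([2.0, -1.0, 0.5])
shifted = logits + 10.0                         # same example, all logits shifted

print(softmax_ce(logits, 0), softmax_ce(shifted, 0))    # identical: shift-invariant
print(sigmoid_bce(logits, 0), sigmoid_bce(shifted, 0))  # differ: absolute logit values matter
```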

Temporal anomaly detection: calibrating the surprise

1 code implementation 29 May 2017 Eyal Gutflaish, Aryeh Kontorovich, Sivan Sabato, Ofer Biller, Oded Sofer

We learn a low-rank stationary model from the training data, and then fit a regression model for predicting the expected likelihood score of normal access patterns in the future.

Anomaly Detection
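
The sketch below mirrors the two-stage pipeline described above in a very simplified form: a low-rank model (truncated SVD) is fit to per-window access-count vectors from the training period, a linear regression predicts how the model's score should evolve over time, and test windows whose scores fall well below the prediction are flagged. The reconstruction-based score and the threshold are illustrative stand-ins for the paper's likelihood model.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

def fit_detector(train_counts, rank=5):
    """train_counts: (n_windows, n_resources) access counts from normal behavior."""
    svd = TruncatedSVD(n_components=rank, random_state=0).fit(train_counts)
    recon = svd.inverse_transform(svd.transform(train_counts))
    scores = -np.linalg.norm(train_counts - recon, axis=1)   # higher = more normal
    t = np.arange(len(scores)).reshape(-1, 1)
    trend = LinearRegression().fit(t, scores)                # expected score over time
    resid_std = (scores - trend.predict(t)).std()
    return svd, trend, resid_std

def flag_anomalies(svd, trend, resid_std, test_counts, t0, n_std=3.0):
    recon = svd.inverse_transform(svd.transform(test_counts))
    scores = -np.linalg.norm(test_counts - recon, axis=1)
    t = (t0 + np.arange(len(scores))).reshape(-1, 1)
    return scores < trend.predict(t) - n_std * resid_std     # True = anomalous window

rng = np.random.default_rng(0)
train = rng.poisson(3.0, size=(200, 50))
test = rng.poisson(3.0, size=(20, 50))
test[7] += rng.poisson(20.0, size=50)                        # inject an unusual window
svd, trend, s = fit_detector(train)
print(np.where(flag_anomalies(svd, trend, s, test, t0=200))[0])
```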

Fast Single-Class Classification and the Principle of Logit Separation

2 code implementations 29 May 2017 Gil Keren, Sivan Sabato, Björn Schuller

Our experiments show that in almost all cases, losses aligned with the Principle of Logit Separation obtain at least a 20% relative accuracy improvement on the SLC task compared to losses that are not aligned with it, and sometimes considerably more.

Binary Classification, Classification +2

Tunable Sensitivity to Large Errors in Neural Network Training

no code implementations 23 Nov 2016 Gil Keren, Sivan Sabato, Björn Schuller

We propose incorporating this idea of tunable sensitivity for hard examples in neural network learning, using a new generalization of the cross-entropy gradient step, which can be used in place of the gradient in any gradient-based training method.
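
Below is a hedged numpy sketch of the idea of a tunable cross-entropy gradient step. The exact generalization proposed in the paper is not reproduced; as a stated assumption, this sketch scales each example's standard softmax cross-entropy gradient by (1 - p_correct)^(k-1), which recovers the usual gradient at k = 1 and increasingly emphasizes examples with large errors as k grows.

```python
import numpy as np

def tunable_ce_gradient(logits, labels, k=1.0):
    """Gradient w.r.t. logits of a cross-entropy-like step with tunable
    sensitivity to hard examples. NOTE: the scaling (1 - p_correct)**(k - 1)
    is an illustrative assumption, not the paper's exact generalization;
    k = 1 recovers the ordinary softmax cross-entropy gradient."""
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)     # softmax probabilities
    onehot = np.eye(logits.shape[1])[labels]
    p_correct = p[np.arange(len(labels)), labels]
    scale = (1.0 - p_correct) ** (k - 1.0)                   # per-example emphasis
    return scale[:, None] * (p - onehot)

logits = np.array([[4.0, 0.0, 0.0],     # easy, confidently correct example
                   [0.2, 0.0, 0.1]])    # hard, low-margin example
labels = np.array([0, 0])
print(np.abs(tunable_ce_gradient(logits, labels, k=1.0)).sum(axis=1))
print(np.abs(tunable_ce_gradient(logits, labels, k=3.0)).sum(axis=1))  # hard example dominates more
```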

Active Nearest-Neighbor Learning in Metric Spaces

no code implementations NeurIPS 2016 Aryeh Kontorovich, Sivan Sabato, Ruth Urner

We propose a pool-based non-parametric active learning algorithm for general metric spaces, called MArgin Regularized Metric Active Nearest Neighbor (MARMANN), which outputs a nearest-neighbor classifier.

Active Learning, Model Selection

Submodular Learning and Covering with Response-Dependent Costs

no code implementations 23 Feb 2016 Sivan Sabato

We further show that in both settings, the approximation factor of this greedy algorithm is near-optimal among all greedy algorithms.

Active Learning

Interactive algorithms: from pool to stream

no code implementations 2 Feb 2016 Sivan Sabato, Tom Hess

We consider interactive algorithms in both the pool-based and the stream-based settings.

Active Learning, Binary Classification +1

Active Regression by Stratification

no code implementations NeurIPS 2014 Sivan Sabato, Remi Munos

We propose a new active learning algorithm for parametric linear regression with random design.

Active Learning, General Classification +1

Loss minimization and parameter estimation with heavy tails

no code implementations 7 Jul 2013 Daniel Hsu, Sivan Sabato

This work studies applications and generalizations of a simple estimation technique that provides exponential concentration under heavy-tailed distributions, assuming only bounded low-order moments.

regression
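
The simple estimation technique referred to above is closely related to the median-of-means mean estimator, the prototypical example of an estimator with exponential concentration under only bounded low-order moments. A minimal numpy implementation follows; the block count and the heavy-tailed test distribution are illustrative choices of mine.

```python
import numpy as np

def median_of_means(x, n_blocks=10, seed=0):
    """Median-of-means mean estimator: split the sample into blocks, average
    each block, and return the median of the block means. Concentrates
    exponentially assuming only a bounded second moment."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x))
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(1)
# Heavy-tailed sample: Student's t with 2.5 degrees of freedom (mean 0).
sample = rng.standard_t(df=2.5, size=2000)
print("empirical mean: ", sample.mean())
print("median of means:", median_of_means(sample, n_blocks=20))
```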

Auditing: Active Learning with Outcome-Dependent Query Costs

no code implementations NeurIPS 2013 Sivan Sabato, Anand D. Sarwate, Nathan Srebro

We term the setting auditing, and consider the auditing complexity of an algorithm: the number of negative labels the algorithm requires in order to learn a hypothesis with low relative error.

Active Learning, Binary Classification +2

Feature Multi-Selection among Subjective Features

no code implementations 18 Feb 2013 Sivan Sabato, Adam Kalai

When dealing with subjective, noisy, or otherwise nebulous features, the "wisdom of crowds" suggests that one may benefit from multiple judgments of the same feature on the same object.

regression
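
As a one-line illustration of why repeated judgments help (under a simple additive-noise assumption of my own, which does not capture the paper's actual question of how to allocate a judgment budget across features): if each judgment of a feature value $v$ is $v + \varepsilon_j$ with i.i.d. noise of variance $\sigma^2$, then $\mathrm{Var}\big(\tfrac{1}{m}\sum_{j=1}^{m}(v+\varepsilon_j)\big) = \sigma^2/m$, so averaging $m$ independent judgments reduces the noise variance by a factor of $m$.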

Learning Sparse Low-Threshold Linear Classifiers

no code implementations 13 Dec 2012 Sivan Sabato, Shai Shalev-Shwartz, Nathan Srebro, Daniel Hsu, Tong Zhang

We consider the problem of learning a non-negative linear classifier with a $1$-norm of at most $k$, and a fixed threshold, under the hinge-loss.
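
Under one natural reading of this setting (an assumption on my part; the paper's exact formulation may differ, e.g. in normalization or in how the threshold enters the margin), the learning problem can be written as hinge-loss minimization over non-negative, $1$-norm-bounded weight vectors with a fixed threshold $\theta$: $\min_{w \ge 0,\ \|w\|_1 \le k}\ \frac{1}{n}\sum_{i=1}^{n} \max\bigl(0,\ 1 - y_i(\langle w, x_i \rangle - \theta)\bigr)$.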

Efficient Active Learning of Halfspaces: an Aggressive Approach

no code implementations 17 Aug 2012 Alon Gonen, Sivan Sabato, Shai Shalev-Shwartz

Our efficient aggressive active learner of half-spaces has formal approximation guarantees that hold when the pool is separable with a margin.

Active Learning

Distribution-Dependent Sample Complexity of Large Margin Learning

no code implementations 5 Apr 2012 Sivan Sabato, Nathan Srebro, Naftali Tishby

We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization. We introduce the margin-adapted dimension, a simple function of the second-order statistics of the data distribution, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the margin-adapted dimension of the data distribution.

Active Learning, General Classification +1

Tight Sample Complexity of Large-Margin Learning

no code implementations NeurIPS 2010 Sivan Sabato, Nathan Srebro, Naftali Tishby

We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization. We introduce the gamma-adapted-dimension, a simple function of the spectrum of a distribution's covariance matrix, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the gamma-adapted-dimension of the source distribution.

Classification, General Classification +1
