Search Results for author: Tavor Z. Baharav

Found 6 papers, 4 papers with code

Adaptive Data Depth via Multi-Armed Bandits

1 code implementation • 8 Nov 2022 • Tavor Z. Baharav, Tze Leung Lai

For example, we may want to find the most central point in a data set (a generalized median), or to identify and remove all outliers (points on the fringe of the data set with low depth).

Multi-Armed Bandits
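The "most central point" the abstract refers to can be illustrated with a minimal sketch: the exact O(n²) computation of the point minimizing total distance to all others (a generalized median, or medoid). Note this brute-force version is only the baseline; the paper's contribution is avoiding this full cost via multi-armed bandits, which this sketch does not implement.

```python
import numpy as np

def medoid(points):
    """Exact generalized median: the data point minimizing total
    Euclidean distance to all other points. O(n^2) distance
    computations -- the baseline the paper's adaptive method beats."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)      # full pairwise distance matrix
    return int(np.argmin(dists.sum(axis=1)))    # row with smallest total distance

pts = np.array([[0.0], [1.0], [2.0], [3.0], [100.0]])
print(medoid(pts))  # 2 -- the outlier at 100 pulls the medoid, but 2.0 stays central
```

A point with low total-distance rank is "deep" in the data; conversely, points like the 100.0 outlier above have low depth and are the ones the abstract mentions removing.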

Approximate Function Evaluation via Multi-Armed Bandits

no code implementations • 18 Mar 2022 • Tavor Z. Baharav, Gary Cheng, Mert Pilanci, David Tse

We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$ accurate estimate of $f(\boldsymbol{\mu})$.

Multi-Armed Bandits
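The idea of sampling according to coordinate importance can be sketched with a toy estimator. This is an illustration, not the paper's instance-adaptive algorithm or its $(\epsilon, \delta)$ guarantee: here $f$ is assumed to be a known weighted sum, and the sampling budget is simply allocated in proportion to each coordinate's weight, since high-weight coordinates contribute more to the final error.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_weighted_mean(sample_fns, weights, budget=10_000):
    """Toy importance-allocation sketch (hypothetical, not the paper's
    method): estimate f(mu) = sum_i w_i * mu_i from noisy per-coordinate
    samplers, spending more of the budget on high-|w_i| coordinates."""
    w = np.asarray(weights, dtype=float)
    alloc = np.maximum((budget * np.abs(w) / np.abs(w).sum()).astype(int), 1)
    mu_hat = np.array([np.mean([f() for _ in range(n)])
                       for f, n in zip(sample_fns, alloc)])
    return float(w @ mu_hat)

# Two coordinates with true means 1.0 and 5.0, observed under Gaussian noise.
fns = [lambda: 1.0 + rng.normal(0, 0.1),
       lambda: 5.0 + rng.normal(0, 0.1)]
est = estimate_weighted_mean(fns, [0.2, 0.8])
print(est)  # close to 0.2*1.0 + 0.8*5.0 = 4.2
```

The paper's setting is harder: the importance of each coordinate is not known in advance and must itself be learned adaptively, which is where the bandit machinery enters.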

Enabling Efficiency-Precision Trade-offs for Label Trees in Extreme Classification

no code implementations • 1 Jun 2021 • Tavor Z. Baharav, Daniel L. Jiang, Kedarnath Kolluri, Sujay Sanghavi, Inderjit S. Dhillon

For such applications, a common approach is to organize these labels into a tree, enabling training and inference times that are logarithmic in the number of labels.

Extreme Multi-Label Classification
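Why a label tree gives logarithmic inference can be shown with a minimal structural sketch (an illustration only, not the paper's trained trees): labels sit at the leaves of a binary tree, and inference descends from the root using a cheap router at each node, so only O(log L) of the L labels are ever touched.

```python
# Toy label tree: a nested tuple of (left, right) with label strings at leaves.

def build_tree(labels):
    """Recursively split the label list in half; depth is O(log L)."""
    if len(labels) == 1:
        return labels[0]
    mid = len(labels) // 2
    return (build_tree(labels[:mid]), build_tree(labels[mid:]))

def infer(node, route):
    """Descend to a single leaf. `route(left, right) -> True to go left`
    is a stand-in for a learned per-node classifier."""
    while isinstance(node, tuple):
        left, right = node
        node = left if route(left, right) else right
    return node

tree = build_tree([f"label_{i}" for i in range(8)])
# A dummy router that always descends right reaches the last label
# after only 3 routing decisions, not 8 label evaluations.
print(infer(tree, lambda l, r: False))  # label_7
```

The efficiency-precision trade-off in the title comes from how the tree is built and routed: a deeper or more aggressively pruned tree is cheaper but risks routing a query away from its correct label early.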

Adaptive Learning of Rank-One Models for Efficient Pairwise Sequence Alignment

1 code implementation • NeurIPS 2020 • Govinda M. Kamath, Tavor Z. Baharav, Ilan Shomorony

The second ingredient is a multi-armed bandit algorithm that adaptively refines this spectral estimator only for read pairs likely to have large alignments.

Ultra Fast Medoid Identification via Correlated Sequential Halving

2 code implementations • 11 Jun 2019 • Tavor Z. Baharav, David N. Tse

On real data, this yields four to five orders of magnitude in gains over exact computation, in both the number of distance computations needed and wall-clock time.

Bandit-Based Monte Carlo Optimization for Nearest Neighbors

1 code implementation • 21 May 2018 • Vivek Bagaria, Tavor Z. Baharav, Govinda M. Kamath, David N. Tse

The celebrated Monte Carlo method estimates an expensive-to-compute quantity by random sampling.

Clustering • Multi-Armed Bandits
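The plain Monte Carlo method the abstract opens with can be shown in a few lines; this is the textbook version (estimating π by random sampling), not the paper's bandit-based refinement of it.

```python
import random

random.seed(0)

def monte_carlo_pi(n):
    """Estimate pi by sampling n points uniformly in the unit square
    and counting the fraction that land inside the quarter circle."""
    hits = sum(random.random()**2 + random.random()**2 <= 1.0
               for _ in range(n))
    return 4.0 * hits / n

print(monte_carlo_pi(100_000))  # approximately 3.14
```

The paper's observation is that in problems like nearest-neighbor search, many such estimates are computed but only the best one matters, so sampling effort can be allocated adaptively across candidates rather than uniformly.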
