no code implementations • ICML 2020 • Ran Haba, Ehsan Kazemi, Moran Feldman, Amin Karbasi
Moreover, we propose the first streaming algorithms for monotone submodular maximization subject to $k$-extendible and $k$-system constraints.
no code implementations • 20 Feb 2023 • Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni, Grigoris Velegkas, Felix Zhou
In this paper, we design replicable algorithms in the context of statistical clustering under the recently introduced notion of replicability.
no code implementations • 5 Feb 2023 • Amir Zandieh, Insu Han, Majid Daliri, Amin Karbasi
The dot-product attention mechanism plays a crucial role in modern deep architectures (e.g., Transformer) for sequence modeling; however, naïve exact computation of this model incurs quadratic time and memory complexities in sequence length, hindering the training of long-sequence models.
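To make the quadratic cost concrete, here is a minimal NumPy sketch of exact (naïve) softmax attention; the explicit $n \times n$ score matrix is the object that efficient approximations avoid materializing. This illustrates the bottleneck only, not the paper's method, and all names are placeholders.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Exact softmax attention: O(n^2) time and memory in the sequence length n."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                    # n x n matrix: the quadratic bottleneck
    scores -= scores.max(axis=1, keepdims=True)      # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                               # n x d output

n, d = 1024, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = naive_attention(Q, K, V)                       # memory for `scores` grows as n**2
```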
no code implementations • 23 Jan 2023 • Amin Karbasi, Kasper Green Larsen
The aim of boosting is to convert a sequence of weak learners into a strong learner.
1 code implementation • 18 Oct 2022 • Jane H. Lee, Saeid Haghighatshoar, Amin Karbasi
their weights, and (2) we propose a novel training algorithm, called forward propagation (FP), that computes exact gradients for SNNs.
no code implementations • 6 Oct 2022 • Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
In this work we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary in both realizable and agnostic settings.
no code implementations • 5 Oct 2022 • Alkis Kalavasis, Grigoris Velegkas, Amin Karbasi
Second, we consider the problem of multiclass classification with structured data (such as data lying on a low dimensional manifold or satisfying margin conditions), a setting which is captured by partial concept classes (Alon, Hanneke, Holzman and Moran, FOCS '21).
no code implementations • 4 Oct 2022 • Hossein Esfandiari, Alkis Kalavasis, Amin Karbasi, Andreas Krause, Vahab Mirrokni, Grigoris Velegkas
Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter.
2 code implementations • 9 Sep 2022 • Insu Han, Amir Zandieh, Jaehoon Lee, Roman Novak, Lechao Xiao, Amin Karbasi
Moreover, most prior works on neural kernels have focused on the ReLU activation, mainly due to its popularity but also due to the difficulty of computing such kernels for general activations.
1 code implementation • 1 Jul 2022 • Insu Han, Mike Gartrell, Elvis Dohmatob, Amin Karbasi
In this work, we develop a scalable MCMC sampling algorithm for $k$-NDPPs with low-rank kernels, thus enabling runtime that is sublinear in $n$.
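For background, the following is a minimal sketch of the standard swap-based MCMC baseline for sampling a size-$k$ subset under a symmetric PSD kernel $L$, where $P(S) \propto \det(L_S)$. It recomputes determinants from scratch and is not the paper's sublinear-time low-rank NDPP sampler; names are illustrative.

```python
import numpy as np

def kdpp_swap_mcmc(L, k, n_steps=5000, seed=0):
    """Metropolis swap chain targeting P(S) proportional to det(L_S) over size-k subsets."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    S = list(rng.choice(n, size=k, replace=False))
    logdet = lambda idx: np.linalg.slogdet(L[np.ix_(idx, idx)])[1]
    cur = logdet(S)
    for _ in range(n_steps):
        i = int(rng.integers(k))                                   # position to swap out
        j = int(rng.choice([x for x in range(n) if x not in S]))   # element to swap in
        proposal = S[:i] + S[i + 1:] + [j]
        new = logdet(proposal)
        if np.log(rng.random()) < new - cur:                       # Metropolis acceptance
            S, cur = proposal, new
    return S

# usage: X = np.random.randn(50, 10); L = X @ X.T + 1e-6 * np.eye(50); kdpp_swap_mcmc(L, 5)
```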
1 code implementation • 2 Jun 2022 • Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani
In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields, and prove that, if such a function diminishes to zero during the training procedure, the trajectory of the densities generated by the hypothesis velocity fields converges to the solution of the FPE in the Wasserstein-2 sense.
no code implementations • 26 Apr 2022 • Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias
For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, and recovers the generalization error guarantees of stochastic algorithms with fewer assumptions.
no code implementations • ICLR 2022 • Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Amin Karbasi
To train machine learning models that are robust to distribution shifts in the data, distributionally robust optimization (DRO) has proven very effective.
no code implementations • 3 Mar 2022 • Grigoris Velegkas, Zhuoran Yang, Amin Karbasi
In this paper, we study the problem of regret minimization for episodic Reinforcement Learning (RL) in both the model-free and the model-based settings.
no code implementations • 17 Feb 2022 • Mohammad Fereydounian, Hamed Hassani, Amin Karbasi
We prove that: (i) a GNN, as a graph function, is necessarily permutation compatible; (ii) conversely, any permutation compatible function, when restricted to input graphs with distinct node features, can be generated by a GNN; (iii) for arbitrary node features (not necessarily distinct), a simple feature augmentation scheme suffices to generate a permutation compatible function by a GNN; (iv) permutation compatibility can be verified by checking only quadratically many functional constraints, rather than an exhaustive search over all the permutations; (v) GNNs can generate any graph function once we augment the node features with node identities, thus going beyond graph isomorphism and permutation compatibility.
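A quick numerical illustration of claim (i), assuming a toy one-layer message-passing GNN in NumPy: relabeling the nodes permutes the outputs accordingly. This is only a sanity check of the definition, not the paper's construction.

```python
import numpy as np

def gnn_layer(A, X, W1, W2):
    """One message-passing layer: H = relu(X W1 + A X W2)."""
    return np.maximum(X @ W1 + A @ X @ W2, 0.0)

rng = np.random.default_rng(0)
n, d, h = 6, 4, 8
A = rng.integers(0, 2, size=(n, n)); A = np.triu(A, 1); A = A + A.T   # random undirected graph
X = rng.standard_normal((n, d))                                       # node features
W1, W2 = rng.standard_normal((d, h)), rng.standard_normal((d, h))

P = np.eye(n)[rng.permutation(n)]                                     # random permutation matrix
out = gnn_layer(A, X, W1, W2)
out_relabeled = gnn_layer(P @ A @ P.T, P @ X, W1, W2)
assert np.allclose(out_relabeled, P @ out)   # permutation compatibility: f(PAP^T, PX) = P f(A, X)
```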
no code implementations • 14 Feb 2022 • Konstantinos E. Nikolakakis, Farzin Haddadpour, Dionysios S. Kalogerias, Amin Karbasi
These bounds coincide with those for SGD, and rather surprisingly are independent of $d$, $K$ and the batch size $m$, under appropriate choices of a slightly decreased learning rate.
1 code implementation • ICLR 2022 • Insu Han, Mike Gartrell, Jennifer Gillenwater, Elvis Dohmatob, Amin Karbasi
However, existing work leaves open the question of scalable NDPP sampling.
no code implementations • 2 Jul 2021 • Javid Dadashkarimi, Amin Karbasi, Dustin Scheinost
Being able to map connectomes and derived results between different atlases without additional pre-processing is a crucial step in improving interpretation and generalization between studies that use different atlases.
no code implementations • NeurIPS 2021 • Shashank Rajput, Kartik Sreenivasan, Dimitris Papailiopoulos, Amin Karbasi
Recently, Vershynin (2020) settled a long standing question by Baum (1988), proving that \emph{deep threshold} networks can memorize $n$ points in $d$ dimensions using $\widetilde{\mathcal{O}}(e^{1/\delta^2}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(e^{1/\delta^2}(d+\sqrt{n})+n)$ weights, where $\delta$ is the minimum distance between the points.
no code implementations • NeurIPS 2021 • Siddharth Mitra, Moran Feldman, Amin Karbasi
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions and provide constant factor approximation guarantees for (non-convex/non-concave) continuous submodular functions.
no code implementations • NeurIPS 2021 • Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan
How can we make use of information parallelism in online decision making problems while efficiently balancing the exploration-exploitation trade-off?
no code implementations • 18 May 2021 • Ji Gao, Amin Karbasi, Mohammad Mahmoody
In this paper, we study PAC learnability and certification of predictions under instance-targeted poisoning attacks, where the adversary who knows the test instance may change a fraction of the training set with the goal of fooling the learner at the test instance.
no code implementations • 6 Apr 2021 • Christopher Harshaw, Ehsan Kazemi, Moran Feldman, Amin Karbasi
We propose subsampling as a unified algorithmic technique for submodular maximization in centralized and online settings.
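A minimal sketch of the subsampling idea in the simplest (cardinality-constrained, centralized) case: keep each element with a fixed probability and run plain greedy on the retained sample. The value oracle, the probability, and the names are illustrative; the paper analyzes more general constraints and the online setting.

```python
import random

def sample_greedy(ground_set, f, k, p=0.5, seed=0):
    """Keep each element with probability p, then run plain greedy on the sample."""
    rng = random.Random(seed)
    sample = [e for e in ground_set if rng.random() < p]
    S, value = [], f([])
    for _ in range(k):
        gains = {e: f(S + [e]) - value for e in sample if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:                      # stop once no sampled element helps
            break
        S.append(best)
        value += gains[best]
    return S

# usage with a toy coverage function over sets of covered items:
# covers = {0: {1, 2}, 1: {2, 3}, 2: {4}}
# f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
# print(sample_greedy(list(covers), f, k=2))
```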
no code implementations • 11 Mar 2021 • Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi
First, in the semi-heterogeneous setting, when the marginal distributions of the feature vectors on client machines are identical, we develop the federated functional gradient boosting (FFGB) method that provably converges to the global minimum.
no code implementations • 25 Feb 2021 • Quanquan Gu, Amin Karbasi, Khashayar Khosravi, Vahab Mirrokni, Dongruo Zhou
In many sequential decision-making problems, the individuals are split into several batches and the decision-maker is only allowed to change her policy at the end of batches.
no code implementations • NeurIPS 2020 • Aditya Bhaskara, Amin Karbasi, Silvio Lattanzi, Morteza Zadimoghaddam
In this paper, we provide an efficient approximation algorithm for finding the most likely (MAP) configuration of size $k$ for Determinantal Point Processes (DPPs) in the online setting, where the data points arrive in an arbitrary order and the algorithm cannot discard the selected elements from its local memory.
1 code implementation • 29 Sep 2020 • Moran Feldman, Christopher Harshaw, Amin Karbasi
We also present SubmodularGreedy.jl, a Julia package which implements these algorithms and may be downloaded at https://github.com/crharshaw/SubmodularGreedy.jl.
no code implementations • NeurIPS 2021 • Lin Chen, Yifei Min, Mikhail Belkin, Amin Karbasi
This paper explores the generalization loss of linear regression in variably parameterized families of models, both under-parameterized and over-parameterized.
no code implementations • 23 Jun 2020 • Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani
More precisely, by assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal $\mathcal{O}(1/\epsilon^2)$ gradient oracle complexity (resp.
no code implementations • NeurIPS 2020 • Moran Feldman, Amin Karbasi
We first prove that a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\varepsilon)$-approximation guarantee while performing $O(n/\varepsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\varepsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem).
no code implementations • 19 Jun 2020 • Ruitu Xu, Lin Chen, Amin Karbasi
In this paper, we establish the ordinary differential equation (ODE) that underlies the training dynamics of Model-Agnostic Meta-Learning (MAML).
no code implementations • 17 Jun 2020 • Ehsan Tohidi, Rouhollah Amiri, Mario Coutino, David Gesbert, Geert Leus, Amin Karbasi
We introduce a variety of submodular-friendly applications, and elucidate the relation of submodularity to convexity and concavity, which enables efficient optimization.
no code implementations • 16 Jun 2020 • Wenxin Li, Moran Feldman, Ehsan Kazemi, Amin Karbasi
In this paper, we provide the first deterministic algorithm that achieves the tight $1-1/e$ approximation guarantee for submodular maximization under a cardinality (size) constraint while making a number of queries that scales only linearly with the size of the ground set $n$.
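For reference, the classical greedy algorithm attains the same $1-1/e$ guarantee for monotone objectives but spends $O(nk)$ value queries; the contribution above is reducing the query count to linear in $n$. Below is a sketch of that baseline, assuming a set-value oracle `f` that accepts lists.

```python
def greedy_cardinality(ground_set, f, k):
    """Classical greedy: k passes, each picking the element with the largest marginal gain.

    Uses O(n * k) calls to the value oracle f.
    """
    S, value = [], f([])
    for _ in range(k):
        gains = {e: f(S + [e]) - value for e in ground_set if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:          # cannot happen for monotone f, kept for safety
            break
        S.append(best)
        value += gains[best]
    return S
```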
no code implementations • 25 Feb 2020 • Yifei Min, Lin Chen, Amin Karbasi
In the medium adversary regime, with more training data, the generalization loss exhibits a double descent curve, which implies the existence of an intermediate stage where more training data hurts the generalization.
no code implementations • ICML 2020 • Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
Despite remarkable success in practice, modern machine learning models have been found to be susceptible to adversarial attacks that make human-imperceptible perturbations to the data, but result in serious and potentially dangerous prediction errors.
no code implementations • 10 Feb 2020 • Ehsan Kazemi, Shervin Minaee, Moran Feldman, Amin Karbasi
In this paper, we propose scalable methods for maximizing a regularized submodular function $f = g - \ell$ expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$.
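As context, a compact sketch of the distorted-greedy template commonly used for objectives of the form $g - \ell$ under a cardinality constraint; the paper's scalable (streaming and distributed) methods go beyond this centralized baseline, and the oracle names here are illustrative.

```python
def distorted_greedy(ground_set, g, ell, k):
    """Greedy for f = g - ell with |S| <= k, where g is a monotone submodular value
    oracle (takes a list) and ell gives the modular cost of a single element."""
    S = []
    for i in range(k):
        factor = (1.0 - 1.0 / k) ** (k - i - 1)       # distortion decays over iterations
        best, best_gain = None, 0.0
        for e in ground_set:
            if e in S:
                continue
            gain = factor * (g(S + [e]) - g(S)) - ell(e)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is not None:                          # add only if the distorted gain is positive
            S.append(best)
    return S
```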
no code implementations • NeurIPS 2020 • Ashwinkumar Badanidiyuru, Amin Karbasi, Ehsan Kazemi, Jan Vondrak
In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization.
1 code implementation • 9 Feb 2020 • Ran Haba, Ehsan Kazemi, Moran Feldman, Amin Karbasi
In this paper, we propose a novel framework that converts streaming algorithms for monotone submodular maximization into streaming algorithms for non-monotone submodular maximization.
no code implementations • NeurIPS 2019 • Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen
Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\text{OPT} - \epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle.
no code implementations • 9 Nov 2019 • Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni
We propose an efficient semi-adaptive policy that with $O(\log n \times \log k)$ adaptive rounds of observations can achieve an almost tight $1-1/e-\epsilon$ approximation guarantee with respect to an optimal policy that carries out $k$ actions in a fully sequential manner.
no code implementations • NeurIPS 2019 • Mingrui Zhang, Lin Chen, Hamed Hassani, Amin Karbasi
In this paper, we propose three online algorithms for submodular maximization.
no code implementations • NeurIPS 2020 • Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi
To establish the dimension-independent upper bound, we next show that a mini-batching algorithm provides an $ O(\frac{T}{\sqrt{K}}) $ upper bound, and therefore conclude that the minimax regret of switching-constrained OCO is $ \Theta(\frac{T}{\sqrt{K}}) $ for any $K$.
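A toy sketch of the mini-batching idea: split the $T$ rounds into $K$ blocks, keep the action fixed within a block (so at most $K$ switches occur), and take one projected gradient step per block. The loss access pattern, step size, and feasible set below are placeholders.

```python
import numpy as np

def minibatched_ogd(grad, T, K, dim, eta=0.1, radius=1.0):
    """Play a fixed action inside each of K blocks, so the action switches at most K times."""
    x = np.zeros(dim)
    plays, g_acc = [], np.zeros(dim)
    block = max(1, T // K)
    for t in range(T):
        plays.append(x.copy())                 # action played in round t
        g_acc += grad(x, t)                    # accumulate gradient feedback within the block
        if (t + 1) % block == 0:               # update (i.e., switch) only at block boundaries
            x = x - eta * g_acc / block
            nrm = np.linalg.norm(x)
            if nrm > radius:                   # project back onto the Euclidean ball
                x *= radius / nrm
            g_acc = np.zeros(dim)
    return plays

# usage with quadratic losses f_t(x) = ||x - z_t||^2 for some sequence z_t:
# z = np.random.randn(1000, 5); plays = minibatched_ogd(lambda x, t: 2 * (x - z[t]), 1000, 30, 5)
```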
no code implementations • 11 Oct 2019 • Hossein Esfandiari, Amin Karbasi, Abbas Mehrabian, Vahab Mirrokni
We present simple and efficient algorithms for the batched stochastic multi-armed bandit and batched stochastic linear bandit problems.
no code implementations • 10 Oct 2019 • Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
One of the beauties of the projected gradient descent method lies in its rather simple mechanism yet stable behavior with inexact, stochastic gradients, which has led to its widespread use in many machine learning applications.
no code implementations • 2 May 2019 • Ehsan Kazemi, Marko Mitrovic, Morteza Zadimoghaddam, Silvio Lattanzi, Amin Karbasi
We show how one can achieve the tight $(1/2)$-approximation guarantee with $O(k)$ shared memory while minimizing not only the required rounds of computations but also the total number of communicated bits.
1 code implementation • 19 Apr 2019 • Christopher Harshaw, Moran Feldman, Justin Ward, Amin Karbasi
It is generally believed that submodular functions -- and the more general class of $\gamma$-weakly submodular functions -- may only be optimized under the non-negativity assumption $f(S) \geq 0$.
no code implementations • 19 Feb 2019 • Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen
It is known that this rate is optimal in terms of stochastic gradient evaluations.
no code implementations • 17 Feb 2019 • Mingrui Zhang, Lin Chen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
How can we efficiently mitigate the overhead of gradient communications in distributed optimization?
1 code implementation • NeurIPS 2019 • Marko Mitrovic, Ehsan Kazemi, Moran Feldman, Andreas Krause, Amin Karbasi
In many machine learning applications, one needs to interactively select a sequence of items (e.g., recommending movies based on a user's feedback) or make sequential decisions in a certain order (e.g., guiding an agent through a series of states).
no code implementations • 28 Jan 2019 • Lin Chen, Mingrui Zhang, Hamed Hassani, Amin Karbasi
In this paper, we consider the problem of black-box continuous submodular maximization where we only have access to the function values and no information about the derivatives is provided.
no code implementations • 15 Nov 2018 • Lin Chen, Moran Feldman, Amin Karbasi
In this paper, we consider the unconstrained submodular maximization problem.
no code implementations • 12 Nov 2018 • Soheil Ghili, Ehsan Kazemi, Amin Karbasi
How can we control for latent discrimination in predictive models?
no code implementations • ICML 2018 • Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi
Can we efficiently extract useful information from a large user-generated dataset while protecting the privacy of the users and/or ensuring fairness in representation?
no code implementations • ICML 2018 • Marko Mitrovic, Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi
The sheer scale of modern datasets has resulted in a dire need for summarization techniques that identify representative elements in a dataset.
no code implementations • 18 May 2018 • Lin Chen, Mingrui Zhang, Amin Karbasi
In this paper, we propose the first computationally efficient projection-free algorithm for bandit convex optimization (BCO).
no code implementations • 24 Apr 2018 • Aryan Mokhtari, Hamed Hassani, Amin Karbasi
Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $((1-1/e)\text{OPT}-\epsilon)$ guarantee with $O(1/\epsilon^3)$ stochastic gradient computations.
no code implementations • ICML 2018 • Lin Chen, Christopher Harshaw, Hamed Hassani, Amin Karbasi
We also propose One-Shot Frank-Wolfe, a simpler algorithm which requires only a single stochastic gradient estimate in each round and achieves an $O(T^{2/3})$ stochastic regret bound for convex and continuous submodular optimization.
no code implementations • NeurIPS 2018 • Moran Feldman, Amin Karbasi, Ehsan Kazemi
In this paper, we develop the first one-pass streaming algorithm for submodular maximization that does not evaluate the entire stream even once.
no code implementations • 20 Feb 2018 • Ehsan Kazemi, Lin Chen, Sanjoy Dasgupta, Amin Karbasi
More specifically, we aim at devising efficient algorithms to locate a target object in a database equipped with a dissimilarity metric via invocation of the weak comparison oracle.
no code implementations • 16 Feb 2018 • Lin Chen, Hamed Hassani, Amin Karbasi
For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves an $O(\sqrt{T})$ regret bound, albeit against a weaker $1/2$-approximation to the best feasible solution in hindsight.
no code implementations • NeurIPS 2017 • Lin Chen, Andreas Krause, Amin Karbasi
We then receive noisy feedback about the utility of the action (e.g., ratings), which we model as a submodular function over the context-action space.
no code implementations • 20 Nov 2017 • Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi
Can we efficiently extract useful information from a large user-generated dataset while protecting the privacy of the users and/or ensuring fairness in representation?
no code implementations • 5 Nov 2017 • Aryan Mokhtari, Hamed Hassani, Amin Karbasi
More precisely, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $[(1-1/e)\text{OPT} - \epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^3)$ stochastic gradient computations.
no code implementations • NeurIPS 2017 • Hamed Hassani, Mahdi Soltanolkotabi, Amin Karbasi
Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints.
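A minimal sketch of the kind of method this result covers: stochastic projected gradient ascent over a box constraint, assuming access to an unbiased gradient oracle of the continuous submodular objective. The step size, projection, and toy objective in the usage note are placeholders.

```python
import numpy as np

def projected_sga(stoch_grad, x0, lower, upper, n_iters=1000, eta=0.01):
    """Stochastic projected gradient ascent: noisy ascent step, then clip back into the box."""
    x = np.array(x0, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(n_iters):
        x = x + eta * stoch_grad(x)            # unbiased estimate of the gradient of F
        x = np.clip(x, lower, upper)           # Euclidean projection onto the box constraint
        avg += x
    return avg / n_iters                       # return the average iterate

# usage with a separable concave (hence DR-submodular) toy objective F(x) = sum_i sqrt(x_i):
# g = lambda x: 0.5 / np.sqrt(np.maximum(x, 1e-8)) + 0.01 * np.random.randn(x.size)
# print(projected_sga(g, np.full(3, 0.5), 0.0, 1.0))
```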
no code implementations • ICML 2017 • Marko Mitrovic, Mark Bun, Andreas Krause, Amin Karbasi
Many data summarization applications are captured by the general framework of submodular maximization.
no code implementations • ICML 2017 • Baharan Mirzasoleiman, Amin Karbasi, Andreas Krause
How can we summarize a dynamic data stream when elements selected for the summary can be deleted at any time?
no code implementations • ICML 2017 • Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi
As a remedy, we introduce the problem of sublinear time probabilistic submodular maximization: Given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution will provide almost as much value when restricted to the reduced ground set as when using the full set.
no code implementations • ICML 2018 • Lin Chen, Moran Feldman, Amin Karbasi
In this paper, we prove that a randomized version of the greedy algorithm (previously used by Buchbinder et al. (2014) for a different problem) achieves an approximation ratio of $(1 + 1/\gamma)^{-2}$ for the maximization of a weakly submodular function subject to a general matroid constraint, where $\gamma$ is a parameter measuring the distance of the function from submodularity.
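A sketch of the random-greedy template referenced here, written for the special case of a cardinality (uniform-matroid) constraint: at each step, shortlist the $k$ elements with the largest marginal gains and add one of them uniformly at random. The general matroid version replaces the feasibility check accordingly; oracle names are illustrative.

```python
import random

def random_greedy(ground_set, f, k, seed=0):
    """Random greedy, uniform-matroid (cardinality) special case."""
    rng = random.Random(seed)
    S = []
    for _ in range(k):
        remaining = [e for e in ground_set if e not in S]
        if not remaining:
            break
        remaining.sort(key=lambda e: f(S + [e]) - f(S), reverse=True)
        top = remaining[:k]                    # shortlist: k largest marginal gains
        S.append(rng.choice(top))              # add one of them uniformly at random
    return S
```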
no code implementations • 5 Apr 2017 • Moran Feldman, Christopher Harshaw, Amin Karbasi
Sample Greedy achieves $(k + 3)$-approximation with only $O(nr/k)$ function evaluations.
1 code implementation • NeurIPS 2017 • Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi
In many machine learning applications, it is important to explain the predictions of a black-box classifier.
no code implementations • NeurIPS 2016 • Baharan Mirzasoleiman, Morteza Zadimoghaddam, Amin Karbasi
The goal is to provide a succinct summary of a massive dataset, ideally as small as possible, from which customized summaries can be built for each user, i.e., it can contain elements from the public data (for diversity) and users' private data (for personalization).
no code implementations • NeurIPS 2016 • Lin Chen, Amin Karbasi, Forrest W. Crawford
In this paper we consider a population random graph $G = (V, E)$ from the stochastic block model (SBM) with $K$ communities/blocks.
no code implementations • 2 May 2016 • Mario Lucic, Mesrob I. Ohannessian, Amin Karbasi, Andreas Krause
Using k-means clustering as a prototypical unsupervised learning problem, we show how we can strategically summarize the data (control space) in order to trade off risk and time when data is generated by a probabilistic model.
no code implementations • 29 Mar 2016 • Lin Chen, Forrest W. Crawford, Amin Karbasi
In real-world and online social networks, individuals receive and transmit information in real time.
no code implementations • 11 Mar 2016 • Lin Chen, Hamed Hassani, Amin Karbasi
This problem has recently gained a lot of interest in automated science and adversarial reverse engineering for which only heuristic algorithms are known.
no code implementations • NeurIPS 2015 • Baharan Mirzasoleiman, Amin Karbasi, Ashwinkumar Badanidiyuru, Andreas Krause
In this paper, we formalize this challenge as a submodular cover problem.
no code implementations • 13 Nov 2015 • Lin Chen, Forrest W. Crawford, Amin Karbasi
Learning about the social structure of hidden and hard-to-reach populations, such as drug users and sex workers, is a major goal of epidemiological and public health research on risk behaviors and disease prevention.
no code implementations • 6 Jun 2015 • Patrick Rebeschini, Amin Karbasi
We show that if the set function (not necessarily submodular) displays a natural notion of decay of correlation, then, for $\beta$ small enough, it is possible to design fast mixing Markov chain Monte Carlo methods that yield error bounds on marginal approximations that do not depend on the size of the set $V$.
no code implementations • 3 Nov 2014 • Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause
Such problems can often be reduced to maximizing a submodular set function subject to various constraints.
no code implementations • 28 Sep 2014 • Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, Andreas Krause
Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice?
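For context, a compact sketch of the lazy (accelerated) greedy trick that this work speeds up further: keep stale marginal gains in a max-heap and re-evaluate an element only when it surfaces at the top, exploiting the fact that marginal gains of a submodular function can only shrink. The value-oracle name is illustrative.

```python
import heapq

def lazy_greedy(ground_set, f, k):
    """Lazy greedy: stale heap entries upper-bound true marginal gains (by submodularity)."""
    S, value = [], f([])
    heap = [(-(f([e]) - value), i, e) for i, e in enumerate(ground_set)]  # negated gains
    heapq.heapify(heap)
    while len(S) < k and heap:
        _, i, e = heapq.heappop(heap)
        fresh = f(S + [e]) - value                      # re-evaluate only the top element
        if not heap or fresh >= -heap[0][0]:            # still beats the next stale bound?
            if fresh <= 0:
                break
            S.append(e)
            value += fresh
        else:
            heapq.heappush(heap, (-fresh, i, e))        # reinsert with the refreshed gain
    return S
```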
no code implementations • 24 Jul 2014 • Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi
The resulting network has a retrieval capacity that is exponential in the size of the network.
no code implementations • 13 Mar 2014 • Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney
More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not decrease with noise (up to a threshold).
no code implementations • 24 Feb 2014 • Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, J. Andrew Bagnell, Siddhartha Srinivasa
Instead of minimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses.
no code implementations • 10 Feb 2014 • Adish Singla, Ilija Bogunovic, Gábor Bartók, Amin Karbasi, Andreas Krause
How should we present training examples to learners to teach them classification rules?
no code implementations • NeurIPS 2013 • Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney
More surprisingly, we show that internal noise actually improves the performance of the recall phase.
no code implementations • NeurIPS 2013 • Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause
Such problems can often be reduced to maximizing a submodular set function subject to cardinality constraints.
no code implementations • 26 Jan 2013 • Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav Varshney
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed the reliable learning and retrieval of an exponential number of patterns.
no code implementations • 8 Jan 2013 • Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi
We propose a novel architecture to design a neural associative memory that is capable of learning a large number of patterns and recalling them later in the presence of noise.
no code implementations • 15 Jul 2011 • Amin Karbasi, Stratis Ioannidis, Laurent Massoulie
In short, a user searching for a target object navigates through a database in the following manner: the user is asked to select the object most similar to her target from a small list of objects.