no code implementations • ICML 2020 • Moein Falahatgar, Alon Orlitsky, Venkatadheeraj Pichapati
To derive these results we consider a probabilistic setting in which several candidates for a position are each asked multiple questions, with the goal of finding the candidate who has the highest probability of answering interview questions correctly.
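The setting described above is a best-candidate identification problem: keep questioning candidates and discard those whose empirical success rate falls confidently below the leader's. A minimal successive-elimination sketch, assuming Bernoulli answers and a Hoeffding-style confidence radius (illustrative only, not the paper's algorithm):

```python
import math
import random

def successive_elimination(success_probs, eps=0.05, delta=0.1, rng=random):
    """Toy sketch: candidate i answers each question correctly with
    probability success_probs[i]. Each round, every surviving candidate
    answers one question; candidates whose empirical mean falls well
    below the current best (by a Hoeffding confidence margin) are dropped.
    Returns the index of the last surviving candidate."""
    n = len(success_probs)
    alive = list(range(n))
    counts = [0] * n
    wins = [0] * n
    t = 0
    while len(alive) > 1:
        t += 1
        for i in alive:
            counts[i] += 1
            if rng.random() < success_probs[i]:
                wins[i] += 1
        # Hoeffding-style confidence radius after t questions per candidate
        rad = math.sqrt(math.log(4 * n * t * t / delta) / (2 * t))
        means = {i: wins[i] / counts[i] for i in alive}
        best = max(means.values())
        alive = [i for i in alive if means[i] + rad + eps >= best - rad]
        if t > 100_000:  # safety cap for the sketch
            break
    return max(alive, key=lambda i: wins[i] / counts[i])
```

With a clear gap between the best and second-best success probabilities, the elimination condition removes suboptimal candidates once the confidence radius shrinks below the gap.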
no code implementations • ICML 2018 • Moein Falahatgar, Ayush Jain, Alon Orlitsky, Venkatadheeraj Pichapati, Vaishakh Ravindrakumar
We present a comprehensive understanding of three important problems in PAC preference learning: maximum selection (maxing), ranking, and estimating all pairwise preference probabilities, in the adaptive setting.
no code implementations • NeurIPS 2017 • Moein Falahatgar, Yi Hao, Alon Orlitsky, Venkatadheeraj Pichapati, Vaishakh Ravindrakumar
PAC maximum selection (maxing) and ranking of $n$ elements via random pairwise comparisons have diverse applications and have been studied under many models and assumptions.
no code implementations • NeurIPS 2017 • Moein Falahatgar, Mesrob I. Ohannessian, Alon Orlitsky, Venkatadheeraj Pichapati
Minimax optimality is too pessimistic to remedy this issue.
no code implementations • ICML 2017 • Moein Falahatgar, Alon Orlitsky, Venkatadheeraj Pichapati, Ananda Theertha Suresh
We consider $(\epsilon,\delta)$-PAC maximum-selection and ranking for general probabilistic models whose comparison probabilities satisfy strong stochastic transitivity and the stochastic triangle inequality.
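The basic primitive in this line of work is an $(\epsilon,\delta)$-PAC pairwise comparison: duel two elements repeatedly until the empirical win rate is confidently separated from $1/2$, or an $\epsilon$-dependent sample budget runs out. A hedged sketch (the `compare` routine and its stopping rule are illustrative assumptions, not the paper's exact subroutine):

```python
import math
import random

def compare(p_ij, eps=0.05, delta=0.05, rng=random):
    """Sketch of an (eps, delta)-PAC comparison: item i beats item j in
    each duel with probability p_ij. Duel until a Hoeffding confidence
    interval around the empirical win rate excludes 1/2, or until the
    worst-case budget of ~log(2/delta)/(2 eps^2) duels is spent.
    Returns True if i is declared (epsilon-)preferable to j."""
    wins = 0
    t = 0
    max_t = math.ceil(math.log(2 / delta) / (2 * eps * eps))
    while t < max_t:
        t += 1
        wins += rng.random() < p_ij
        phat = wins / t
        rad = math.sqrt(math.log(4 * t * t / delta) / (2 * t))
        if abs(phat - 0.5) > rad:  # confidently separated from 1/2
            return phat > 0.5
    return wins / t > 0.5          # budget exhausted: empirical majority
```

Maximum selection then reduces to orchestrating such comparisons (e.g., in a knockout tournament), with strong stochastic transitivity guaranteeing that local winners compose into a global $\epsilon$-maximum.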
no code implementations • NeurIPS 2016 • Moein Falahatgar, Mesrob I. Ohannessian, Alon Orlitsky
Utilizing the structure of a probabilistic model can significantly increase its learning speed.
no code implementations • 16 Apr 2015 • Moein Falahatgar, Ashkan Jafarpour, Alon Orlitsky, Venkatadheeraj Pichapathi, Ananda Theertha Suresh
There has been considerable recent interest in distribution-tests whose run-time and sample requirements are sublinear in the domain-size $k$.