no code implementations • 8 May 2024 • Shaddin Dughmi, Yusuf Kalayci, Grayson York
Our results imply that transductive and PAC learning are essentially equivalent for supervised learning with pseudometric losses in the realizable setting, and for binary classification in the agnostic setting.
no code implementations • 15 Feb 2024 • Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng
We demonstrate a compactness result holding broadly across supervised learning with a general class of loss functions: Any hypothesis class $H$ is learnable with transductive sample complexity $m$ precisely when all of its finite projections are learnable with sample complexity $m$.
no code implementations • 24 Sep 2023 • Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng
We demonstrate that an agnostic version of the Hall complexity again characterizes error rates exactly, and exhibit an optimal learner using maximum entropy programs.
no code implementations • 21 Nov 2017 • Yu Cheng, Shaddin Dughmi, David Kempe
Our main result is a clean and tight characterization of positional voting rules that have constant expected distortion (independent of the number of candidates and the metric space).
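For intuition, a toy sketch (not from the paper): the distortion of a voting rule on a metric instance is the ratio of the winner's social cost to the optimal candidate's social cost. Below, hypothetical voters and candidates lie on a line, voters rank candidates by distance, and the winner under the Borda positional rule is evaluated; all positions and names are invented for illustration.

```python
def borda_winner(voters, candidates):
    """Winner under the Borda positional rule (weights m-1, m-2, ..., 0)."""
    m = len(candidates)
    scores = {name: 0 for name in candidates}
    for v in voters:
        # Each voter ranks candidates by distance in the underlying metric.
        ranking = sorted(candidates, key=lambda name: abs(candidates[name] - v))
        for pos, name in enumerate(ranking):
            scores[name] += (m - 1) - pos
    return max(scores, key=scores.get)

def social_cost(x, voters):
    """Total distance from a candidate's position to all voters."""
    return sum(abs(x - v) for v in voters)

# Invented instance: three voters and three candidates on the real line.
voters = [0.0, 0.2, 1.0]
candidates = {"a": 0.0, "b": 0.5, "c": 1.0}

winner = borda_winner(voters, candidates)
opt = min(social_cost(x, voters) for x in candidates.values())
distortion = social_cost(candidates[winner], voters) / opt
print(winner, distortion)
```

In this particular instance Borda happens to select the socially optimal candidate, so the distortion is 1; the paper's question is which positional rules keep this ratio bounded by a constant in the worst case.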
no code implementations • 4 May 2017 • Yu Cheng, Shaddin Dughmi, David Kempe
However, we show that independence alone is not enough to achieve the upper bound: even when candidates are drawn independently, if the population of candidates can be different from the voters, then an upper bound of $2$ on the approximation is tight.
no code implementations • 11 Mar 2017 • Haifeng Xu, Milind Tambe, Shaddin Dughmi, Venil Loyd Noronha
To mitigate this issue, we propose to design entropy-maximizing defense strategies for spatio-temporal security games, which frequently suffer from the curse of correlation (CoC).
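As a toy illustration of the entropy-maximization idea (a sketch under invented assumptions, not the paper's algorithm): given a hypothetical set of pure patrol schedules and fixed per-target coverage marginals, the maximum-entropy mixed strategy has exponential-family form and can be recovered by gradient descent on the convex dual. The coverage matrix and marginals below are made up for illustration.

```python
import numpy as np

# Hypothetical toy instance: 4 pure patrol schedules covering 3 targets.
# A[i, j] = 1 if schedule i covers target j (invented for illustration).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
# Required marginal coverage per target, chosen to be feasible
# (it is induced by the mixed strategy (0.4, 0.3, 0.2, 0.1)).
c = np.array([0.7, 0.6, 0.6])

# Max-entropy distributions under linear marginal constraints take the
# form p_i ∝ exp((A @ lam)_i); fit the dual variables lam by gradient
# descent, where the dual gradient is the marginal mismatch A^T p - c.
lam = np.zeros(3)
for _ in range(20000):
    p = np.exp(A @ lam)
    p /= p.sum()                 # current candidate mixed strategy
    lam -= 0.5 * (A.T @ p - c)   # step toward matching the marginals

print(p)         # max-entropy mixed strategy over schedules
print(A.T @ p)   # its per-target coverage, approximately c
```

Among all mixed strategies inducing the required coverage, this one is the least predictable to an attacker who observes partial patrol history, which is the motivation for entropy maximization in this setting.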
no code implementations • 23 Apr 2015 • Haifeng Xu, Albert X. Jiang, Arunesh Sinha, Zinovi Rabinovich, Shaddin Dughmi, Milind Tambe
Our experiments confirm the necessity of handling information leakage and the advantage of our algorithms.
no code implementations • 20 Aug 2011 • Moshe Babaioff, Shaddin Dughmi, Robert Kleinberg, Aleksandrs Slivkins
The performance guarantee for the same mechanism can be improved to $O(\sqrt{k} \log n)$, with a distribution-dependent constant, if $k/n$ is sufficiently small.