no code implementations • 2 May 2019 • Ehsan Kazemi, Marko Mitrovic, Morteza Zadimoghaddam, Silvio Lattanzi, Amin Karbasi
We show how one can achieve the tight $(1/2)$-approximation guarantee with $O(k)$ shared memory while minimizing not only the required number of computation rounds but also the total number of communicated bits.
1 code implementation • NeurIPS 2019 • Marko Mitrovic, Ehsan Kazemi, Moran Feldman, Andreas Krause, Amin Karbasi
In many machine learning applications, one needs to interactively select a sequence of items (e.g., recommending movies based on a user's feedback) or make sequential decisions in a certain order (e.g., guiding an agent through a series of states).
no code implementations • ICML 2018 • Marko Mitrovic, Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi
The sheer scale of modern datasets has resulted in a dire need for summarization techniques that identify representative elements in a dataset.
no code implementations • ICML 2017 • Marko Mitrovic, Mark Bun, Andreas Krause, Amin Karbasi
Many data summarization applications are captured by the general framework of submodular maximization.
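To make the framework concrete, here is a minimal sketch of the classic greedy algorithm for cardinality-constrained monotone submodular maximization, which achieves a $(1 - 1/e)$-approximation (Nemhauser et al., 1978). The coverage objective and the toy data below are illustrative assumptions, not taken from any of the papers listed here.

```python
def greedy_max(ground_set, f, k):
    """Pick k elements greedily, each round adding the element
    with the largest marginal gain f(S + e) - f(S)."""
    selected = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set:
            if e in selected:
                continue
            gain = f(selected + [e]) - f(selected)
            if gain > best_gain:
                best, best_gain = e, gain
        selected.append(best)
    return selected

# Hypothetical summarization instance: each "article" covers a set of
# topics; the value of a summary is the number of distinct topics covered
# (a monotone submodular coverage function).
topics = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {5},
    "d": {1, 5, 6},
}

def coverage(S):
    covered = set()
    for e in S:
        covered |= topics[e]
    return len(covered)

print(greedy_max(list(topics), coverage, 2))  # → ['a', 'd'], covering 5 topics
```

The key structural fact the greedy rule exploits is diminishing returns: an element's marginal gain can only shrink as the selected set grows, which is exactly the submodularity property these papers build on.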