no code implementations • 11 Dec 2023 • Alex Kulesza, Ananda Theertha Suresh, Yuyan Wang
We propose a new algorithm and show that it is minimax optimal, achieving the best possible constant in the leading term of the mean squared error for all $\epsilon$, and that this constant matches the one achieved by the optimal algorithm under the swap model.
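For concreteness, here is a minimal sketch of mean estimation in the add/remove model using the standard Laplace mechanism, not the paper's optimal algorithm; the clipping bound `B` and the even budget split are assumptions:

```python
import numpy as np

def laplace_mean_add_remove(x, epsilon, B=1.0):
    """Baseline eps-DP mean estimate in the add/remove model (a standard
    sketch, not the paper's minimax-optimal algorithm).

    Under add/remove DP, neighbors differ by the presence of one record, so
    both the clipped sum (sensitivity B) and the count (sensitivity 1) must
    be privatized. B is an assumed clipping bound.
    """
    x = np.clip(x, -B, B)
    noisy_sum = x.sum() + np.random.laplace(scale=2 * B / epsilon)   # eps/2 of the budget
    noisy_count = len(x) + np.random.laplace(scale=2 / epsilon)      # eps/2 of the budget
    return noisy_sum / max(noisy_count, 1.0)
```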
no code implementations • 1 Mar 2023 • Travis Dick, Alex Kulesza, Ziteng Sun, Ananda Theertha Suresh
We propose a new definition of instance optimality for differentially private estimation algorithms.
no code implementations • 29 Oct 2021 • Cecilia Ferrando, Jennifer Gillenwater, Alex Kulesza
We argue that our mechanism is preferable to techniques that preserve the privacy of individuals by subsampling data proportionally to the privacy needs of users.
no code implementations • NeurIPS 2021 • Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh
We show that for high-dimensional mean estimation, empirical risk minimization with smooth losses, stochastic convex optimization, and learning hypothesis classes with finite metric entropy, the privacy cost decreases as $O(1/\sqrt{m})$ as users provide more samples.
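A hedged sketch of the user-level setting for the simplest of these tasks, mean estimation: each user contributes one clipped per-user average, which concentrates as users supply more samples $m$; this illustrates the setting, not the paper's estimators (the clipping bound `B` is an assumed parameter):

```python
import numpy as np

def user_level_private_mean(user_data, epsilon, B=1.0):
    """Sketch of user-level DP mean estimation (not the paper's estimator).

    user_data: list of per-user sample arrays, each of length m.
    Each user contributes one clipped per-user average, so swapping one
    user's data moves the across-user mean by at most 2B/n, while the
    per-user averages concentrate as O(1/sqrt(m)) -- the intuition behind
    the shrinking privacy cost.
    """
    per_user_means = np.array([np.clip(np.mean(u), -B, B) for u in user_data])
    n = len(per_user_means)
    noise = np.random.laplace(scale=2 * B / (n * epsilon))
    return per_user_means.mean() + noise
```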
no code implementations • 16 Feb 2021 • Jennifer Gillenwater, Matthew Joseph, Alex Kulesza
Quantiles are often used for summarizing and understanding data.
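As a point of reference, a hedged sketch of the standard single-quantile exponential mechanism that private-quantile work builds on, not the paper's joint multi-quantile algorithm; the data range `[lo, hi]` is an assumed public parameter:

```python
import numpy as np

def private_quantile(x, q, epsilon, lo, hi):
    """Single-quantile exponential mechanism (a standard baseline).

    Each gap between consecutive sorted points gets utility -|i - q*n|,
    where i is the number of points below the gap; we sample a gap with
    probability proportional to width * exp(eps * utility / 2), then draw
    uniformly within it. Rank utilities have sensitivity 1, so this is
    eps-DP.
    """
    x = np.sort(np.clip(x, lo, hi))
    n = len(x)
    z = np.concatenate(([lo], x, [hi]))          # n+2 boundaries, n+1 gaps
    utilities = -np.abs(np.arange(n + 1) - q * n)
    widths = np.diff(z)
    log_probs = utilities * epsilon / 2 + np.log(np.maximum(widths, 1e-12))
    log_probs -= log_probs.max()                 # stabilize before exponentiating
    probs = np.exp(log_probs)
    i = np.random.choice(n + 1, p=probs / probs.sum())
    return np.random.uniform(z[i], z[i + 1])
```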
no code implementations • NeurIPS 2019 • Kareem Amin, Travis Dick, Alex Kulesza, Andres Munoz, Sergei Vassilvitskii
The covariance matrix of a dataset is a fundamental statistic that can be used for calculating optimum regression weights as well as in many other learning and data analysis settings.
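A small illustration of that first use case: least-squares weights can be recovered from second-moment (covariance-type) statistics alone, which is why a private covariance estimate suffices for private regression:

```python
import numpy as np

def ols_from_covariance(X, y):
    """Recover least-squares weights from second-moment statistics only,
    illustrating why estimating the covariance is enough for regression."""
    Z = np.column_stack([X, y])
    C = Z.T @ Z                        # second-moment matrix of [X, y]
    Cxx, Cxy = C[:-1, :-1], C[:-1, -1]
    return np.linalg.solve(Cxx, Cxy)   # w = (X^T X)^{-1} X^T y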
no code implementations • NeurIPS 2018 • Nan Jiang, Alex Kulesza, Satinder Singh
A central problem in dynamical system modeling is state discovery—that is, finding a compact summary of the past that captures the information needed to predict the future.
no code implementations • NeurIPS 2018 • Jennifer A. Gillenwater, Alex Kulesza, Sergei Vassilvitskii, Zelda E. Mariet
In this paper we advocate an alternative framework for applying DPPs to recommender systems.
no code implementations • 23 Nov 2014 • Nematollah Kayhan Batmanghelich, Gerald Quon, Alex Kulesza, Manolis Kellis, Polina Golland, Luke Bornn
We propose a novel diverse feature selection method based on determinantal point processes (DPPs).
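As an illustration of DPP-based selection, a minimal greedy sketch that maximizes $\log\det$ of the kernel submatrix, a common approximation to DPP MAP inference; it is not necessarily the selection rule used in the paper:

```python
import numpy as np

def greedy_diverse_features(L, k):
    """Greedy MAP approximation for a DPP with kernel L: repeatedly add the
    feature that most increases log det(L_S). Diversity arises because
    adding a feature similar to one already chosen barely grows the
    determinant."""
    selected = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in range(L.shape[0]):
            if j in selected:
                continue
            S = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(S, S)])
            if sign > 0 and logdet > best_val:
                best, best_val = j, logdet
        if best is None:
            break
        selected.append(best)
    return selected
```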
no code implementations • NeurIPS 2014 • Jennifer Gillenwater, Alex Kulesza, Emily Fox, Ben Taskar
However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjectured to be NP-hard.
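The likelihood in question has a simple closed form: for an observed subset $Y$ and kernel $L$, $\log P(Y) = \log\det(L_Y) - \log\det(L + I)$. A minimal sketch:

```python
import numpy as np

def dpp_log_likelihood(L, Y):
    """Log-likelihood of an observed subset Y (a list of indices) under a
    DPP with kernel L: log det(L_Y) - log det(L + I). This objective is
    non-convex in the entries of L, which is what makes learning hard."""
    _, logdet_Y = np.linalg.slogdet(L[np.ix_(Y, Y)])
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_Y - logdet_norm
```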
no code implementations • LREC 2014 • Kai Hong, John Conroy, Benoit Favre, Alex Kulesza, Hui Lin, Ani Nenkova
Since 2004, many sophisticated new approaches to generic multi-document summarization have been developed.
no code implementations • NeurIPS 2012 • Jennifer Gillenwater, Alex Kulesza, Ben Taskar
Determinantal point processes (DPPs) have recently been proposed as computationally efficient probabilistic models of diverse sets for a variety of applications, including document summarization, image search, and pose estimation.
5 code implementations • 25 Jul 2012 • Alex Kulesza, Ben Taskar
Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory.
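The defining property of a DPP with kernel $L$ is that inclusion probabilities are determinants of the marginal kernel $K = L(L + I)^{-1}$, i.e., $P(A \subseteq Y) = \det(K_A)$. A minimal sketch showing how repulsion falls out of this:

```python
import numpy as np

def marginal_kernel(L):
    """Marginal kernel K = L (L + I)^{-1}; the defining DPP property is
    P(A subset of Y) = det(K_A) for every subset A of the ground set."""
    return L @ np.linalg.inv(L + np.eye(L.shape[0]))

def pair_inclusion_prob(K, i, j):
    """Probability that items i and j both appear: the 2x2 principal minor.
    For symmetric K this is K_ii*K_jj - K_ij**2 <= K_ii*K_jj, so similar
    items (large K_ij) rarely co-occur -- repulsion."""
    return K[i, i] * K[j, j] - K[i, j] * K[j, i]
```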
no code implementations • NeurIPS 2010 • Alex Kulesza, Ben Taskar
We present a novel probabilistic model for distributions over sets of structures -- for example, sets of sequences, trees, or graphs.
no code implementations • NeurIPS 2009 • Koby Crammer, Alex Kulesza, Mark Dredze
We present AROW, a new online learning algorithm that combines several properties of successful learning algorithms: large margin training, confidence weighting, and the capacity to handle non-separable data.
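A sketch of the AROW update as described in the paper (squared-hinge variant): the learner maintains a Gaussian $(\mu, \Sigma)$ over weight vectors, so directions with low variance move less; the regularization hyperparameter `r` is assumed here:

```python
import numpy as np

def arow_update(mu, Sigma, x, y, r=1.0):
    """One AROW step: mu is the mean weight vector, Sigma its covariance,
    (x, y) an example with label y in {-1, +1}. Updates fire only when the
    margin is violated, and shrink variance along the observed direction."""
    margin = y * (mu @ x)
    if margin < 1.0:
        Sx = Sigma @ x
        beta = 1.0 / (x @ Sx + r)
        alpha = max(0.0, 1.0 - margin) * beta
        mu = mu + alpha * y * Sx
        Sigma = Sigma - beta * np.outer(Sx, Sx)
    return mu, Sigma
```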
no code implementations • NeurIPS 2007 • John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman
Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain.
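The bounds in this line of work (following Ben-David et al.) relate target error to source error through a hypothesis-class divergence; schematically, for any $h \in \mathcal{H}$,

$$\epsilon_T(h) \le \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,$$

where $\lambda$ is the combined source-plus-target error of the best single hypothesis in the class; the paper's exact statements add empirical and complexity terms beyond this schematic form.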