Search Results for author: Alex Kulesza

Found 17 papers, 1 paper with code

Mean estimation in the add-remove model of differential privacy

no code implementations 11 Dec 2023 Alex Kulesza, Ananda Theertha Suresh, Yuyan Wang

We propose a new algorithm and show that it is minimax optimal, achieving the best possible constant in the leading term of the mean squared error for all $\epsilon$; this constant is the same as that of the optimal algorithm under the swap model.
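
The snippet does not spell out the algorithm itself, so for orientation only, here is a minimal NumPy sketch of the standard Laplace-mechanism baseline for private mean estimation, assuming data clipped to a known range [lo, hi] and fixed-n (swap-style) sensitivity. Function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def dp_mean_laplace(x, epsilon, lo=0.0, hi=1.0, rng=None):
    """Differentially private mean via the Laplace mechanism.

    A baseline sketch, not the paper's minimax-optimal algorithm:
    clip each value to [lo, hi], so changing one record shifts the
    mean by at most (hi - lo) / n, then add Laplace noise scaled
    to that sensitivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    n = len(x)
    sensitivity = (hi - lo) / n  # swap-model sensitivity of the mean
    return x.mean() + rng.laplace(scale=sensitivity / epsilon)

# Example: private mean of 1,000 values at epsilon = 1.
print(dp_mean_laplace(np.random.default_rng(0).random(1000), epsilon=1.0))
```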

Subset-Based Instance Optimality in Private Estimation

no code implementations 1 Mar 2023 Travis Dick, Alex Kulesza, Ziteng Sun, Ananda Theertha Suresh

We propose a new definition of instance optimality for differentially private estimation algorithms.

Combining Public and Private Data

no code implementations 29 Oct 2021 Cecilia Ferrando, Jennifer Gillenwater, Alex Kulesza

We argue that our mechanism is preferable to techniques that preserve the privacy of individuals by subsampling data proportionally to the privacy needs of users.
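
The mechanism itself is not described in the snippet. As a purely hypothetical illustration of the public-plus-private setting, the sketch below blends a noise-free public-data mean with a Laplace-noised private mean by inverse-variance weighting; the weighting scheme and all names are assumptions, not the paper's method.

```python
import numpy as np

def combine_public_private(public_x, private_x, epsilon, lo=0.0, hi=1.0):
    """Hypothetical illustration only (NOT the paper's mechanism):
    weight a public-sample mean and a DP private mean by the inverse
    of their approximate variances, so the noisier estimate counts less.
    """
    rng = np.random.default_rng()
    public_x = np.clip(np.asarray(public_x, float), lo, hi)
    private_x = np.clip(np.asarray(private_x, float), lo, hi)

    pub_mean = public_x.mean()
    pub_var = public_x.var(ddof=1) / len(public_x)     # sampling variance

    scale = (hi - lo) / (len(private_x) * epsilon)     # Laplace noise scale
    priv_mean = private_x.mean() + rng.laplace(scale=scale)
    priv_var = private_x.var(ddof=1) / len(private_x) + 2 * scale**2

    w = priv_var / (pub_var + priv_var)                # inverse-variance weight
    return w * pub_mean + (1 - w) * priv_mean

# Example: small public sample, larger sensitive sample.
rng = np.random.default_rng(9)
print(combine_public_private(rng.random(50), rng.random(2000), epsilon=1.0))
```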

Learning with User-Level Privacy

no code implementations NeurIPS 2021 Daniel Levy, Ziteng Sun, Kareem Amin, Satyen Kale, Alex Kulesza, Mehryar Mohri, Ananda Theertha Suresh

We show that for high-dimensional mean estimation, empirical risk minimization with smooth losses, stochastic convex optimization, and learning hypothesis classes with finite metric entropy, the privacy cost decreases as $O(1/\sqrt{m})$ as users provide more samples.
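
To make the user-level setting concrete, here is a hedged sketch of user-level DP mean estimation: each user contributes m samples, users are reduced to clipped per-user means, and noise is calibrated so replacing one user's entire data is private. The fixed clipping range is a simplification; the paper's $O(1/\sqrt{m})$ gain comes from per-user means concentrating as m grows.

```python
import numpy as np

def user_level_dp_mean(user_samples, epsilon, lo=0.0, hi=1.0, rng=None):
    """User-level DP mean: privacy holds with respect to replacing one
    user's ENTIRE set of m samples, not a single record.

    Sketch only. The paper's O(1/sqrt(m)) improvement exploits the
    concentration of per-user means to shrink the effective clipping
    interval; here [lo, hi] is fixed for simplicity.
    """
    rng = np.random.default_rng() if rng is None else rng
    user_means = np.array([np.clip(np.mean(s), lo, hi) for s in user_samples])
    n = len(user_means)
    sensitivity = (hi - lo) / n  # one user changes one clipped mean
    return user_means.mean() + rng.laplace(scale=sensitivity / epsilon)

# Example: 100 users, each contributing m = 50 samples.
rng = np.random.default_rng(1)
data = [rng.random(50) for _ in range(100)]
print(user_level_dp_mean(data, epsilon=1.0))
```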

Differentially Private Quantiles

no code implementations 16 Feb 2021 Jennifer Gillenwater, Matthew Joseph, Alex Kulesza

Quantiles are often used for summarizing and understanding data.
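
The paper's contribution is estimating many quantiles jointly; as background, here is a sketch of the standard single-quantile exponential mechanism that such methods build on. The data range [lo, hi] is assumed known, and names are illustrative.

```python
import numpy as np

def dp_quantile(x, q, epsilon, lo, hi, rng=None):
    """Single DP quantile via the exponential mechanism (a standard
    building block; this sketch does NOT estimate multiple quantiles
    jointly as the paper does).

    Gaps between adjacent sorted points are scored by how close their
    rank is to the target rank q*n (sensitivity-1 utility); a gap is
    sampled with probability proportional to its length times
    exp(eps * score / 2), then a point is drawn uniformly inside it.
    """
    rng = np.random.default_rng() if rng is None else rng
    xs = np.concatenate(([lo], np.sort(np.clip(x, lo, hi)), [hi]))
    n = len(x)
    ranks = np.arange(n + 1)                 # rank index of each gap
    utility = -np.abs(ranks - q * n)
    lengths = np.diff(xs)
    logw = np.log(np.maximum(lengths, 1e-12)) + epsilon * utility / 2
    p = np.exp(logw - logw.max())
    p /= p.sum()
    i = rng.choice(len(p), p=p)
    return rng.uniform(xs[i], xs[i + 1])

# Example: private median of 1,000 uniform draws.
print(dp_quantile(np.random.default_rng(2).random(1000), 0.5, 1.0, 0.0, 1.0))
```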

Differentially Private Covariance Estimation

no code implementations NeurIPS 2019 Kareem Amin, Travis Dick, Alex Kulesza, Andres Munoz, Sergei Vassilvitskii

The covariance matrix of a dataset is a fundamental statistic that can be used for calculating optimum regression weights as well as in many other learning and data analysis settings.
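
The paper's algorithm is not given in the snippet; as a reference point, here is a sketch of the well-known "Analyze Gauss"-style baseline: clip rows to unit norm, form X^T X, and add a symmetric Gaussian noise matrix. This is a standard comparison point, not the paper's method.

```python
import numpy as np

def dp_covariance_gauss(X, epsilon, delta, rng=None):
    """Gaussian-mechanism baseline for private covariance (NOT the
    paper's algorithm): after clipping rows to unit L2 norm, removing
    one row changes X^T X by at most 1 in Frobenius norm, so symmetric
    Gaussian noise calibrated to that sensitivity gives (eps, delta)-DP.
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
    Xc = X / norms                                  # rows now have norm <= 1
    C = Xc.T @ Xc
    sigma = np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(scale=sigma, size=C.shape)
    noise = np.triu(noise) + np.triu(noise, 1).T    # symmetrize
    return C + noise

# Example: 3-dimensional data, (1.0, 1e-5)-DP.
X = np.random.default_rng(3).normal(size=(500, 3))
print(dp_covariance_gauss(X, epsilon=1.0, delta=1e-5))
```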

Completing State Representations using Spectral Learning

no code implementations NeurIPS 2018 Nan Jiang, Alex Kulesza, Satinder Singh

A central problem in dynamical system modeling is state discovery—that is, finding a compact summary of the past that captures the information needed to predict the future.
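
As background for the spectral approach (this sketch covers only the standard first step, not the paper's method for completing missing entries), state dimension can be read off a Hankel matrix of empirical history/future probabilities: its numerical rank suggests the dimension of a linear state representation. Alphabet, window length, and names below are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from itertools import product

def hankel_spectrum(sequences, alphabet, k=2):
    """Estimate P(history, future) over all length-k histories and
    futures, and return the singular values of the resulting Hankel
    matrix. A sharp drop in the spectrum suggests a low-dimensional
    state representation. Illustrative sketch only.
    """
    counts, total = Counter(), 0
    for seq in sequences:
        for t in range(len(seq) - 2 * k + 1):
            h = tuple(seq[t:t + k])
            f = tuple(seq[t + k:t + 2 * k])
            counts[(h, f)] += 1
            total += 1
    blocks = list(product(alphabet, repeat=k))
    H = np.array([[counts[(h, f)] / total for f in blocks] for h in blocks])
    return np.linalg.svd(H, compute_uv=False)

# Example: i.i.d. sequences over {a, b} yield a nearly rank-1 spectrum.
rng = np.random.default_rng(4)
seqs = ["".join(rng.choice(list("ab"), size=30)) for _ in range(200)]
print(hankel_spectrum(seqs, alphabet="ab", k=2))
```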

Expectation-Maximization for Learning Determinantal Point Processes

no code implementations NeurIPS 2014 Jennifer Gillenwater, Alex Kulesza, Emily Fox, Ben Taskar

However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjectured to be NP-hard.

Tasks: Diversity, Point Processes +1
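
For concreteness, this is the objective in question: the DPP log-likelihood of observed subsets, which is non-convex in the entries of the kernel L. The sketch below computes the objective only; it does not implement the paper's EM algorithm.

```python
import numpy as np

def dpp_log_likelihood(L, subsets):
    """Log-likelihood of observed subsets under a DPP with kernel L:
    sum over subsets A of log det(L_A), minus N * log det(L + I),
    where N is the number of observations. Non-convexity in the
    entries of L is what motivates the paper's EM approach.
    """
    n = L.shape[0]
    _, logdet_norm = np.linalg.slogdet(L + np.eye(n))
    ll = -len(subsets) * logdet_norm
    for A in subsets:
        idx = np.asarray(A)
        _, logdet_A = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += logdet_A
    return ll

# Example: likelihood of two observed subsets under a random PSD kernel.
rng = np.random.default_rng(5)
B = rng.normal(size=(5, 5))
print(dpp_log_likelihood(B @ B.T, [[0, 2], [1, 3, 4]]))
```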

Near-Optimal MAP Inference for Determinantal Point Processes

no code implementations NeurIPS 2012 Jennifer Gillenwater, Alex Kulesza, Ben Taskar

Determinantal point processes (DPPs) have recently been proposed as computationally efficient probabilistic models of diverse sets for a variety of applications, including document summarization, image search, and pose estimation.

Tasks: Document Summarization, Image Retrieval +2
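
The paper's near-optimal method is not reproduced here; the sketch below is the standard greedy baseline for DPP MAP inference, which repeatedly adds the item that most increases log det(L_Y). Since the determinant measures squared spanned volume, each step adds the most "novel" remaining item.

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Greedy baseline for DPP MAP inference (NOT the paper's
    near-optimal algorithm): grow Y one item at a time, always adding
    the item that maximizes det(L_Y) among the remaining candidates.
    """
    n = L.shape[0]
    Y = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(n):
            if i in Y:
                continue
            idx = np.array(Y + [i])
            sign, val = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and val > best_val:
                best, best_val = i, val
        if best is None:
            break
        Y.append(best)
    return Y

# Example: pick 3 of 6 items under a random PSD kernel.
rng = np.random.default_rng(6)
B = rng.normal(size=(6, 6))
print(greedy_dpp_map(B @ B.T, k=3))
```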

Determinantal point processes for machine learning

5 code implementations 25 Jul 2012 Alex Kulesza, Ben Taskar

Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory.

Tasks: BIG-bench Machine Learning, Point Processes
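
A sketch of the standard two-phase spectral sampling algorithm for DPPs, of the kind described in this monograph: keep eigenvector i of L with probability lambda_i / (lambda_i + 1), then sample one item per kept eigenvector, projecting the basis after each pick so sampled items repel each other. Treat this as an illustrative implementation, not a drop-in for the released code.

```python
import numpy as np

def sample_dpp(L, rng=None):
    """Exact DPP sample via the spectral (elementary-DPP) algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    lam, V = np.linalg.eigh(L)
    keep = rng.random(len(lam)) < lam / (lam + 1.0)
    V = V[:, keep]                          # columns: selected eigenvectors
    Y = []
    while V.shape[1] > 0:
        # P(item i) proportional to squared norm of row i of V.
        p = (V ** 2).sum(axis=1)
        p /= p.sum()
        i = rng.choice(len(p), p=p)
        Y.append(i)
        # Project onto the subspace orthogonal to e_i and drop one dim.
        j = np.argmax(np.abs(V[i]))         # column with mass on item i
        v = V[:, j] / V[i, j]
        V = V - np.outer(v, V[i])           # zero out row i everywhere
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)          # re-orthonormalize
    return sorted(Y)

# Example: one sample from a DPP over 6 items.
rng = np.random.default_rng(7)
B = rng.normal(size=(6, 6))
print(sample_dpp(B @ B.T, rng))
```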

Structured Determinantal Point Processes

no code implementations NeurIPS 2010 Alex Kulesza, Ben Taskar

We present a novel probabilistic model for distributions over sets of structures -- for example, sets of sequences, trees, or graphs.

Tasks: Diversity, Point Processes +1

Adaptive Regularization of Weight Vectors

no code implementations NeurIPS 2009 Koby Crammer, Alex Kulesza, Mark Dredze

We present AROW, a new online learning algorithm that combines several properties of successful online learning algorithms: large margin training, confidence weighting, and the capacity to handle non-separable data.
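
The AROW update is compact enough to state in a few lines. Below is a NumPy sketch following the paper's update rules: a Gaussian over weight vectors with mean mu and covariance Sigma, where a margin violation moves mu toward the correct label and shrinks Sigma along the example (the hyperparameter r tempers both updates).

```python
import numpy as np

class AROW:
    """Sketch of the AROW online update: on a margin violation, the
    mean mu moves to correct the mistake, scaled by current confidence,
    and the covariance Sigma shrinks along x (confidence grows)."""

    def __init__(self, dim, r=1.0):
        self.mu = np.zeros(dim)
        self.Sigma = np.eye(dim)
        self.r = r

    def update(self, x, y):                 # y in {-1, +1}
        margin = y * self.mu @ x
        if margin >= 1.0:                   # no hinge loss: no update
            return
        v = x @ self.Sigma @ x              # confidence (variance along x)
        beta = 1.0 / (v + self.r)
        alpha = (1.0 - margin) * beta
        Sx = self.Sigma @ x
        self.mu += alpha * y * Sx
        self.Sigma -= beta * np.outer(Sx, Sx)

    def predict(self, x):
        return 1 if self.mu @ x >= 0 else -1

# Example: learn a linearly separable rule online.
rng = np.random.default_rng(8)
w_true = rng.normal(size=5)
model = AROW(dim=5)
for _ in range(500):
    x = rng.normal(size=5)
    model.update(x, 1 if w_true @ x >= 0 else -1)
```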

Learning Bounds for Domain Adaptation

no code implementations NeurIPS 2007 John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman

Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain.

Tasks: Domain Adaptation
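
Guarantees in this line of work bound target-domain error by source error plus a measure of domain divergence. As a hedged reference point, here is the representative single-source bound from the closely related Ben-David et al. analysis; the paper's own theorems cover weighted combinations of source data and differ in their exact statements.

```latex
% Representative single-source domain-adaptation bound (Ben-David et
% al. style), shown for orientation; not the exact theorem of this paper.
\epsilon_T(h) \le \epsilon_S(h)
  + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  + \lambda,
\quad \text{where } \lambda = \min_{h' \in \mathcal{H}}
  \bigl[ \epsilon_S(h') + \epsilon_T(h') \bigr].
```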
