Search Results for author: Travis Dick

Found 19 papers, 6 papers with code

Measuring Re-identification Risk

3 code implementations · 12 Apr 2023 · CJ Carey, Travis Dick, Alessandro Epasto, Adel Javanmard, Josh Karlin, Shankar Kumar, Andres Munoz Medina, Vahab Mirrokni, Gabriel Henrique Nunes, Sergei Vassilvitskii, Peilin Zhong

In this work, we present a new theoretical framework to measure re-identification risk in such user representations.
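The paper's framework is not reproduced here, but the underlying threat is easy to illustrate: given two independently released representations of the same users, a nearest-neighbor matching attack measures how often a user can be re-linked. Everything below (the dimensions, noise level, and the `reidentification_rate` helper) is a hypothetical sketch, not the paper's measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each user has a "true" feature vector; two noisy
# representations of the same users are released independently.
n_users, dim = 200, 16
base = rng.normal(size=(n_users, dim))
rep_a = base + 0.1 * rng.normal(size=base.shape)
rep_b = base + 0.1 * rng.normal(size=base.shape)

def reidentification_rate(a, b):
    """Fraction of users in `a` whose nearest neighbor in `b`
    (by Euclidean distance) is their own other representation."""
    # Pairwise squared distances via broadcasting: (n, n) matrix.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    matches = d2.argmin(axis=1) == np.arange(len(a))
    return matches.mean()

risk = reidentification_rate(rep_a, rep_b)
```

With noise this small relative to user separation, nearly every user is re-linked; shrinking the representations or adding noise lowers the rate.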

Subset-Based Instance Optimality in Private Estimation

no code implementations · 1 Mar 2023 · Travis Dick, Alex Kulesza, Ziteng Sun, Ananda Theertha Suresh

We propose a new definition of instance optimality for differentially private estimation algorithms.

Confidence-Ranked Reconstruction of Census Microdata from Published Statistics

1 code implementation · 6 Nov 2022 · Travis Dick, Cynthia Dwork, Michael Kearns, Terrance Liu, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu

Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution.
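As a hedged illustration of the confidence-ranked idea (not the paper's attack), one can rank candidate records by their likelihood under the published aggregate statistics $Q(D)$; here the aggregates are one-way marginals and an independence assumption is made purely for the sketch.

```python
import itertools
import numpy as np

# Hypothetical private dataset D of binary attributes.
rng = np.random.default_rng(1)
D = rng.integers(0, 2, size=(100, 3))

# Published aggregate statistics Q(D): one-way marginal frequencies.
marginals = D.mean(axis=0)  # P(attribute_j = 1)

def score(candidate):
    """Log-likelihood of a candidate row under the published
    marginals (treating attributes as independent)."""
    p = np.where(candidate == 1, marginals, 1 - marginals)
    return np.log(p).sum()

# Rank all 2^3 candidate rows from most to least confident.
candidates = np.array(list(itertools.product([0, 1], repeat=3)))
ranked = sorted(candidates.tolist(), key=lambda c: -score(np.array(c)))
```

The paper's point is that richer statistics support far sharper rankings than the population distribution alone.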

Reconstruction Attack

Learning-Augmented Private Algorithms for Multiple Quantile Release

1 code implementation · 20 Oct 2022 · Mikhail Khodak, Kareem Amin, Travis Dick, Sergei Vassilvitskii

When applying differential privacy to sensitive data, we can often improve performance using external information such as other sensitive data, public data, or human priors.
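For context, the naive baseline that a learning-augmented approach competes with splits the privacy budget across the quantiles and adds Laplace noise to each. The sketch below is that baseline with a deliberately pessimistic worst-case sensitivity, not the paper's algorithm.

```python
import numpy as np

def naive_private_quantiles(x, qs, eps, lo, hi, rng):
    """Naive eps-DP release of multiple quantiles: clamp the data to
    [lo, hi], split the budget evenly, and add Laplace noise to each
    empirical quantile.

    NOTE: an empirical quantile's sensitivity can be as large as
    hi - lo, so this sketch uses that worst case -- which is why the
    noise is so large, and part of what better mechanisms improve on.
    """
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    eps_each = eps / len(qs)        # budget split across quantiles
    sensitivity = hi - lo           # worst-case quantile sensitivity
    out = []
    for q in qs:
        true_q = np.quantile(x, q)
        noisy = true_q + rng.laplace(scale=sensitivity / eps_each)
        out.append(float(np.clip(noisy, lo, hi)))
    return out

rng = np.random.default_rng(0)
data = rng.normal(size=1000)
released = naive_private_quantiles(data, [0.25, 0.5, 0.75],
                                   eps=10.0, lo=-5, hi=5, rng=rng)
```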

Privacy Preserving

Scalable and Provably Accurate Algorithms for Differentially Private Distributed Decision Tree Learning

1 code implementation · 19 Dec 2020 · Kaiwen Wang, Travis Dick, Maria-Florina Balcan

We provide the first utility guarantees for differentially private top-down decision tree learning in both the single machine and distributed settings.
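A standard building block for private top-down tree learning, shown here as a hedged sketch rather than the paper's exact mechanism, is the exponential mechanism for choosing a split: sample a candidate with probability exponential in its (e.g. information-gain) score.

```python
import numpy as np

def private_split(scores, eps, sensitivity, rng):
    """Pick a split via the exponential mechanism: sample index i with
    probability proportional to exp(eps * scores[i] / (2*sensitivity)).
    Higher-scoring splits are exponentially more likely, and the
    choice is eps-DP when each score changes by at most `sensitivity`
    if one record changes."""
    scores = np.asarray(scores, dtype=float)
    logits = eps * scores / (2 * sensitivity)
    logits -= logits.max()              # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))

rng = np.random.default_rng(0)
# Hypothetical information-gain scores for 4 candidate splits.
chosen = private_split([0.02, 0.45, 0.40, 0.05],
                       eps=20.0, sensitivity=0.1, rng=rng)
```

Repeating this selection at each node, with the budget divided across tree levels, yields a simple private top-down learner.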

Privacy Preserving

Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images

1 code implementation · 10 Feb 2020 · Avrim Blum, Travis Dick, Naren Manoj, Hongyang Zhang

We show a hardness result for random smoothing to achieve certified adversarial robustness against attacks in the $\ell_p$ ball of radius $\epsilon$ when $p>2$.
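The mechanism whose $\ell_\infty$ extension the paper studies is standard randomized smoothing: classify under Gaussian noise, take the majority vote, and certify an $\ell_2$ radius of sigma * Phi^{-1}(p_top). The sketch below is that baseline procedure with a hypothetical toy classifier, not anything from the paper.

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(base_classifier, x, sigma, n_samples, rng):
    """Majority vote of `base_classifier` over Gaussian perturbations
    of x. Returns (predicted class, estimated top-class probability)."""
    noisy = x[None, :] + sigma * rng.normal(size=(n_samples, x.size))
    votes = np.array([base_classifier(z) for z in noisy])
    classes, counts = np.unique(votes, return_counts=True)
    top = counts.argmax()
    return int(classes[top]), counts[top] / n_samples

def certified_l2_radius(p_top, sigma):
    """Standard l2 certificate: sigma * Phi^{-1}(p_top).
    Only meaningful (positive) when p_top > 1/2."""
    return sigma * NormalDist().inv_cdf(p_top)

# Toy base classifier: linear threshold in 2D.
clf = lambda z: int(z.sum() > 0)
rng = np.random.default_rng(0)
x = np.array([1.0, 1.0])
label, p_hat = smoothed_predict(clf, x, sigma=0.5, n_samples=2000, rng=rng)
radius = certified_l2_radius(min(p_hat, 0.999), sigma=0.5)
```

The paper's hardness result concerns what happens to such certificates against $\ell_p$ attacks with $p > 2$ as the input dimension grows.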

Adversarial Robustness

Differentially Private Covariance Estimation

no code implementations · NeurIPS 2019 · Kareem Amin, Travis Dick, Alex Kulesza, Andres Munoz, Sergei Vassilvitskii

The covariance matrix of a dataset is a fundamental statistic that can be used for calculating optimum regression weights as well as in many other learning and data analysis settings.
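A common baseline for this problem, sketched here under an assumed row-norm clipping and not necessarily the paper's algorithm, releases the Gram matrix with symmetric Gaussian noise via the Gaussian mechanism.

```python
import numpy as np

def dp_covariance(X, eps, delta, clip_norm, rng):
    """(eps, delta)-DP estimate of X^T X / n via the Gaussian
    mechanism: clip each row to norm `clip_norm`, then add a symmetric
    Gaussian noise matrix.  Adding or removing one row changes X^T X
    by at most clip_norm**2 in Frobenius norm."""
    X = np.asarray(X, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    gram = X.T @ X
    sensitivity = clip_norm ** 2
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    d = gram.shape[0]
    noise = rng.normal(scale=sigma, size=(d, d))
    noise = (noise + noise.T) / np.sqrt(2)   # keep the output symmetric
    return (gram + noise) / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
cov_hat = dp_covariance(X, eps=1.0, delta=1e-5, clip_norm=3.0, rng=rng)
```

The noisy matrix can then be fed to downstream tasks such as ridge regression, at the cost of the noise scaling with the clipping bound squared.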

How much data is sufficient to learn high-performing algorithms? Generalization guarantees for data-driven algorithm design

no code implementations · 8 Aug 2019 · Maria-Florina Balcan, Dan DeBlasio, Travis Dick, Carl Kingsford, Tuomas Sandholm, Ellen Vitercik

We provide a broadly applicable theory for deriving generalization guarantees that bound the difference between the algorithm's average performance over the training set and its expected performance.
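Guarantees of this type usually take the standard uniform-convergence shape; a representative statement (not the paper's exact bound), assuming the utility functions $u_\rho$ take values in $[0, H]$ and the family $\{u_\rho\}$ has pseudo-dimension $d$: with probability $1 - \delta$ over $m$ i.i.d. training instances $x_1, \dots, x_m \sim \mathcal{D}$, simultaneously for all parameters $\rho$,

```latex
\left| \frac{1}{m}\sum_{i=1}^{m} u_{\rho}(x_i) \;-\; \mathbb{E}_{x \sim \mathcal{D}}\!\left[ u_{\rho}(x) \right] \right|
\;=\; O\!\left( H \sqrt{\frac{d \ln m + \ln(1/\delta)}{m}} \right).
```

The technical work is then bounding the pseudo-dimension $d$ for each algorithm family of interest.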

Clustering · Generalization Bounds

Learning piecewise Lipschitz functions in changing environments

no code implementations · 22 Jul 2019 · Maria-Florina Balcan, Travis Dick, Dravyansh Sharma

We consider the class of piecewise Lipschitz functions, which is the most general online setting considered in the literature for the problem, and arises naturally in various combinatorial algorithm selection problems where utility functions can have sharp discontinuities.

Clustering · Online Clustering

Learning to Link

no code implementations · ICLR 2020 · Maria-Florina Balcan, Travis Dick, Manuel Lang

Clustering is an important part of many modern data analysis pipelines, including network analysis and data retrieval.

Clustering · Metric Learning +1

Semi-bandit Optimization in the Dispersed Setting

no code implementations · 18 Apr 2019 · Maria-Florina Balcan, Travis Dick, Wesley Pegden

We apply our semi-bandit results to obtain the first provable guarantees for data-driven algorithm design for linkage-based clustering and we improve the best regret bounds for designing greedy knapsack algorithms.

Clustering

Envy-Free Classification

no code implementations · NeurIPS 2019 · Maria-Florina Balcan, Travis Dick, Ritesh Noothigattu, Ariel D. Procaccia

In classic fair division problems such as cake cutting and rent division, envy-freeness requires that each individual (weakly) prefer his allocation to anyone else's.
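Translated to classification, envy-freeness says no individual would rather receive another individual's predicted outcome. A minimal checker, with the individuals' utilities supplied as an assumed input (the two-person example is made up):

```python
def is_envy_free(assignments, utilities):
    """Check envy-freeness: every individual i (weakly) prefers their
    own outcome to everyone else's.  `assignments[i]` is the outcome
    given to i; `utilities[i][o]` is i's utility for outcome o."""
    n = len(assignments)
    for i in range(n):
        own = utilities[i][assignments[i]]
        for j in range(n):
            if utilities[i][assignments[j]] > own:
                return False   # i envies j's assignment
    return True

# Two individuals, two outcomes; each gets their favorite.
utilities = {0: {"a": 1.0, "b": 0.0}, 1: {"a": 0.0, "b": 1.0}}
ok = is_envy_free({0: "a", 1: "b"}, utilities)
bad = is_envy_free({0: "b", 1: "a"}, utilities)
```

The learning question the paper studies is finding low-error classifiers that satisfy this constraint, not merely verifying it.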

Classification Fairness +1

Learning to Branch

no code implementations · ICML 2018 · Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, Ellen Vitercik

Tree search algorithms recursively partition the search space to find an optimal solution.

Variable Selection

Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

no code implementations · 8 Nov 2017 · Maria-Florina Balcan, Travis Dick, Ellen Vitercik

We present general techniques for online and private optimization of the sum of dispersed piecewise Lipschitz functions.

Differentially Private Clustering in High-Dimensional Euclidean Spaces

no code implementations · ICML 2017 · Maria-Florina Balcan, Travis Dick, Yingyu Liang, Wenlong Mou, Hongyang Zhang

We study the problem of clustering sensitive data while preserving the privacy of individuals represented in the dataset, which has broad applications in practical machine learning and data analysis tasks.
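One common ingredient in differentially private clustering, sketched here with hypothetical parameters (the paper's high-dimensional algorithm is more involved), is a Lloyd-style update that releases noisy per-cluster sums and counts:

```python
import numpy as np

def dp_lloyd_step(X, centers, eps, bound, rng):
    """One differentially private k-means update: assign points to the
    nearest center, then compute new centers from Laplace-noised
    cluster sums and counts.  Rows are clipped to L1 norm `bound`, so
    one record changes a cluster sum by at most `bound` and a count by
    1; splitting eps between the two keeps the step eps-DP."""
    X = np.asarray(X, dtype=float)
    l1 = np.abs(X).sum(axis=1, keepdims=True)
    X = X * np.minimum(1.0, bound / np.maximum(l1, 1e-12))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    new_centers = centers.copy()
    for k in range(len(centers)):
        pts = X[labels == k]
        noisy_sum = pts.sum(axis=0) + rng.laplace(
            scale=2 * bound / eps, size=X.shape[1])
        noisy_count = max(1.0, len(pts) + rng.laplace(scale=2 / eps))
        new_centers[k] = noisy_sum / noisy_count
    return new_centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
centers = dp_lloyd_step(X, np.array([[-1.0, 0.0], [1.0, 0.0]]),
                        eps=5.0, bound=10.0, rng=rng)
```

Iterating such steps consumes privacy budget per round, which is one reason high-dimensional private clustering needs more careful algorithms.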

Clustering · Vocal Bursts Intensity Prediction

Data Driven Resource Allocation for Distributed Learning

no code implementations · 15 Dec 2015 · Travis Dick, Mu Li, Venkata Krishna Pillutla, Colin White, Maria Florina Balcan, Alex Smola

In distributed machine learning, data is dispatched to multiple machines for processing.

Label Efficient Learning by Exploiting Multi-class Output Codes

no code implementations · 10 Nov 2015 · Maria Florina Balcan, Travis Dick, Yishay Mansour

We present a new perspective on the popular multi-class algorithmic techniques of one-vs-all and error correcting output codes.
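Both techniques fit one template: a code matrix whose rows are class codewords and whose columns are binary tasks, with one-vs-all corresponding to the identity matrix. A minimal decoder by Hamming distance (the 3-class, 5-bit code below is a made-up example, not from the paper):

```python
import numpy as np

# Code matrix: each row is a class's binary codeword; each column
# defines one binary classification task.  One-vs-all would be the
# identity matrix; this redundant code tolerates some bit errors.
CODE = np.array([
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
])

def ecoc_decode(bit_predictions):
    """Predict the class whose codeword has the smallest Hamming
    distance to the vector of binary-classifier outputs."""
    bits = np.asarray(bit_predictions)
    distances = (CODE != bits).sum(axis=1)
    return int(distances.argmin())

# Even with one of the five binary predictions flipped, the
# redundancy recovers class 1 (codeword 01011, observed 01001).
pred = ecoc_decode([0, 1, 0, 0, 1])
```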
