Search Results for author: Travis Dick

Found 15 papers, 3 papers with code

Scalable and Provably Accurate Algorithms for Differentially Private Distributed Decision Tree Learning

1 code implementation • 19 Dec 2020 • Kaiwen Wang, Travis Dick, Maria-Florina Balcan

We provide the first utility guarantees for differentially private top-down decision tree learning in both the single machine and distributed settings.

Random Smoothing Might be Unable to Certify $\ell_\infty$ Robustness for High-Dimensional Images

1 code implementation • 10 Feb 2020 • Avrim Blum, Travis Dick, Naren Manoj, Hongyang Zhang

We show a hardness result for random smoothing to achieve certified adversarial robustness against attacks in the $\ell_p$ ball of radius $\epsilon$ when $p>2$.

Adversarial Robustness
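Since the paper concerns randomized smoothing, a minimal sketch of the standard smoothed-classifier prediction may help ground the term: classify by majority vote of a base classifier over Gaussian perturbations of the input. The base classifier below is a hypothetical toy, not the paper's model.

```python
import numpy as np

def base_classifier(x):
    # Hypothetical toy base classifier: class 1 iff the first coordinate is positive.
    return int(x[0] > 0)

def smoothed_predict(x, sigma=0.5, n_samples=1000, seed=0):
    """Predict with the Gaussian-smoothed classifier
    g(x) = argmax_c Pr[f(x + N(0, sigma^2 I)) = c], estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples, len(x)))
    votes = np.bincount([base_classifier(x + z) for z in noise], minlength=2)
    return int(np.argmax(votes))

print(smoothed_predict(np.array([1.0, 0.0])))  # prints 1
```

The hardness result above concerns exactly how far certificates derived from such smoothed classifiers can reach: for $\ell_p$ balls with $p > 2$, the certifiable radius degrades in high dimensions.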

Differentially Private Covariance Estimation

no code implementations • NeurIPS 2019 • Kareem Amin, Travis Dick, Alex Kulesza, Andres Munoz, Sergei Vassilvitskii

The covariance matrix of a dataset is a fundamental statistic that can be used for calculating optimum regression weights as well as in many other learning and data analysis settings.
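A small numerical illustration of that connection (a generic sketch, not the paper's private estimator): the second-moment statistics $X^\top X$ and $X^\top y$ are sufficient to recover the optimum least-squares regression weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w  # noiseless targets, so least squares recovers true_w exactly

# Covariance-style sufficient statistics for regression:
cov_xx = X.T @ X
cov_xy = X.T @ y
w = np.linalg.solve(cov_xx, cov_xy)  # recovers [2., -1., 0.5]
```

A differentially private covariance estimate can therefore be reused for any number of downstream regressions without further privacy cost.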

How much data is sufficient to learn high-performing algorithms? Generalization guarantees for data-driven algorithm design

no code implementations • 8 Aug 2019 • Maria-Florina Balcan, Dan DeBlasio, Travis Dick, Carl Kingsford, Tuomas Sandholm, Ellen Vitercik

We provide a broadly applicable theory for deriving generalization guarantees that bound the difference between the algorithm's average performance over the training set and its expected performance.

Generalization Bounds
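Schematically, a guarantee of this kind bounds, uniformly over the parameterized family of algorithms $\mathcal{A}$, the gap between empirical and expected performance (a generic uniform-convergence statement, not the paper's specific bound):

$\sup_{A \in \mathcal{A}} \left| \frac{1}{N} \sum_{i=1}^{N} u_A(x_i) - \mathbb{E}_{x \sim \mathcal{D}}\left[ u_A(x) \right] \right| \le \epsilon,$

where $u_A(x)$ denotes the performance of algorithm $A$ on problem instance $x$, the $x_i$ are the $N$ training instances, and $\mathcal{D}$ is the distribution over instances.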

Learning piecewise Lipschitz functions in changing environments

no code implementations • 22 Jul 2019 • Maria-Florina Balcan, Travis Dick, Dravyansh Sharma

We consider the class of piecewise Lipschitz functions, which is the most general online setting considered in the literature for the problem, and arises naturally in various combinatorial algorithm selection problems where utility functions can have sharp discontinuities.

Online Clustering

Learning to Link

no code implementations • ICLR 2020 • Maria-Florina Balcan, Travis Dick, Manuel Lang

Clustering is an important part of many modern data analysis pipelines, including network analysis and data retrieval.

Metric Learning

Semi-bandit Optimization in the Dispersed Setting

no code implementations • 18 Apr 2019 • Maria-Florina Balcan, Travis Dick, Wesley Pegden

We apply our semi-bandit results to obtain the first provable guarantees for data-driven algorithm design for linkage-based clustering and we improve the best regret bounds for designing greedy knapsack algorithms.

Envy-Free Classification

no code implementations • NeurIPS 2019 • Maria-Florina Balcan, Travis Dick, Ritesh Noothigattu, Ariel D. Procaccia

In classic fair division problems such as cake cutting and rent division, envy-freeness requires that each individual (weakly) prefer his allocation to anyone else's.

Classification • Fairness
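The envy-freeness condition quoted above is easy to state in code; a minimal sketch with a made-up utility matrix, where `utility[i][j]` is the value individual i assigns to individual j's allocation:

```python
def is_envy_free(utility):
    """Envy-free iff every individual i weakly prefers their own allocation:
    utility[i][i] >= utility[i][j] for all j."""
    n = len(utility)
    return all(utility[i][i] >= utility[i][j]
               for i in range(n) for j in range(n))

print(is_envy_free([[3, 2], [1, 4]]))  # True: each prefers their own share
print(is_envy_free([[2, 3], [1, 4]]))  # False: individual 0 envies individual 1
```

The paper carries this criterion from allocation problems over to the classification setting.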

Learning to Branch

no code implementations • ICML 2018 • Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, Ellen Vitercik

Tree search algorithms recursively partition the search space to find an optimal solution.

Variable Selection
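As a concrete instance of the recursive partitioning described above, here is a minimal exhaustive tree search for 0/1 knapsack, branching on one variable at a time (no bounding and no learned variable-selection policy, which is where data-driven branching would enter):

```python
def knapsack_tree_search(values, weights, capacity):
    """Recursively partition the search space by branching on each item:
    one subtree skips item i, the other takes it (if it fits)."""
    n = len(values)

    def search(i, cap):
        if i == n:
            return 0
        best = search(i + 1, cap)  # branch: skip item i
        if weights[i] <= cap:      # branch: take item i
            best = max(best, values[i] + search(i + 1, cap - weights[i]))
        return best

    return search(0, capacity)

print(knapsack_tree_search([6, 10, 12], [1, 2, 3], 5))  # prints 22
```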

Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

no code implementations • 8 Nov 2017 • Maria-Florina Balcan, Travis Dick, Ellen Vitercik

We present general techniques for online and private optimization of the sum of dispersed piecewise Lipschitz functions.

Online Learning

Differentially Private Clustering in High-Dimensional Euclidean Spaces

no code implementations • ICML 2017 • Maria-Florina Balcan, Travis Dick, Yingyu Liang, Wenlong Mou, Hongyang Zhang

We study the problem of clustering sensitive data while preserving the privacy of individuals represented in the dataset, which has broad applications in practical machine learning and data analysis tasks.

Data Driven Resource Allocation for Distributed Learning

no code implementations • 15 Dec 2015 • Travis Dick, Mu Li, Venkata Krishna Pillutla, Colin White, Maria-Florina Balcan, Alex Smola

In distributed machine learning, data is dispatched to multiple machines for processing.

Label Efficient Learning by Exploiting Multi-class Output Codes

no code implementations • 10 Nov 2015 • Maria-Florina Balcan, Travis Dick, Yishay Mansour

We present a new perspective on the popular multi-class algorithmic techniques of one-vs-all and error correcting output codes.
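Both techniques reduce multi-class prediction to binary subproblems; a minimal decoding sketch with a hypothetical 3-class code matrix (one-vs-all corresponds to the identity code, and decoding picks the class whose codeword is nearest in Hamming distance):

```python
import numpy as np

# Hypothetical 3-class code matrix: row c is the codeword for class c,
# and each column defines one binary subproblem.
code = np.array([[1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 1]])

def decode(binary_preds):
    """Map the binary predictors' outputs to the class whose codeword
    is closest in Hamming distance (ties go to the lowest class index)."""
    dists = np.sum(code != np.asarray(binary_preds), axis=1)
    return int(np.argmin(dists))

print(decode([1, 0, 0]))  # prints 0: exact match with class 0's codeword
print(decode([1, 1, 0]))  # prints 0: classes 0 and 1 tie; argmin picks class 0
```

With a longer, redundant code matrix, the decoder can correct errors made by individual binary predictors, which is the "error correcting" part of the name.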
