Search Results for author: Arturs Backurs

Found 10 papers, 5 papers with code

Differentially Private Fine-tuning of Language Models

1 code implementation ICLR 2022 Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang

For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$.

Text Generation
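
The accuracies above come from fine-tuning under a differential privacy budget. As background, here is a minimal sketch of the standard DP-SGD update (per-example gradient clipping plus Gaussian noise, in the style of Abadi et al.); the clipping norm `C`, noise multiplier `sigma`, and learning rate are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def dp_sgd_step(per_example_grads, C=1.0, sigma=0.5, lr=0.1, rng=None):
    """One DP-SGD update: clip each example's gradient to L2 norm C,
    average, and add Gaussian noise calibrated to the clipping norm.
    per_example_grads: array of shape (n, d). Returns the parameter update."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    noisy_mean = clipped.mean(axis=0) + rng.normal(0.0, sigma * C / n, size=d)
    return -lr * noisy_mean
```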

Faster Kernel Matrix Algebra via Density Estimation

no code implementations 16 Feb 2021 Arturs Backurs, Piotr Indyk, Cameron Musco, Tal Wagner

In particular, we consider estimating the sum of kernel matrix entries, along with its top eigenvalue and eigenvector.

Density Estimation
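
As a concrete illustration of the quantity being estimated: the sum of kernel matrix entries is $\sum_{i,j} k(x_i, x_j)$. The sketch below contrasts the exact $O(n^2)$ computation with a naive uniform-sampling estimator; the paper's density-estimation-based estimators are more sophisticated, and the Laplacian kernel and sample size here are assumptions chosen for illustration.

```python
import numpy as np

def kernel_sum_exact(X):
    """Exact sum of kernel matrix entries, sum_{i,j} exp(-||x_i - x_j||_2)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-D).sum()

def kernel_sum_sampled(X, m=200, rng=None):
    """Monte Carlo estimate from m uniformly sampled entry pairs (i, j)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(X)
    i, j = rng.integers(n, size=m), rng.integers(n, size=m)
    vals = np.exp(-np.linalg.norm(X[i] - X[j], axis=-1))
    return n * n * vals.mean()  # unbiased: n^2 * mean of sampled entries

X = np.random.default_rng(1).normal(size=(500, 10))
print(kernel_sum_exact(X), kernel_sum_sampled(X))
```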

Data-to-text Generation by Splicing Together Nearest Neighbors

1 code implementation EMNLP 2021 Sam Wiseman, Arturs Backurs, Karl Stratos

We propose to tackle data-to-text generation tasks by directly splicing together retrieved segments of text from "neighbor" source-target pairs.

Conditional Text Generation, Data-to-Text Generation

Impossibility Results for Grammar-Compressed Linear Algebra

no code implementations NeurIPS 2020 Amir Abboud, Arturs Backurs, Karl Bringmann, Marvin Künnemann

In this paper we consider lossless compression schemes and ask whether we can run our computations on the compressed data as efficiently as if the original data were that small.

Active Local Learning

no code implementations 31 Aug 2020 Arturs Backurs, Avrim Blum, Neha Gupta

In particular, the number of label queries should be independent of the complexity of $H$, and the function $h$ should be well-defined, independent of $x$.

Space and Time Efficient Kernel Density Estimation in High Dimensions

1 code implementation NeurIPS 2019 Arturs Backurs, Piotr Indyk, Tal Wagner

We instantiate our framework with the Laplacian and Exponential kernels, two popular kernels which possess the aforementioned property.

Density Estimation
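
For reference, the kernel density estimate at a query $q$ is $\mathrm{KDE}(q) = \frac{1}{n}\sum_i k(q, x_i)$, e.g. $k(q, x) = e^{-\|q - x\|/\sigma}$ for the Laplacian kernel (conventions vary between the L1 and L2 norms; L2 is used below). A naive $O(n)$-per-query evaluation, which the paper's space- and time-efficient data structure is designed to beat, might look like this; the bandwidth and data are illustrative assumptions.

```python
import numpy as np

def laplacian_kde(queries, X, sigma=1.0):
    """Naive kernel density estimation with a Laplacian-type kernel:
    KDE(q) = (1/n) * sum_i exp(-||q - x_i||_2 / sigma).
    Costs O(n) per query; the paper's sketch avoids storing/scanning all of X."""
    D = np.linalg.norm(queries[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-D / sigma).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
print(laplacian_kde(rng.normal(size=(3, 50)), X))
```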

Scalable Nearest Neighbor Search for Optimal Transport

1 code implementation ICML 2020 Arturs Backurs, Yihe Dong, Piotr Indyk, Ilya Razenshteyn, Tal Wagner

Our extensive experiments, on real-world text and image datasets, show that Flowtree improves over various baselines and existing methods in either running time or accuracy.

Data Structures and Algorithms
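
For context, nearest neighbor search under optimal transport means finding, for a query distribution, the dataset distribution with the smallest transport cost. Below is a brute-force baseline using the POT library's exact EMD solver; this is the kind of exact baseline Flowtree is designed to outperform, not the Flowtree algorithm itself, and the shared support and Dirichlet weights are illustrative assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def nn_under_ot(query_w, dataset_ws, support, metric="euclidean"):
    """Brute-force 1-NN under exact optimal transport cost.
    query_w: weights of the query distribution over the `support` points.
    dataset_ws: list of weight vectors over the same support."""
    M = ot.dist(support, support, metric=metric)   # ground cost matrix
    costs = [ot.emd2(query_w, w, M) for w in dataset_ws]
    return int(np.argmin(costs)), costs

rng = np.random.default_rng(0)
support = rng.normal(size=(20, 2))                 # shared support points
ws = [rng.dirichlet(np.ones(20)) for _ in range(5)]
print(nn_under_ot(ws[0], ws, support))             # query matches itself, cost 0
```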

Scalable Fair Clustering

1 code implementation 10 Feb 2019 Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, Tal Wagner

In the fair variant of $k$-median, the points are colored, and the goal is to minimize the same average distance objective while ensuring that all clusters have an "approximately equal" number of points of each color.

Fairness
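
The fairness constraint quoted above is easy to state in code: every cluster's color proportions should stay close to the global proportions. A minimal checker, assuming an additive balance tolerance, is sketched below; the clustering algorithm itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def is_approximately_balanced(labels, colors, tol=0.1):
    """Check the fair-clustering constraint: in every cluster, each color's
    fraction is within +/- tol of that color's global fraction.
    labels: cluster index per point; colors: color label per point."""
    labels, colors = np.asarray(labels), np.asarray(colors)
    global_frac = {c: np.mean(colors == c) for c in np.unique(colors)}
    for k in np.unique(labels):
        members = colors[labels == k]
        for c, g in global_frac.items():
            if abs(np.mean(members == c) - g) > tol:
                return False
    return True

print(is_approximately_balanced([0, 0, 1, 1], ["r", "b", "r", "b"]))  # True
```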

Improving Viterbi is Hard: Better Runtimes Imply Faster Clique Algorithms

no code implementations ICML 2017 Arturs Backurs, Christos Tzamos

The classic algorithm of Viterbi computes the most likely path in a Hidden Markov Model (HMM) that results in a given sequence of observations.
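
For reference, here is a standard log-space Viterbi implementation; the paper's hardness result says that substantially improving its $O(Tn^2)$ running time would imply faster clique algorithms. The toy HMM parameters in the usage line are illustrative.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden state path for an observation sequence.
    log_pi: (n,) initial log-probs; log_A: (n, n) transition log-probs;
    log_B: (n, m) emission log-probs. Runs in O(T * n^2) time."""
    T, n = len(obs), len(log_pi)
    dp = np.empty((T, n))
    back = np.zeros((T, n), dtype=int)
    dp[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_A   # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):             # backtrack through predecessors
        path.append(int(back[t][path[-1]]))
    return path[::-1]

logs = lambda p: np.log(np.asarray(p))
print(viterbi([0, 1, 0], logs([0.6, 0.4]),
              logs([[0.7, 0.3], [0.4, 0.6]]),
              logs([[0.9, 0.1], [0.2, 0.8]])))
```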

On the Fine-Grained Complexity of Empirical Risk Minimization: Kernel Methods and Neural Networks

no code implementations NeurIPS 2017 Arturs Backurs, Piotr Indyk, Ludwig Schmidt

We also give similar hardness results for computing the gradient of the empirical loss, which is the main computational burden in many non-convex learning tasks.
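
The gradient in question is $\nabla_w \frac{1}{n}\sum_i \ell(f(x_i; w), y_i)$, which requires touching all $n$ examples. As one instance of the kernel ERM problems the paper studies (the Gaussian kernel and squared loss here are assumptions chosen for illustration), the sketch below shows why forming the kernel matrix makes this a quadratic-time bottleneck.

```python
import numpy as np

def erm_gradient(alpha, X, y, sigma=1.0):
    """Gradient of the empirical squared loss R(alpha) = (1/n)||K @ alpha - y||^2
    for a Gaussian-kernel predictor f(x) = sum_j alpha_j * k(x_j, x).
    Forming the n x n kernel matrix costs Theta(n^2 d) -- the bottleneck
    addressed by the paper's fine-grained hardness results."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma**2))      # n x n Gaussian kernel matrix
    residual = K @ alpha - y              # predictions minus targets
    return (2.0 / len(y)) * (K @ residual)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.normal(size=200)
print(erm_gradient(np.zeros(200), X, y)[:3])
```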
