Search Results for author: Ashish Khisti

Found 17 papers, 6 papers with code

Rate-Distortion-Perception Tradeoff Based on the Conditional-Distribution Perception Measure

no code implementations • 22 Jan 2024 • Sadaf Salehkalaibar, Jun Chen, Ashish Khisti, Wei Yu

We derive the RDP function for vector Gaussian sources and propose a waterfilling-type solution.
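
As a point of reference, classical reverse waterfilling for the rate-distortion function of an independent vector Gaussian source can be sketched in a few lines. This is a minimal illustration of the waterfilling flavor only; it does not model the conditional-distribution perception constraint the paper adds, and the function name and bisection tolerance are ours.

```python
import numpy as np

def reverse_waterfilling(variances, D, tol=1e-9):
    """Classical reverse waterfilling for the rate-distortion function of an
    independent Gaussian vector source. Component i gets distortion
    D_i = min(lam, var_i); the water level lam is found by bisection so that
    sum_i D_i = D. Returns the rate (in nats) and the per-component D_i."""
    variances = np.asarray(variances, dtype=float)
    lo, hi = 0.0, variances.max()
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.minimum(lam, variances).sum() > D:
            hi = lam  # too much total distortion: lower the water level
        else:
            lo = lam
    D_i = np.minimum(lo, variances)
    rate = 0.5 * np.log(variances / D_i).sum()
    return rate, D_i

# Example: three-component source with variances (4, 1, 0.25), budget D = 1.5
rate, per_component = reverse_waterfilling([4.0, 1.0, 0.25], D=1.5)
```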

Random Edge Coding: One-Shot Bits-Back Coding of Large Labeled Graphs

1 code implementation • 16 May 2023 • Daniel Severo, James Townsend, Ashish Khisti, Alireza Makhzani

We present a one-shot method for compressing large labeled graphs called Random Edge Coding.

Quadratic Functional Encryption for Secure Training in Vertical Federated Learning

no code implementations • 15 May 2023 • Shuangyi Chen, Anuja Modi, Shweta Agrawal, Ashish Khisti

Vertical federated learning (VFL) enables the collaborative training of machine learning (ML) models in settings where the data is distributed amongst multiple parties who wish to protect the privacy of their individual data.

Vertical Federated Learning
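
A vertical split partitions feature columns rather than samples. The hypothetical numpy illustration below makes this concrete; party names and shapes are ours, and the paper's actual protocol additionally protects the per-party shares with quadratic functional encryption.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))  # 1000 shared samples, 12 features in total

# Each party holds the *same* rows but a disjoint set of feature columns,
# in contrast to horizontal FL, where parties hold disjoint rows.
party_a = X[:, :5]    # party A: features 0-4
party_b = X[:, 5:9]   # party B: features 5-8
party_c = X[:, 9:]    # party C: features 9-11

# Joint training must combine per-party partial results without revealing
# the raw columns; secure primitives replace this plaintext concatenation.
assert np.allclose(np.hstack([party_a, party_b, party_c]), X)
```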

Sequential Gradient Coding For Straggler Mitigation

no code implementations • 24 Nov 2022 • M. Nikhil Krishnan, MohammadReza Ebrahimi, Ashish Khisti

In our second scheme, which constitutes our main contribution, we apply gradient coding (GC) to a subset of the tasks and repetition coding to the remainder.

Distributed Computing
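
As background on the ingredients, below is a minimal sketch of fractional-repetition gradient coding in the style of Tandon et al.; helper names are ours, and the paper's sequential scheme, which interleaves GC and repetition across rounds, is not reproduced.

```python
import numpy as np

def assign_groups(n_workers, s):
    """Split workers into groups of size s+1; every worker in group g computes
    the gradient of data partition g, so any s stragglers still leave at
    least one copy of each partition."""
    assert n_workers % (s + 1) == 0
    n_groups = n_workers // (s + 1)
    return {w: w % n_groups for w in range(n_workers)}

def aggregate(partial_grads, groups, n_groups):
    """Recover the full gradient from the responding workers by taking one
    response per group and summing."""
    seen, total = set(), 0.0
    for w, grad in partial_grads.items():
        if groups[w] not in seen:
            seen.add(groups[w])
            total = total + grad
    assert len(seen) == n_groups, "too many stragglers"
    return total

groups = assign_groups(n_workers=6, s=1)  # 3 groups of 2; any 1 straggler is tolerated
# Workers 0 and 4 straggled; they sit in different groups, so one copy of
# every partition still arrived and recovery succeeds.
responses = {1: np.array([0.1, 0.2]),     # group 1
             2: np.array([0.3, 0.0]),     # group 2
             3: np.array([-0.2, 0.4]),    # group 0
             5: np.array([0.3, 0.0])}     # group 2, same partition as worker 2
full_grad = aggregate(responses, groups, n_groups=3)
```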

Variational Model Inversion Attacks

1 code implementation • NeurIPS 2021 • Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani

In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy.

Compressing Multisets with Large Alphabets using Bits-Back Coding

1 code implementation • 15 Jul 2021 • Daniel Severo, James Townsend, Ashish Khisti, Alireza Makhzani, Karen Ullrich

Current methods which compress multisets at an optimal rate have computational complexity that scales linearly with alphabet size, making them too slow to be practical in many real-world settings.
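
The rate gap at stake is easy to quantify: encoding a multiset instead of an ordered sequence saves up to the log of the number of distinct orderings, which bits-back coding aims to recover. A small self-contained computation (function name is ours):

```python
import random
from collections import Counter
from math import factorial, log2

def multiset_savings_bits(items):
    """Bits saved by treating `items` as a multiset rather than a sequence:
    log2 of the number of distinct orderings, n! / (m_1! m_2! ...), where
    the m_i are the element multiplicities."""
    counts = Counter(items)
    orderings = factorial(sum(counts.values()))
    for m in counts.values():
        orderings //= factorial(m)
    return log2(orderings)

random.seed(0)
data = [random.randrange(16) for _ in range(1000)]  # 1000 symbols, alphabet 16
print(f"{multiset_savings_bits(data):.0f} bits of order information")
```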

Regularized Classification-Aware Quantization

1 code implementation • 12 Jul 2021 • Daniel Severo, Elad Domanovitz, Ashish Khisti

Our method performs well on unseen data and is faster than previous methods by a factor that grows quadratically with the dataset size.

Binary Classification • Classification • +1

Universal Rate-Distortion-Perception Representations for Lossy Compression

no code implementations • NeurIPS 2021 • George Zhang, Jingjing Qian, Jun Chen, Ashish Khisti

In the context of lossy compression, Blau & Michaeli (2019) adopt a mathematical notion of perceptual quality and define the information rate-distortion-perception function, generalizing the classical rate-distortion tradeoff.

Image Compression
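
For reference, the information rate-distortion-perception function of Blau & Michaeli (2019) constrains both an expected distortion and a divergence between the source and reconstruction distributions (Δ is a distortion measure, d a divergence between distributions):

```latex
R(D, P) \;=\; \min_{p_{\hat{X} \mid X}} \; I(X; \hat{X})
\quad \text{s.t.} \quad
\mathbb{E}\big[\Delta(X, \hat{X})\big] \le D,
\qquad
d\big(p_X, p_{\hat{X}}\big) \le P .
```

Setting P = ∞ removes the perception constraint and recovers the classical rate-distortion function.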

On the Generalization of Stochastic Gradient Descent with Momentum

no code implementations • 26 Feb 2021 • Ali Ramezani-Kebrya, Ashish Khisti, Ben Liang

While momentum-based methods, in conjunction with stochastic gradient descent (SGD), are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods.

BIG-bench Machine Learning
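
For concreteness, the SGD-with-momentum (heavy-ball) update studied in this line of work takes the standard form below, with momentum parameter μ, step size η, and sampled example z_{i_t}; the paper's exact parameterization may differ.

```latex
v_{t+1} \;=\; \mu\, v_t \;-\; \eta\, \nabla f(w_t;\, z_{i_t}),
\qquad
w_{t+1} \;=\; w_t \;+\; v_{t+1} .
```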

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

1 code implementation • ICLR Workshop Neural_Compression 2021 • Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison

Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.

Data Compression

Coded Sequential Matrix Multiplication For Straggler Mitigation

no code implementations • NeurIPS 2020 • Nikhil Krishnan Muralee Krishnan, Seyederfan Hosseini, Ashish Khisti

Our first scheme is a modification of the polynomial coding scheme introduced by Yu et al. and makes no assumptions on the straggler model.
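
For background, here is a toy version of polynomial coding in the spirit of Yu et al. for distributed matrix-vector multiplication; evaluation points and block shapes are ours, and the paper's modified scheme for sequential jobs is not reproduced.

```python
import numpy as np

k, n = 3, 5                       # A split into 3 blocks; 5 workers; any 3 suffice
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
x = rng.normal(size=4)
blocks = np.split(A, k, axis=0)   # A_0, A_1, A_2, each 2 x 4

alphas = np.arange(1.0, n + 1)    # one distinct evaluation point per worker
# Worker i stores the coded block A~_i = sum_j A_j * alpha_i**j ...
coded = [sum(A_j * a**j for j, A_j in enumerate(blocks)) for a in alphas]
results = {i: coded[i] @ x for i in range(n)}  # ... and computes A~_i @ x

# Any k responses determine the degree-(k-1) vector polynomial
# p(a) = sum_j (A_j @ x) a^j; interpolate to recover every A_j @ x.
survivors = [0, 2, 4]             # workers 1 and 3 straggled
V = np.vander(alphas[survivors], k, increasing=True)
Y = np.stack([results[i] for i in survivors])
recovered = np.linalg.solve(V, Y)  # row j equals A_j @ x
assert np.allclose(np.concatenate(recovered), A @ x)
```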

Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms

no code implementations • NeurIPS 2020 • Mahdi Haghifam, Jeffrey Negrea, Ashish Khisti, Daniel M. Roy, Gintare Karolina Dziugaite

Finally, we apply these bounds to the study of the Langevin dynamics algorithm, showing that conditioning on the supersample allows us to exploit information in the optimization trajectory to obtain tighter bounds based on hypothesis tests.

Generalization Bounds
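
The Langevin dynamics iterates referenced above perturb each (empirical-risk) gradient step with isotropic Gaussian noise; in one standard parameterization, with step size η and inverse temperature β (the paper's exact scaling may differ):

```latex
W_{t+1} \;=\; W_t \;-\; \eta\, \nabla \widehat{L}(W_t) \;+\; \sqrt{\tfrac{2\eta}{\beta}}\;\varepsilon_t,
\qquad
\varepsilon_t \sim \mathcal{N}(0, I_d) .
```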

Sequential Classification with Empirically Observed Statistics

no code implementations • 3 Dec 2019 • Mahdi Haghifam, Vincent Y. F. Tan, Ashish Khisti

Motivated by real-world machine learning applications, we consider a statistical classification task in which test samples arrive sequentially.

Classification • General Classification • +1

Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates

1 code implementation • NeurIPS 2019 • Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, Daniel M. Roy

In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019).

Generalization Bounds

Stability of Stochastic Gradient Method with Momentum for Strongly Convex Loss Functions

no code implementations • ICLR 2019 • Ali Ramezani-Kebrya, Ashish Khisti, Ben Liang

While momentum-based methods, in conjunction with stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods.

On the Generalization of Stochastic Gradient Descent with Momentum

no code implementations • 12 Sep 2018 • Ali Ramezani-Kebrya, Kimon Antonakopoulos, Volkan Cevher, Ashish Khisti, Ben Liang

While momentum-based accelerated variants of stochastic gradient descent (SGD) are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods.

BIG-bench Machine Learning
