no code implementations • 29 Oct 2024 • M. Reza Ebrahimi, Jun Chen, Ashish Khisti
This paper investigates a novel lossy compression framework operating under logarithmic loss, designed to handle situations where the reconstruction distribution diverges from the source distribution.
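For context, the logarithmic loss used in such frameworks is the standard distortion measure between a source symbol x and a reconstruction distribution q (a textbook definition, not a quote from the paper):

$$ d(x, q) = \log \frac{1}{q(x)}, $$

so that smaller distortion corresponds to the reconstruction distribution assigning higher probability to the true symbol.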
no code implementations • 23 Oct 2024 • Ashish Khisti, M. Reza Ebrahimi, Hassan Dbouk, Arash Behboodi, Roland Memisevic, Christos Louizos
In this work we show that the optimal scheme can be decomposed into a two-step solution: in the first step an importance sampling (IS) type scheme is used to select one intermediate token; in the second step (single-draft) speculative sampling is applied to generate the output token.
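A minimal sketch of this two-step structure is given below, assuming hypothetical target and draft distributions p and q over a small vocabulary; the importance weights and the acceptance rule here are simple illustrative choices, not the paper's optimal scheme.

```python
import numpy as np

def speculative_sample(p, q, x):
    """Standard single-draft speculative sampling: accept draft token x ~ q
    with probability min(1, p[x]/q[x]); otherwise resample from the
    normalized residual max(p - q, 0)."""
    if np.random.rand() < min(1.0, p[x] / q[x]):
        return x
    residual = np.maximum(p - q, 0.0)
    return np.random.choice(len(p), p=residual / residual.sum())

def two_step_select(p, q, draft_tokens):
    """Step 1: pick one intermediate token from the multiple drafts using
    importance-sampling-style weights p[x]/q[x] (an illustrative choice).
    Step 2: run single-draft speculative sampling on that token."""
    weights = np.array([p[x] / q[x] for x in draft_tokens])
    weights = weights / weights.sum()
    chosen = draft_tokens[np.random.choice(len(draft_tokens), p=weights)]
    return speculative_sample(p, q, chosen)

# Hypothetical target/draft distributions over a 3-token vocabulary.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(two_step_select(p, q, draft_tokens=[0, 1, 1]))
```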
no code implementations • 4 Sep 2024 • Shuangyi Chen, Yue Ju, Hardik Dalal, Zhongwen Zhu, Ashish Khisti
Parameter-Efficient Fine-Tuning (PEFT) has emerged as an innovative training strategy that updates only a select few model parameters, significantly lowering both computational and memory demands.
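As a generic illustration of the PEFT idea (a LoRA-style sketch under our own assumptions, not the scheme proposed in this paper), the base weights are frozen and only a small low-rank adapter is trained:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update A @ B,
    so only rank * (in + out) parameters are updated during fine-tuning."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False        # freeze the full-model weights
        self.A = nn.Parameter(torch.zeros(base.out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)

    def forward(self, x):
        return self.base(x) + x @ (self.A @ self.B).T
```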
no code implementations • 22 Jan 2024 • Sadaf Salehkalaibar, Jun Chen, Ashish Khisti, Wei Yu
We derive the RDP function for vector Gaussian sources and propose a waterfilling type solution.
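For context, in the classical (perception-free) setting the rate-distortion function of a vector Gaussian source with component variances σ_i² is given by reverse water-filling, a standard result stated here only as background; the RDP solution in the paper generalizes this type of allocation:

$$ R(D) = \sum_i \frac{1}{2} \log \frac{\sigma_i^2}{D_i}, \qquad D_i = \min(\lambda, \sigma_i^2), \qquad \sum_i D_i = D. $$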
1 code implementation • 16 May 2023 • Daniel Severo, James Townsend, Ashish Khisti, Alireza Makhzani
We present a one-shot method for compressing large labeled graphs called Random Edge Coding.
no code implementations • 15 May 2023 • Shuangyi Chen, Anuja Modi, Shweta Agrawal, Ashish Khisti
Vertical federated learning (VFL) enables the collaborative training of machine learning (ML) models in settings where the data is distributed amongst multiple parties who wish to protect the privacy of their individual data.
no code implementations • 24 Nov 2022 • M. Nikhil Krishnan, MohammadReza Ebrahimi, Ashish Khisti
In our second scheme, which constitutes our main contribution, we apply GC to a subset of the tasks and use repetition for the remaining tasks.
1 code implementation • NeurIPS 2021 • Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani
In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy.
1 code implementation • 15 Jul 2021 • Daniel Severo, James Townsend, Ashish Khisti, Alireza Makhzani, Karen Ullrich
Current methods which compress multisets at an optimal rate have computational complexity that scales linearly with alphabet size, making them too slow to be practical in many real-world settings.
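For context, the potential rate saving from discarding order is a standard counting argument, not specific to this paper: coding a size-n multiset instead of an ordered sequence can save up to

$$ \log_2 n! \approx n \log_2 n - n \log_2 e \ \text{bits}, $$

which is why dedicated multiset coders are worthwhile; the obstacle addressed here is achieving this without a per-symbol cost that grows with the alphabet size.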
1 code implementation • 12 Jul 2021 • Daniel Severo, Elad Domanovitz, Ashish Khisti
Our method performs well on unseen data and is faster than previous methods, with a speedup proportional to a quadratic term in the dataset size.
no code implementations • NeurIPS 2021 • George Zhang, Jingjing Qian, Jun Chen, Ashish Khisti
In the context of lossy compression, Blau & Michaeli (2019) adopt a mathematical notion of perceptual quality and define the information rate-distortion-perception function, generalizing the classical rate-distortion tradeoff.
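In one standard notation, the information rate-distortion-perception function they define is

$$ R(D, P) \;=\; \min_{p_{\hat{X}\mid X}} \; I(X; \hat{X}) \quad \text{s.t.} \quad \mathbb{E}\big[\Delta(X, \hat{X})\big] \le D, \qquad d\big(p_X, p_{\hat{X}}\big) \le P, $$

where Δ is the distortion measure and d is a divergence between the source and reconstruction distributions.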
no code implementations • 26 Feb 2021 • Ali Ramezani-Kebrya, Ashish Khisti, Ben Liang
While momentum-based methods, in conjunction with stochastic gradient descent (SGD), are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods.
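A standard form of the momentum update in question is SGD with heavy-ball momentum (stated here from general knowledge; the paper may analyze a variant), with momentum parameter μ, step size η, and stochastic gradient on the sampled example i_t:

$$ v_{t+1} = \mu\, v_t + \nabla f_{i_t}(w_t), \qquad w_{t+1} = w_t - \eta\, v_{t+1}. $$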
1 code implementation • ICLR Workshop Neural_Compression 2021 • Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison
Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.
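For context, the net rate achieved by a bits-back coder on a latent-variable model is the negative evidence lower bound (a standard property of bits-back coding, not a claim about the couplings proposed here):

$$ \mathbb{E}_{q(z \mid x)}\big[ -\log p(x, z) + \log q(z \mid x) \big] \;=\; -\mathrm{ELBO}(x) \;\ge\; -\log p(x). $$

The initial bits discussed above are the extra bits needed to decode the first latent sample before this asymptotic rate is reached.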
no code implementations • NeurIPS 2020 • Nikhil Krishnan Muralee Krishnan, Seyederfan Hosseini, Ashish Khisti
Our first scheme is a modification of the polynomial coding scheme introduced by Yu et al. and places no assumptions on the straggler model.
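As background on the baseline being modified (the polynomial code of Yu et al., sketched from memory, so treat the exact indexing as illustrative): with A partitioned into m blocks and B into n blocks, worker i evaluates the encoded polynomials at its own point α_i,

$$ \tilde{A}(\alpha_i) = \sum_{j=0}^{m-1} A_j\, \alpha_i^{\,j}, \qquad \tilde{B}(\alpha_i) = \sum_{k=0}^{n-1} B_k\, \alpha_i^{\,km}, $$

and returns \tilde{A}(\alpha_i)^\top \tilde{B}(\alpha_i), a degree-(mn−1) polynomial in α_i whose coefficients are the blocks A_j^\top B_k; the master can therefore interpolate the full product from any mn worker results.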
no code implementations • 10 Jun 2020 • MohammadReza Ebrahimi, Navona Calarco, Kieran Campbell, Colin Hawco, Aristotle Voineskos, Ashish Khisti
Some recent work has implemented probabilistic models to extract a shared representation in task fMRI.
no code implementations • NeurIPS 2020 • Mahdi Haghifam, Jeffrey Negrea, Ashish Khisti, Daniel M. Roy, Gintare Karolina Dziugaite
Finally, we apply these bounds to the study of the Langevin dynamics algorithm, showing that conditioning on the super sample allows us to exploit information in the optimization trajectory to obtain tighter bounds based on hypothesis tests.
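The super-sample setting referred to here is the conditional mutual information (CMI) framework of Steinke & Zakynthinou (2020); for losses bounded in [0, 1] it yields, for example, the baseline bound

$$ \big|\mathbb{E}[\mathrm{gen}(A)]\big| \;\le\; \sqrt{\frac{2\,\mathrm{CMI}(A)}{n}}, $$

which bounds of the kind described above sharpen for particular algorithms such as Langevin dynamics.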
no code implementations • 3 Dec 2019 • Mahdi Haghifam, Vincent Y. F. Tan, Ashish Khisti
Motivated by real-world machine learning applications, we consider a statistical classification task in which test samples arrive sequentially.
1 code implementation • NeurIPS 2019 • Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, Daniel M. Roy
In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019).
no code implementations • ICLR 2019 • Ali Ramezani-Kebrya, Ashish Khisti, Ben Liang
While momentum-based methods, in conjunction with stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods.
no code implementations • 12 Sep 2018 • Ali Ramezani-Kebrya, Kimon Antonakopoulos, Volkan Cevher, Ashish Khisti, Ben Liang
While momentum-based accelerated variants of stochastic gradient descent (SGD) are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods.