Search Results for author: Tsachy Weissman

Found 25 papers, 10 papers with code

Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs

1 code implementation • 24 Jun 2024 • Ashwinee Panda, Berivan Isik, Xiangyu Qi, Sanmi Koyejo, Tsachy Weissman, Prateek Mittal

The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time.

Instruction Following · Math

Adaptive Compression in Federated Learning via Side Information

1 code implementation • 22 Jun 2023 • Berivan Isik, Francesco Pase, Deniz Gunduz, Sanmi Koyejo, Tsachy Weissman, Michele Zorzi

The high communication cost of sending model updates from the clients to the server is a significant bottleneck for scalable federated learning (FL).

Federated Learning

PIM: Video Coding using Perceptual Importance Maps

no code implementations • 20 Dec 2022 • Evgenya Pergament, Pulkit Tandon, Oren Rippel, Lubomir Bourdev, Alexander G. Anderson, Bruno Olshausen, Tsachy Weissman, Sachin Katti, Kedar Tatwawadi

The contributions of this work are threefold: (1) we introduce a web-tool which allows scalable collection of fine-grained perceptual importance, by having users interactively paint spatio-temporal maps over encoded videos; (2) we use this tool to collect a dataset of 178 videos with a total of 14,443 frames of human-annotated spatio-temporal importance maps; and (3) we use our curated dataset to train a lightweight machine learning model which can predict these spatio-temporal importance regions.

Video Compression

Leveraging the Hints: Adaptive Bidding in Repeated First-Price Auctions

no code implementations • 5 Nov 2022 • Wei Zhang, Yanjun Han, Zhengyuan Zhou, Aaron Flores, Tsachy Weissman

In the past four years, a particularly important development in the digital advertising industry is the shift from second-price auctions to first-price auctions for online display ads.

Marketing

Sparse Random Networks for Communication-Efficient Federated Learning

1 code implementation • 30 Sep 2022 • Berivan Isik, Francesco Pase, Deniz Gunduz, Tsachy Weissman, Michele Zorzi

At the end of the training, the final model is a sparse network with random weights -- or a subnetwork inside the dense random network.

Federated Learning
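The core mechanism in the snippet above, a frozen random network where only a binary mask is trained, can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the logistic-regression setup, the straight-through mask update, and all hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen random weights; a "good" subnetwork (the first 5 features) exists
# by construction, and training touches only the mask scores, never w.
d = 20
w = rng.standard_normal(d)
X = rng.standard_normal((500, d))
true_mask = np.zeros(d)
true_mask[:5] = 1.0
y = (X @ (w * true_mask) > 0).astype(float)

scores = np.zeros(d)  # real-valued scores; the binary mask is (scores > 0)

def predict_proba(X, scores):
    mask = (scores > 0).astype(float)
    return 1.0 / (1.0 + np.exp(-X @ (w * mask)))

lr = 0.5
for _ in range(200):
    grad_logits = (predict_proba(X, scores) - y) / len(y)
    # Straight-through estimator: the gradient skips the hard threshold
    # and is scaled by the frozen weight that each score gates.
    scores -= lr * (X.T @ grad_logits) * w

acc = ((predict_proba(X, scores) > 0.5) == (y > 0.5)).mean()
print(f"mask-only training accuracy: {acc:.2f}")
```

Note that the final model is fully described by one bit per weight plus the seed of the random weight generator, which is the source of the communication savings the paper targets.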

An Interactive Annotation Tool for Perceptual Video Compression

1 code implementation • 8 May 2022 • Evgenya Pergament, Pulkit Tandon, Kedar Tatwawadi, Oren Rippel, Lubomir Bourdev, Bruno Olshausen, Tsachy Weissman, Sachin Katti, Alexander G. Anderson

We use this tool to collect data in the wild (10 videos, 17 users) and utilize the obtained importance maps in the context of x264 coding to demonstrate that the tool can indeed be used to generate videos which, at the same bitrate, look perceptually better in a subjective study and are 1.9 times more likely to be preferred by viewers.

Video Compression

Lossy Compression of Noisy Data for Private and Data-Efficient Learning

no code implementations • 7 Feb 2022 • Berivan Isik, Tsachy Weissman

In this sense, the utility of the data for learning is essentially maintained, while reducing storage and privacy leakage by quantifiable amounts.

Gender Classification · Privacy Preserving

Txt2Vid: Ultra-Low Bitrate Compression of Talking-Head Videos via Text

1 code implementation • 26 Jun 2021 • Pulkit Tandon, Shubham Chandak, Pat Pataranutaporn, Yimeng Liu, Anesu M. Mapuranga, Pattie Maes, Tsachy Weissman, Misha Sra

Video represents the majority of internet traffic today, driving a continual race between the generation of higher quality content, transmission of larger file sizes, and the development of network infrastructure.

Talking Face Generation · Talking Head Generation +2

An Information-Theoretic Justification for Model Pruning

1 code implementation • 16 Feb 2021 • Berivan Isik, Tsachy Weissman, Albert No

We study the neural network (NN) compression problem, viewing the tension between the compression ratio and NN performance through the lens of rate-distortion theory.

Data Compression · Model Compression
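The tension between compression ratio and performance described above can be made concrete with a toy experiment. This is a generic magnitude-pruning illustration, not the paper's rate-distortion analysis: prune a random linear layer at several keep ratios and watch the output distortion grow.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random single-layer "network" and some probe inputs.
W = rng.standard_normal((64, 32))
X = rng.standard_normal((200, 32))
ref = X @ W.T  # reference outputs of the uncompressed layer

def prune(W, keep_frac):
    """Zero out all but the largest-magnitude fraction of the weights."""
    k = int(W.size * keep_frac)
    thresh = np.sort(np.abs(W).ravel())[-k]
    return np.where(np.abs(W) >= thresh, W, 0.0)

dists = []
for keep in (1.0, 0.5, 0.25, 0.1):
    d = float(np.mean((X @ prune(W, keep).T - ref) ** 2))
    dists.append(d)
    print(f"keep {keep:4.2f} of weights -> output MSE {d:.3f}")
```

Each step down in the keep ratio raises the output distortion, tracing out the rate-distortion-style curve the paper studies through an information-theoretic lens.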

Neural Network Compression for Noisy Storage Devices

no code implementations • 15 Feb 2021 • Berivan Isik, Kristy Choi, Xin Zheng, Tsachy Weissman, Stefano Ermon, H. -S. Philip Wong, Armin Alaghi

Compression and efficient storage of neural network (NN) parameters is critical for applications that run on resource-constrained devices.

Neural Network Compression

Learning to Bid Optimally and Efficiently in Adversarial First-price Auctions

no code implementations • 9 Jul 2020 • Yanjun Han, Zhengyuan Zhou, Aaron Flores, Erik Ordentlich, Tsachy Weissman

In this paper, we take an online learning angle and address the fundamental problem of learning to bid in repeated first-price auctions, where both the bidder's private valuations and other bidders' bids can be arbitrary.

Optimal No-regret Learning in Repeated First-price Auctions

no code implementations • 22 Mar 2020 • Yanjun Han, Zhengyuan Zhou, Tsachy Weissman

In this paper, we develop the first learning algorithm that achieves a near-optimal $\widetilde{O}(\sqrt{T})$ regret bound, by exploiting two structural properties of first-price auctions, i.e., the specific feedback structure and payoff function.

Thompson Sampling
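As a point of contrast, a naive learner in this setting is easy to simulate. The sketch below is a generic epsilon-greedy bidder over a discretized bid grid, not the paper's near-optimal algorithm (which exploits the auction's feedback and payoff structure); regret is measured against the best fixed bid in hindsight, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 5000
value = 1.0                        # bidder's fixed private valuation
others = rng.uniform(0.0, 1.0, T)  # highest competing bid in each round
grid = np.linspace(0.0, 1.0, 21)   # discretized bid grid

def reward(b, m):
    """First-price auction payoff: if you win, you pay your own bid."""
    return value - b if b > m else 0.0

est = np.zeros(len(grid))          # running mean reward per bid
cnt = np.zeros(len(grid))
gained = 0.0
for t in range(T):
    # Explore a random bid 10% of the time, otherwise exploit.
    a = rng.integers(len(grid)) if rng.random() < 0.1 else int(np.argmax(est))
    r = reward(grid[a], others[t])
    gained += r
    cnt[a] += 1
    est[a] += (r - est[a]) / cnt[a]

# Regret relative to the best single bid in hindsight.
best = max(sum(reward(b, m) for m in others) for b in grid)
print(f"epsilon-greedy regret: {best - gained:.1f} over {T} rounds")
```

The accumulated regret of this baseline grows noticeably faster than the $\widetilde{O}(\sqrt{T})$ rate the paper proves achievable.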

Neural Joint Source-Channel Coding

1 code implementation • 19 Nov 2018 • Kristy Choi, Kedar Tatwawadi, Aditya Grover, Tsachy Weissman, Stefano Ermon

For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes.

Decoder
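The separation principle mentioned above (compress the source first, then protect it against channel noise) is easy to demonstrate on the channel-coding side alone. A minimal sketch, assuming a binary symmetric channel and a 3x repetition code; this is textbook coding, not the neural scheme proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with prob. p."""
    return bits ^ (rng.random(bits.shape) < p)

n, p = 10000, 0.1
msg = rng.integers(0, 2, n)

# Uncoded transmission vs. a 3x repetition code with majority decoding.
uncoded_err = float(np.mean(bsc(msg, p) != msg))

coded = np.repeat(msg, 3)                  # send every bit three times
received = bsc(coded, p).reshape(n, 3)
decoded = (received.sum(axis=1) >= 2).astype(msg.dtype)
coded_err = float(np.mean(decoded != msg))

print(f"uncoded bit error {uncoded_err:.3f}, repetition-coded {coded_err:.3f}")
```

The coded error rate drops from roughly $p$ to roughly $3p^2(1-p) + p^3$, at the cost of tripling the rate; the joint source-channel approach in the paper trades these off in a single learned code.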

NECST: Neural Joint Source-Channel Coding

no code implementations • 27 Sep 2018 • Kristy Choi, Kedar Tatwawadi, Tsachy Weissman, Stefano Ermon

For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes.

Decoder

Local moment matching: A unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance

no code implementations • 23 Feb 2018 • Yanjun Han, Jiantao Jiao, Tsachy Weissman

We present \emph{Local Moment Matching (LMM)}, a unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance.

Entropy Rate Estimation for Markov Chains with Large State Space

no code implementations • NeurIPS 2018 • Yanjun Han, Jiantao Jiao, Chuan-Zheng Lee, Tsachy Weissman, Yihong Wu, Tiancheng Yu

For estimating the Shannon entropy of a distribution on $S$ elements with independent samples, [Paninski2004] showed that the sample complexity is sublinear in $S$, and [Valiant--Valiant2011] showed that consistent estimation of Shannon entropy is possible if and only if the sample size $n$ far exceeds $\frac{S}{\log S}$.

Language Modelling
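The baseline these sample-complexity results improve on is the plug-in (MLE) estimator, which simply computes the entropy of the empirical distribution. Its well-known downward bias in the undersampled regime is visible in this sketch; the alphabet size and sample sizes below are illustrative.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

def plugin_entropy(samples):
    """Entropy (in nats) of the empirical distribution of the samples."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

S = 1000             # alphabet size
true_H = np.log(S)   # entropy of the uniform distribution on S symbols
results = {}
for n in (100, 1000, 100000):
    results[n] = plugin_entropy(rng.integers(0, S, n))
    print(f"n={n:6d}: plug-in estimate {results[n]:.2f} vs true {true_H:.2f}")
```

When $n \ll S$ the plug-in estimate falls far below the truth, which is why consistent estimation requires either $n \gg S/\log S$ samples and a cleverer estimator, or far more samples for the plug-in.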

Approximate Profile Maximum Likelihood

no code implementations • 19 Dec 2017 • Dmitri S. Pavlichin, Jiantao Jiao, Tsachy Weissman

We propose an efficient algorithm for approximate computation of the profile maximum likelihood (PML), a variant of maximum likelihood maximizing the probability of observing a sufficient statistic rather than the empirical sample.
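The sufficient statistic in question is the sample's profile: the multiset of symbol multiplicities, with the symbol identities forgotten. A minimal sketch (the function name is illustrative):

```python
from collections import Counter

def profile(sample):
    """Multiset of multiplicities, ignoring which symbol had which count."""
    return sorted(Counter(sample).values(), reverse=True)

# Two samples with the same profile are indistinguishable to PML.
print(profile("abracadabra"))   # [5, 2, 2, 1, 1]
print(profile("aaaaabbccde"))   # [5, 2, 2, 1, 1]
```

PML then seeks the distribution maximizing the probability of observing this profile, rather than the probability of the specific labeled sample that the ordinary MLE maximizes.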

Estimating the Fundamental Limits is Easier than Achieving the Fundamental Limits

no code implementations • 5 Jul 2017 • Jiantao Jiao, Yanjun Han, Irena Fischer-Hwang, Tsachy Weissman

We show through case studies that it is easier to estimate the fundamental limits of data processing than to construct explicit algorithms to achieve those limits.

Binary Classification · Data Compression +1

Demystifying ResNet

no code implementations • 3 Nov 2016 • Sihan Li, Jiantao Jiao, Yanjun Han, Tsachy Weissman

We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones.
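The "shortcuts that have depth two" in the snippet above are identity skips spanning two weight layers. A toy NumPy sketch (not the paper's setup) also shows why the zero initial point is special: there, the block reduces exactly to the identity map.

```python
import numpy as np

rng = np.random.default_rng(5)

def residual_block(x, W1, W2):
    """y = x + W2 @ relu(W1 @ x): an identity shortcut over two layers."""
    return x + W2 @ np.maximum(W1 @ x, 0.0)

d = 16
x = rng.standard_normal(d)
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))

y = residual_block(x, W1, W2)  # generic forward pass
zero = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))

# At the zero initial point the block is exactly the identity map,
# the regime in which the paper analyzes the Hessian's conditioning.
assert np.allclose(zero, x)
print("zero-initialized residual block acts as the identity map")
```

Because the map stays close to the identity near zero initialization, stacking many such blocks does not degrade the conditioning of the loss, which is the depth-invariance result the paper proves.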

Beyond Maximum Likelihood: from Theory to Practice

no code implementations • 26 Sep 2014 • Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman

In a nutshell, a message of this recent work is that, for a wide class of functionals, the performance of these essentially optimal estimators with $n$ samples is comparable to that of the MLE with $n \ln n$ samples.

Universal Estimation of Directed Information

3 code implementations • 11 Jan 2012 • Jiantao Jiao, Haim H. Permuter, Lei Zhao, Young-Han Kim, Tsachy Weissman

Four estimators of the directed information rate between a pair of jointly stationary ergodic finite-alphabet processes are proposed, based on universal probability assignments.

Information Theory
