Search Results for author: Tsachy Weissman

Found 16 papers, 5 papers with code

Txt2Vid: Ultra-Low Bitrate Compression of Talking-Head Videos via Text

1 code implementation • 26 Jun 2021 • Pulkit Tandon, Shubham Chandak, Pat Pataranutaporn, Yimeng Liu, Anesu M. Mapuranga, Pattie Maes, Tsachy Weissman, Misha Sra

Video represents the majority of internet traffic today, leading to a continuous technological arms race between generating higher-quality content, transmitting larger file sizes, and supporting network infrastructure.

Video Compression

Rate-Distortion Theoretic Model Compression: Successive Refinement for Pruning

no code implementations • 16 Feb 2021 • Berivan Isik, Albert No, Tsachy Weissman

We study the neural network (NN) compression problem, viewing the tension between the compression ratio and NN performance through the lens of rate-distortion theory.

Model Compression

Neural Network Compression for Noisy Storage Devices

no code implementations • 15 Feb 2021 • Berivan Isik, Kristy Choi, Xin Zheng, Tsachy Weissman, Stefano Ermon, H. -S. Philip Wong, Armin Alaghi

We propose a radically different approach that: (i) employs analog memories to maximize the capacity of each memory cell, and (ii) jointly optimizes model compression and physical storage to maximize memory utility.

Neural Network Compression

Learning to Bid Optimally and Efficiently in Adversarial First-price Auctions

no code implementations • 9 Jul 2020 • Yanjun Han, Zhengyuan Zhou, Aaron Flores, Erik Ordentlich, Tsachy Weissman

In this paper, we take an online learning angle and address the fundamental problem of learning to bid in repeated first-price auctions, where both the bidder's private valuations and other bidders' bids can be arbitrary.

Optimal No-regret Learning in Repeated First-price Auctions

no code implementations • 22 Mar 2020 • Yanjun Han, Zhengyuan Zhou, Tsachy Weissman

In this paper, by exploiting the structural properties of first-price auctions, we develop the first learning algorithm that achieves an $O(\sqrt{T}\log^2 T)$ regret bound when the bidder's private values are stochastically generated.

Multi-Armed Bandits
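As a rough, hedged illustration of the regret objective in the two auction papers above, the Python sketch below simulates repeated first-price auctions and compares a naive truthful bidder against the best fixed shading factor in hindsight. The valuation and competing-bid distributions, the shading grid, and the comparator class are illustrative assumptions, not the papers' setting or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000                      # number of repeated auctions
values = rng.uniform(0, 1, T)   # bidder's private valuations (illustrative)
others = rng.uniform(0, 1, T)   # highest competing bid each round (illustrative)

def utility(value, bid, competing):
    # First-price auction: win iff your bid exceeds the competing bid, pay your own bid.
    return (value - bid) if bid > competing else 0.0

# Naive strategy for illustration: bid the full value (earns zero utility when winning).
bids = values.copy()
realized = sum(utility(v, b, o) for v, b, o in zip(values, bids, others))

# Regret is measured against the best fixed shading factor in hindsight.
grid = np.linspace(0, 1, 101)
best_fixed = max(sum(utility(v, g * v, o) for v, o in zip(values, others)) for g in grid)
print(f"cumulative regret of the naive strategy: {best_fixed - realized:.1f}")
```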

Neural Joint Source-Channel Coding

1 code implementation • 19 Nov 2018 • Kristy Choi, Kedar Tatwawadi, Aditya Grover, Tsachy Weissman, Stefano Ermon

For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes.
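The classical result referred to here is the source-channel separation theorem. As a toy illustration of the channel-coding half of that separate pipeline (and not of the paper's learned joint coding scheme), the hedged sketch below protects already-compressed bits with a repetition code over a binary symmetric channel and decodes by majority vote; the channel model, code rate, and flip probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bsc(bits, flip_prob):
    """Binary symmetric channel: each bit is flipped independently with flip_prob."""
    flips = rng.random(bits.shape) < flip_prob
    return bits ^ flips.astype(bits.dtype)

# In the separate pipeline, these bits would already be the output of a source coder.
message = rng.integers(0, 2, size=1000, dtype=np.uint8)

# Channel coding: a rate-1/5 repetition code, decoded by majority vote.
encoded = np.repeat(message, 5)
received = bsc(encoded, flip_prob=0.1)
decoded = (received.reshape(-1, 5).sum(axis=1) >= 3).astype(np.uint8)

print("bit error rate after decoding:", np.mean(decoded != message))
```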

Local moment matching: A unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance

no code implementations • 23 Feb 2018 • Yanjun Han, Jiantao Jiao, Tsachy Weissman

We present \emph{Local Moment Matching (LMM)}, a unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance.

Entropy Rate Estimation for Markov Chains with Large State Space

no code implementations • NeurIPS 2018 • Yanjun Han, Jiantao Jiao, Chuan-Zheng Lee, Tsachy Weissman, Yihong Wu, Tiancheng Yu

For estimating the Shannon entropy of a distribution on $S$ elements with independent samples, [Paninski 2004] showed that the sample complexity is sublinear in $S$, and [Valiant and Valiant 2011] showed that consistent estimation of Shannon entropy is possible if and only if the sample size $n$ far exceeds $\frac{S}{\log S}$.

Language Modelling
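To make the sample-complexity discussion above concrete, here is a minimal plug-in (empirical-frequency) entropy estimator; it is not the estimator proposed in the paper, and the uniform source and the choice of sample sizes are illustrative assumptions. The plug-in estimate is visibly biased downward until the sample size is large relative to the support size, which is the regime the sublinear-sample-complexity results improve upon.

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_entropy(samples, support_size):
    """Plug-in (MLE) Shannon entropy estimate, in nats, from i.i.d. samples."""
    counts = np.bincount(samples, minlength=support_size)
    p = counts[counts > 0] / len(samples)
    return -np.sum(p * np.log(p))

S = 10_000                       # support size
true_entropy = np.log(S)         # uniform distribution, so H = log S

for n in (S // 10, S, 10 * S):   # sample sizes small and large relative to S
    samples = rng.integers(0, S, size=n)
    print(f"n = {n:6d}  plug-in = {plugin_entropy(samples, S):.3f}  true = {true_entropy:.3f}")
```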

Approximate Profile Maximum Likelihood

no code implementations • 19 Dec 2017 • Dmitri S. Pavlichin, Jiantao Jiao, Tsachy Weissman

We propose an efficient algorithm for approximate computation of the profile maximum likelihood (PML), a variant of maximum likelihood maximizing the probability of observing a sufficient statistic rather than the empirical sample.
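The sufficient statistic in question is the profile of the sample: the multiset of symbol multiplicities, with the symbol identities discarded. A minimal sketch of computing it (purely illustrative, not the paper's approximation algorithm for the PML):

```python
from collections import Counter

def profile(sample):
    """Profile of a sample: for each multiplicity k, the number of distinct
    symbols that appeared exactly k times. Discarding the symbol labels is
    what makes the profile sufficient for symmetric functionals such as
    entropy or support size."""
    multiplicities = Counter(sample).values()
    return dict(sorted(Counter(multiplicities).items()))

# "abracadabra": a:5, b:2, r:2, c:1, d:1 -> two singletons, two doubletons, one symbol seen 5 times
print(profile("abracadabra"))   # {1: 2, 2: 2, 5: 1}
```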

Estimating the Fundamental Limits is Easier than Achieving the Fundamental Limits

no code implementations • 5 Jul 2017 • Jiantao Jiao, Yanjun Han, Irena Fischer-Hwang, Tsachy Weissman

We show through case studies that it is easier to estimate the fundamental limits of data processing than to construct explicit algorithms to achieve those limits.

General Classification

Demystifying ResNet

no code implementations • 3 Nov 2016 • Sihan Li, Jiantao Jiao, Yanjun Han, Tsachy Weissman

We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones.
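As a minimal forward-pass sketch, a "shortcut of depth two" can be read as an identity connection skipping over two linear layers with a nonlinearity in between; the dimensions, initialization, and plain-NumPy formulation below are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Depth-two shortcut: the identity skips two layers, y = x + W2 @ relu(W1 @ x)."""
    return x + W2 @ relu(W1 @ x)

d = 16
x = rng.normal(size=d)
W1 = rng.normal(size=(d, d)) / np.sqrt(d)
W2 = rng.normal(size=(d, d)) / np.sqrt(d)
print(residual_block(x, W1, W2).shape)   # (16,)
```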

Beyond Maximum Likelihood: from Theory to Practice

no code implementations • 26 Sep 2014 • Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman

In a nutshell, a message of this recent work is that, for a wide class of functionals, the performance of these essentially optimal estimators with $n$ samples is comparable to that of the MLE with $n \ln n$ samples.

Universal Estimation of Directed Information

2 code implementations • 11 Jan 2012 • Jiantao Jiao, Haim H. Permuter, Lei Zhao, Young-Han Kim, Tsachy Weissman

Four estimators of the directed information rate between a pair of jointly stationary ergodic finite-alphabet processes are proposed, based on universal probability assignments.

Information Theory
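For intuition about the quantity being estimated, the hedged sketch below computes a naive first-order plug-in approximation of the directed information rate between two toy binary processes; it is not one of the paper's four universal-probability-assignment estimators, and the Markov-order-one contexts and the toy source are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Toy jointly stationary binary processes: Y is a noisy copy of X, so information
# flows in the direction X -> Y at roughly 1 - h(0.1) ~ 0.53 bits per symbol.
n = 200_000
x = rng.integers(0, 2, n)
y = x ^ (rng.random(n) < 0.1).astype(x.dtype)

def cond_dist(contexts, targets):
    """Empirical conditional distribution of a binary target given its context."""
    counts = defaultdict(lambda: np.zeros(2))
    for c, t in zip(contexts, targets):
        counts[c][t] += 1
    return {c: v / v.sum() for c, v in counts.items()}

# First-order plug-in approximation of the directed information rate:
#   I(X -> Y) ~ (1/n) * sum_t log2  p(y_t | y_{t-1}, x_t) / p(y_t | y_{t-1})
ctx_joint = list(zip(y[:-1], x[1:]))   # (previous Y, current X)
ctx_marg = list(y[:-1])                # previous Y only
p_joint = cond_dist(ctx_joint, y[1:])
p_marg = cond_dist(ctx_marg, y[1:])

rate = np.mean([np.log2(p_joint[cj][t] / p_marg[cm][t])
                for cj, cm, t in zip(ctx_joint, ctx_marg, y[1:])])
print(f"plug-in directed information rate: {rate:.3f} bits/symbol")
```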
