Search Results for author: Cheuk Ting Li

Found 6 papers, 3 papers with code

An Interpretable Evaluation of Entropy-based Novelty of Generative Models

no code implementations • 27 Feb 2024 • Jingwei Zhang, Cheuk Ting Li, Farzan Farnia

The rapid development of generative model frameworks and architectures requires principled methods for evaluating a model's novelty compared to a reference dataset or baseline generative models.

Compression with Exact Error Distribution for Federated Learning

no code implementations • 31 Oct 2023 • Mahmoud Hegazy, Rémi Leluc, Cheuk Ting Li, Aymeric Dieuleveut

Compression schemes have been extensively used in Federated Learning (FL) to reduce the communication cost of distributed learning.

Federated Learning
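
The snippet above only states the motivation, and the paper's own scheme is not reproduced here. As a rough, generic illustration of how a compressor can guarantee an exact error distribution, the sketch below implements classical subtractive dithered quantization with a dither shared between client and server, for which the reconstruction error is exactly uniform on [-Δ/2, Δ/2] and independent of the input. The function names, step size, and seed handling are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shared_dither(shape, delta, seed):
    """Dither shared by client and server through a common PRNG seed."""
    return np.random.default_rng(seed).uniform(-delta / 2, delta / 2, size=shape)

def compress(x, u, delta):
    """Client: transmit the integer indices of the dithered quantizer."""
    return np.round((x + u) / delta).astype(np.int64)

def decompress(q, u, delta):
    """Server: rescale and subtract the dither. The error x_hat - x is exactly
    uniform on [-delta/2, delta/2] and independent of x (subtractive dithering)."""
    return q * delta - u

# Toy usage on a gradient-like vector.
delta, seed = 0.5, 1234
grad = np.random.default_rng(0).normal(size=100_000)
u = shared_dither(grad.shape, delta, seed)   # both sides regenerate the same dither
q = compress(grad, u, delta)
grad_hat = decompress(q, u, delta)
err = grad_hat - grad
print(err.min(), err.max())  # stays within [-delta/2, delta/2]
```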

An Automated Theorem Proving Framework for Information-Theoretic Results

1 code implementation • 29 Jan 2021 • Cheuk Ting Li

We present a versatile automated theorem proving framework capable of automated discovery, simplification and proofs of inner and outer bounds in network information theory, deduction of properties of information-theoretic quantities (e.g. Wyner and Gács-Körner common information), and discovery of non-Shannon-type inequalities, under a unified framework.

Automated Theorem Proving Information Theory
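
The linked repository is the author's framework itself; the sketch below is not its API. Under assumed names, and using scipy.optimize.linprog, it only illustrates the basic mechanism behind automated proofs of Shannon-type inequalities: write a candidate inequality as a linear functional of joint entropies and check by linear programming that it is non-negative on the cone cut out by the elemental (Shannon) inequalities.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

def subsets(n):
    """All nonempty subsets of {0, ..., n-1}, in a fixed order."""
    return [frozenset(c) for r in range(1, n + 1) for c in combinations(range(n), r)]

def shannon_cone(n):
    """Rows A of the elemental (Shannon) inequalities A @ h >= 0 over the joint-entropy
    vector h, with one coordinate per nonempty subset (H of the empty set is 0)."""
    idx = {s: k for k, s in enumerate(subsets(n))}
    full = frozenset(range(n))
    rows = []
    # Monotonicity: H(all) - H(all \ {i}) >= 0.
    for i in range(n):
        row = np.zeros(len(idx))
        row[idx[full]] += 1
        rest = full - {i}
        if rest:
            row[idx[rest]] -= 1
        rows.append(row)
    # Submodularity: H(K+i) + H(K+j) - H(K+i+j) - H(K) >= 0.
    for i, j in combinations(range(n), 2):
        others = [k for k in range(n) if k not in (i, j)]
        for r in range(len(others) + 1):
            for K in combinations(others, r):
                K = frozenset(K)
                row = np.zeros(len(idx))
                row[idx[K | {i}]] += 1
                row[idx[K | {j}]] += 1
                row[idx[K | {i, j}]] -= 1
                if K:
                    row[idx[K]] -= 1
                rows.append(row)
    return np.array(rows), idx

def is_shannon_type(c, n):
    """True if c @ h >= 0 on the whole Shannon cone, i.e. the inequality is
    provable from the elemental inequalities alone."""
    A, _ = shannon_cone(n)
    res = linprog(c, A_ub=-A, b_ub=np.zeros(len(A)),
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 0 and abs(res.fun) < 1e-9  # minimum 0 => non-negative on the cone

# Example: I(X0; X1) >= 0, i.e. H(X0) + H(X1) - H(X0, X1) >= 0, with n = 2.
_, idx = shannon_cone(2)
c = np.zeros(len(idx))
c[idx[frozenset({0})]] += 1
c[idx[frozenset({1})]] += 1
c[idx[frozenset({0, 1})]] -= 1
print(is_shannon_type(c, 2))  # True
```

If the LP is instead unbounded below, the inequality is not implied by the elemental inequalities alone; it may still be a valid non-Shannon-type inequality, whose discovery the paper's framework also addresses.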

Infinite Divisibility of Information

1 code implementation • 13 Aug 2020 • Cheuk Ting Li

A random variable $X$ is called informationally infinitely divisible if, for any $n$, there exists an i.i.d. sequence of random variables $Z_{1},\ldots, Z_{n}$ that contains the same information as $X$, i.e., there exists an injective function $f$ such that $X=f(Z_{1},\ldots, Z_{n})$.

Information Theory Probability 94A15, 60F05
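
As a toy illustration of the definition quoted above (not the paper's construction or its approximation results), a uniform random variable on 2^n values is exactly informationally divisible into n pieces: it is an injective function of n i.i.d. fair bits. The short script below, with illustrative names, checks this.

```python
from itertools import product

import numpy as np

n = 4
rng = np.random.default_rng(0)

# Z_1, ..., Z_n: an i.i.d. sequence of fair bits.
Z = rng.integers(0, 2, size=n)

def f(bits):
    """Injective f: read the bit sequence as a binary expansion."""
    return int("".join(str(b) for b in bits), 2)

# X contains the same information as (Z_1, ..., Z_n): X = f(Z) with f injective.
X = f(Z)
print("Z =", Z, "-> X =", X)

# Sanity check that f is injective on {0, 1}^n.
assert len({f(bits) for bits in product((0, 1), repeat=n)}) == 2 ** n
```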

Efficient Approximate Minimum Entropy Coupling of Multiple Probability Distributions

1 code implementation • 14 Jun 2020 • Cheuk Ting Li

More precisely, we construct a coupling with entropy within 2 bits of the entropy of the greatest lower bound of $p_{1},\ldots, p_{m}$ with respect to majorization.

Information Theory Probability
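
The paper couples m distributions with entropy within 2 bits of that of their greatest lower bound under majorization; the sketch below does not reproduce that construction. It only shows a standard greedy pairwise heuristic for two distributions, which conveys what a low-entropy coupling is: a joint distribution with the prescribed marginals whose entropy is kept small by matching large probability masses with each other. Names and example distributions are illustrative.

```python
import numpy as np

def greedy_coupling(p, q):
    """Greedy low-entropy coupling of p and q: repeatedly match the largest
    remaining mass of p with the largest remaining mass of q."""
    p = np.asarray(p, dtype=float).copy()
    q = np.asarray(q, dtype=float).copy()
    joint = np.zeros((len(p), len(q)))
    for _ in range(len(p) + len(q)):        # at most len(p) + len(q) - 1 useful steps
        i, j = int(np.argmax(p)), int(np.argmax(q))
        m = min(p[i], q[j])
        if m <= 1e-12:
            break
        joint[i, j] += m
        p[i] -= m
        q[j] -= m
    return joint                            # row sums = p, column sums = q

def entropy_bits(dist):
    dist = np.asarray(dist, dtype=float).ravel()
    dist = dist[dist > 1e-12]
    return float(-(dist * np.log2(dist)).sum())

p = np.array([0.5, 0.25, 0.25])
q = np.array([0.4, 0.4, 0.2])
joint = greedy_coupling(p, q)
print(joint)
print("H(coupling) =", entropy_bits(joint),
      "vs max(H(p), H(q)) =", max(entropy_bits(p), entropy_bits(q)))
```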

Locally Weighted Learning for Naive Bayes Classifier

no code implementations • 21 Dec 2014 • Kim-Hung Li, Cheuk Ting Li

We learn from this phenomenon that when the size of the training data is large, we should either relax the conditional independence assumption or apply NB to a "reduced" data set, for example by using NB as a local model. A sketch of this local-model idea follows below.
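
As a rough sketch of the "NB as a local model" idea in the snippet above (not the paper's exact weighting scheme), the code below fits a separate Gaussian Naive Bayes classifier on the k nearest training neighbours of each test point. The use of scikit-learn, the Iris data, and the neighbourhood size k are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k = 40  # neighbourhood size (illustrative)
nn = NearestNeighbors(n_neighbors=k).fit(X_tr)

preds = []
for x in X_te:
    # "Reduced" data set: the k training points closest to the query.
    idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    local_nb = GaussianNB().fit(X_tr[idx], y_tr[idx])   # NB as a local model
    preds.append(local_nb.predict(x.reshape(1, -1))[0])

print("local-NB accuracy:", np.mean(np.array(preds) == y_te))
```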
