no code implementations • 29 Sep 2021 • Junyu Chen, Evren Asma, Chung Chan
In this study, we present Targeted Gradient Descent (TGD), a novel fine-tuning method that extends a pre-trained network to a new task without revisiting data from the previous task, while preserving the knowledge acquired during the earlier training.
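The core idea can be sketched as gradient masking during fine-tuning: gradients for weights flagged as knowledge-preserving are zeroed, so only the remaining weights adapt to the new task. This is a minimal illustrative sketch, not the paper's exact kernel-selection procedure; `tgd_step` and its arguments are hypothetical names.

```python
def tgd_step(weights, grads, keep_mask, lr=0.1):
    """One hedged sketch of a targeted update: gradients of weights with
    keep_mask == 1 are zeroed out, freezing them at their pre-trained
    values, while the unmasked weights follow ordinary gradient descent."""
    return [w - lr * g * (1 - m) for w, g, m in zip(weights, grads, keep_mask)]

# Toy example: preserve the first and third weights, adapt the second.
w_new = tgd_step([1.0, 2.0, 3.0], [0.5, 0.5, 0.5], [1, 0, 1])
```

Here only the second weight moves (2.0 → 1.95); the masked weights keep their pre-trained values exactly.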
1 code implementation • 30 May 2019 • Chung Chan, Ali Al-Bashabsheh, Hing Pang Huang, Michael Lim, Da Sun Handason Tam, Chao Zhao
In particular, we show that MI-NEE reduces to MINE in the special case when the reference distribution is the product of marginal distributions, but faster convergence is possible by choosing the uniform distribution as the reference distribution instead.
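Both estimators build on the Donsker-Varadhan lower bound E_P[f] − log E_Q[e^f], where Q is the reference distribution. A hedged sketch below evaluates that bound exactly on a finite alphabet with a fixed critic, rather than training the neural estimator the papers describe; the toy distribution and names are illustrative. With the optimal critic and the product-of-marginals reference (the MINE special case), the bound recovers the mutual information.

```python
import math

def dv_bound(joint, ref, f):
    # Donsker-Varadhan lower bound E_P[f] - log E_Q[e^f],
    # computed exactly over a finite alphabet.
    e_p = sum(p * f(x, y) for (x, y), p in joint.items())
    e_q = sum(q * math.exp(f(x, y)) for (x, y), q in ref.items())
    return e_p - math.log(e_q)

# Toy joint on {0,1}^2 with uniform marginals.
joint = {(0, 0): 0.4, (1, 1): 0.4, (0, 1): 0.1, (1, 0): 0.1}
marg_ref = {xy: 0.25 for xy in joint}  # product of marginals (MINE's choice)

# Optimal critic f = log(p/q); the bound is then tight and equals I(X;Y).
critic = lambda x, y: math.log(joint[(x, y)] / marg_ref[(x, y)])
mi = dv_bound(joint, marg_ref, critic)
```

Swapping `marg_ref` for a different reference (e.g. uniform over the joint alphabet) is the degree of freedom MI-NEE exploits for faster convergence.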
no code implementations • 18 Jan 2017 • Chung Chan, Ali Al-Bashabsheh, Qiaoqiao Zhou
An agglomerative clustering of random variables is proposed, where clusters of random variables sharing the maximum amount of multivariate mutual information are merged successively to form larger clusters.
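The merging loop can be sketched as follows, using pairwise empirical mutual information between (tuple-valued) fused clusters as a simple stand-in for the paper's multivariate mutual information; the function names and threshold-based stopping rule are illustrative assumptions, not the paper's exact procedure.

```python
import math
from collections import Counter
from itertools import combinations

def emp_mi(xs, ys):
    # Empirical mutual information (nats) between two discrete sequences.
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def cluster_mi(data, clusters, i, j):
    # Treat each cluster of variables as a single tuple-valued variable.
    xs = list(zip(*(data[v] for v in clusters[i])))
    ys = list(zip(*(data[v] for v in clusters[j])))
    return emp_mi(xs, ys)

def agglomerate(data, threshold):
    # Greedily merge the pair of clusters sharing the most information,
    # stopping once no pair exceeds the threshold.
    clusters = [[v] for v in range(len(data))]
    while len(clusters) > 1:
        (i, j), score = max(
            ((p, cluster_mi(data, clusters, *p))
             for p in combinations(range(len(clusters)), 2)),
            key=lambda t: t[1])
        if score < threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Variables 0 and 1 are identical (MI = ln 2); variable 2 is independent.
clusters = agglomerate([[0, 0, 1, 1] * 5, [0, 0, 1, 1] * 5, [0, 1, 0, 1] * 5],
                       threshold=0.5)
```

The run above merges the two dependent variables into one cluster and leaves the independent one alone.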
no code implementations • 27 Sep 2016 • Chung Chan, Ali Al-Bashabsheh, Qiaoqiao Zhou, Tie Liu
The feature-selection problem is formulated from an information-theoretic perspective.
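One classical information-theoretic formulation scores each candidate feature by its empirical mutual information with the label; this sketch illustrates that baseline criterion only, not necessarily the specific formulation of the paper, and the data and names are illustrative.

```python
import math
from collections import Counter

def emp_mi(xs, ys):
    # Empirical mutual information (nats) between discrete sequences.
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

# Toy data: one feature copies the label, the other is independent noise.
label = [0, 0, 1, 1] * 5
features = {
    "relevant": [0, 0, 1, 1] * 5,  # MI with label = ln 2
    "noise":    [0, 1, 0, 1] * 5,  # MI with label = 0
}
ranked = sorted(features, key=lambda k: -emp_mi(features[k], label))
```

Features with higher mutual information with the label rank first, so `"relevant"` precedes `"noise"`.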