no code implementations • 26 Nov 2022 • Wenbin Li, Meihao Kong, Xuesong Yang, Lei Wang, Jing Huo, Yang Gao, Jiebo Luo
In this study, we present a new unified contrastive learning representation framework (named UniCLR) suitable for all the above four kinds of methods from a novel perspective of basic affinity matrix.
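The abstract does not spell out how the affinity matrix is formed, but in contrastive learning it is commonly the pairwise similarity matrix of a batch of embeddings. A minimal sketch of that common construction (the function name `affinity_matrix` and the temperature value are illustrative assumptions, not UniCLR's actual API):

```python
import numpy as np

def affinity_matrix(z, temperature=0.5):
    """Pairwise cosine-similarity (affinity) matrix of L2-normalised embeddings.

    z: array of shape (batch, dim). Returns a (batch, batch) matrix whose
    entry (i, j) is the temperature-scaled cosine similarity of samples i, j.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalise rows
    return (z @ z.T) / temperature

# Toy batch of 4 embeddings in R^3.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 3))
A = affinity_matrix(z)
print(A.shape)  # (4, 4); diagonal is 1 / temperature
```

Contrastive, non-contrastive, and redundancy-reduction losses can all be written as functions of such a matrix, which is the kind of unification the abstract alludes to.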
no code implementations • 25 Mar 2022 • Meihao Kong, Jing Huo, Wenbin Li, Jing Wu, Yu-Kun Lai, Yang Gao
(2) Using iterative magnitude pruning, we find the matching subnetworks at 89.2% sparsity in AdaIN and 73.7% sparsity in SANet, which demonstrates that style transfer models can play lottery tickets too.
1 code implementation • 22 Jul 2021 • Wenbin Li, Xuesong Yang, Meihao Kong, Lei Wang, Jing Huo, Yang Gao, Jiebo Luo
However, in small data regimes, we cannot obtain a sufficient number of negative pairs, nor can we effectively avoid over-fitting when negatives are not used at all.