1 code implementation • 24 Nov 2023 • Yige Yuan, Bingbing Xu, Liang Hou, Fei Sun, Huawei Shen, Xueqi Cheng
To address this, we propose a novel energy-based perspective, enhancing the model's perception of target data distributions without requiring access to training data or processes.
1 code implementation • 25 May 2023 • Yige Yuan, Bingbing Xu, Bo Lin, Liang Hou, Fei Sun, Huawei Shen, Xueqi Cheng
The generalization of neural networks is a central challenge in machine learning, especially concerning performance under distributions that differ from the training distribution.
no code implementations • 20 Nov 2022 • Yige Yuan, Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Wen Zheng, Xueqi Cheng
Guided by this bound, we design a GCL framework named InfoAdv with enhanced generalization ability, which jointly optimizes the generalization metric and InfoMax to strike the right balance between pretext-task fitting and generalization on downstream tasks.
no code implementations • 16 Nov 2022 • Yang Li, Bingbing Xu, Qi Cao, Yige Yuan, Huawei Shen
Since previous studies either lack variance analysis or focus only on a particular sampling paradigm, we first propose a unified node sampling variance analysis framework and analyze the core challenge of "circular dependency" in deriving the minimum-variance sampler, i.e., the sampling probability depends on node embeddings, while node embeddings cannot be calculated until sampling is finished.
1 code implementation • NeurIPS 2023 • Liang Hou, Qi Cao, Yige Yuan, Songtao Zhao, Chongyang Ma, Siyuan Pan, Pengfei Wan, Zhongyuan Wang, Huawei Shen, Xueqi Cheng
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.