no code implementations • 23 May 2023 • Renjie Pi, Jiahui Gao, Shizhe Diao, Rui Pan, Hanze Dong, Jipeng Zhang, Lewei Yao, Jianhua Han, Hang Xu, Lingpeng Kong, Tong Zhang
Overall, our proposed paradigm and DetGPT demonstrate the potential for more sophisticated and intuitive interactions between humans and machines.
1 code implementation • 13 Apr 2023 • Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, Tong Zhang
Prior research has primarily employed Reinforcement Learning from Human Feedback (RLHF) to address this problem, wherein generative models are fine-tuned with RL algorithms guided by a reward model trained on human feedback.
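For orientation, the sketch below illustrates the RLHF-style loop this sentence describes: sample responses from a policy, score them with a reward model, and increase the log-probability of high-reward samples. It uses toy stand-in modules and a plain REINFORCE-style update rather than PPO; every name, shape, and module here is an illustrative assumption, not the paper's implementation.

```python
# Toy REINFORCE-style sketch of reward-model-guided fine-tuning (RLHF-like).
# All modules are stand-ins: a one-token "policy" and a linear reward model.
import torch
import torch.nn as nn

vocab, d = 100, 32
p_embed = nn.Embedding(vocab, d)
policy_head = nn.Linear(d, vocab)                # policy: distribution over response tokens
r_embed = nn.Embedding(vocab, d)
reward_head = nn.Linear(2 * d, 1)                # reward model: scores (prompt, response) pairs
opt = torch.optim.Adam(list(p_embed.parameters()) + list(policy_head.parameters()), lr=1e-3)

prompts = torch.randint(0, vocab, (16,))         # toy one-token prompts
dist = torch.distributions.Categorical(logits=policy_head(p_embed(prompts)))
responses = dist.sample()                        # sample a one-token "response" per prompt
with torch.no_grad():                            # the reward model only supplies a training signal
    pair = torch.cat([r_embed(prompts), r_embed(responses)], dim=-1)
    rewards = reward_head(pair).squeeze(-1)

loss = -(dist.log_prob(responses) * rewards).mean()  # raise log-prob of high-reward responses
opt.zero_grad(); loss.backward(); opt.step()
```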
no code implementations • 2 Mar 2023 • Shihong Ding, Hanze Dong, Cong Fang, Zhouchen Lin, Tong Zhang
We consider the general nonconvex nonconcave minimax problem over continuous variables.
1 code implementation • 3 Jan 2023 • Yanwei Fu, Xiaomei Wang, Hanze Dong, Yu-Gang Jiang, Meng Wang, Xiangyang Xue, Leonid Sigal
Despite significant progress in object categorization in recent years, a number of important challenges remain: mainly, the ability to learn from limited labeled data and to recognize object classes within a large, potentially open, set of labels.
no code implementations • 25 Nov 2022 • Hanze Dong, Xi Wang, Yong Lin, Tong Zhang
With the popularity of Stein variational gradient descent (SVGD), the focus of particle-based VI algorithms has been on the properties of functions in a Reproducing Kernel Hilbert Space (RKHS) used to approximate the gradient flow.
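As a concrete reference point, here is a minimal NumPy sketch of the SVGD update mentioned above, with a fixed-bandwidth RBF kernel and a standard Gaussian target; the bandwidth, target, step size, and particle count are arbitrary choices for illustration.

```python
import numpy as np

def rbf_kernel(x, h=1.0):
    diff = x[:, None, :] - x[None, :, :]             # pairwise differences, shape (n, n, d)
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))  # RBF kernel matrix, shape (n, n)
    # sum over j of grad_{x_j} k(x_j, x_i) = -(1 / h^2) * sum_j K[j, i] * (x_j - x_i)
    grad_K = -(K[:, :, None] * diff).sum(0) / h ** 2
    return K, grad_K

def svgd_step(x, score, step=0.5, h=1.0):
    # phi(x_i) = (1/n) * sum_j [ K[j, i] * score(x_j) + grad_{x_j} K[j, i] ]
    K, grad_K = rbf_kernel(x, h)
    phi = (K @ score(x) + grad_K) / x.shape[0]
    return x + step * phi

rng = np.random.default_rng(0)
particles = rng.normal(size=(50, 2)) * 3 + 5         # particles started far from the target
for _ in range(500):
    particles = svgd_step(particles, score=lambda x: -x)  # score of a standard Gaussian is -x
print(particles.mean(0))                              # particle mean drifts toward (0, 0)
```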
1 code implementation • 21 Nov 2022 • Hanze Dong, Shizhe Diao, Weizhong Zhang, Tong Zhang
The resulting method is significantly more powerful than the standard normalizing flow approach for generating data distributions with multiple modes.
no code implementations • 29 Sep 2022 • Songtao Liu, Rex Ying, Hanze Dong, Lu Lin, Jinghui Chen, Dinghao Wu
However, the analysis of the implicit denoising effect in graph neural networks remains open.
no code implementations • CVPR 2022 • Yong Lin, Hanze Dong, Hao Wang, Tong Zhang
Generalization under distributional shift is an open challenge for machine learning.
1 code implementation • 8 Sep 2021 • Songtao Liu, Rex Ying, Hanze Dong, Lanqing Li, Tingyang Xu, Yu Rong, Peilin Zhao, Junzhou Huang, Dinghao Wu
To address this, we propose a simple and efficient data augmentation strategy, local augmentation, which learns the distribution of the neighbors' node features conditioned on the central node's feature and enhances the GNN's expressive power with the generated features.
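A rough sketch of that idea follows: a generator (a plain MLP placeholder here, whereas the paper's generator is a learned conditional model) produces an extra neighbor-like feature conditioned on each central node's feature, and a GNN layer consumes it through a separate linear path. The layer, shapes, and toy graph below are assumptions for illustration only, not the paper's architecture.

```python
import torch
import torch.nn as nn

d = 16
# Placeholder generator: maps a central node's feature to one synthetic neighbor feature.
generator = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))

class AugmentedGNNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin_orig = nn.Linear(d_in, d_out)  # path for aggregated observed neighbors
        self.lin_gen = nn.Linear(d_in, d_out)   # path for the generated feature

    def forward(self, x, adj, x_gen):
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        agg = adj @ x / deg                     # mean aggregation over observed neighbors
        return torch.relu(self.lin_orig(agg) + self.lin_gen(x_gen))

x = torch.randn(5, d)                           # toy node features
adj = (torch.rand(5, 5) > 0.5).float()          # toy adjacency
x_gen = generator(x)                            # features generated conditioned on each node
out = AugmentedGNNLayer(d, 32)(x, adj, x_gen)   # (5, 32) augmented node representations
```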
1 code implementation • 27 Dec 2020 • Cong Fang, Hanze Dong, Tong Zhang
Deep learning has received considerable empirical successes in recent years.
1 code implementation • 6 Oct 2020 • Xinwei Shen, Furui Liu, Hanze Dong, Qing Lian, Zhitang Chen, Tong Zhang
This paper proposes a Disentangled gEnerative cAusal Representation (DEAR) learning method under appropriate supervision.
no code implementations • 11 Nov 2019 • Songtao Liu, Lingwei Chen, Hanze Dong, ZiHao Wang, Dinghao Wu, Zengfeng Huang
The Graph Convolutional Network (GCN) has been recognized as one of the most effective graph models for semi-supervised learning, but it extracts only first-order or low-order neighborhood information through information propagation, which leads to a performance drop-off for deeper structures.
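For reference, the propagation step in question is the standard single-layer GCN rule, shown below as a NumPy sketch with made-up sizes; after one layer, each node has only mixed information from its first-order neighborhood, which is the limitation the sentence points to.

```python
import numpy as np

def gcn_layer(A, H, W):
    # H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W ): one hop of neighborhood mixing
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric adjacency without self-loops
H = rng.normal(size=(6, 8))                         # node features
W = rng.normal(size=(8, 4))                         # layer weights
print(gcn_layer(A, H, W).shape)                     # (6, 4): one layer sees only 1-hop neighbors
```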
no code implementations • 25 Oct 2019 • Cong Fang, Hanze Dong, Tong Zhang
Recently, over-parameterized neural networks have been extensively analyzed in the literature.
no code implementations • ICLR 2019 • Hanze Dong, Yanwei Fu, Sung Ju Hwang, Leonid Sigal, Xiangyang Xue
This paper studies the problem of Generalized Zero-Shot Learning (G-ZSL), whose goal is to classify instances belonging to both seen and unseen classes at test time.
no code implementations • 28 May 2017 • Yanwei Fu, Hanze Dong, Yu-feng Ma, Zhengjun Zhang, Xiangyang Xue
To solve this problem, we propose the Extreme Value Learning (EVL) formulation to learn the mapping from visual features to the semantic space.
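For context, the sketch below shows only the bare visual-to-semantic regression setup that such a mapping builds on: a linear map trained by least-squares regression onto class attribute vectors, with nearest-attribute classification at test time. It does not reproduce the extreme-value formulation itself, and all data and dimensions are synthetic placeholders.

```python
import torch
import torch.nn as nn

n, d_vis, d_sem, n_cls = 256, 512, 85, 20
vis = torch.randn(n, d_vis)                     # synthetic visual features
attrs = torch.randn(n_cls, d_sem)               # synthetic class attribute vectors
labels = torch.randint(0, n_cls, (n,))

mapper = nn.Linear(d_vis, d_sem)                # linear visual -> semantic mapping
opt = torch.optim.Adam(mapper.parameters(), lr=1e-3)
for _ in range(100):
    loss = ((mapper(vis) - attrs[labels]) ** 2).mean()  # regress onto the class embedding
    opt.zero_grad(); loss.backward(); opt.step()

# Classify by the nearest class attribute vector in the semantic space.
pred_cls = torch.cdist(mapper(vis), attrs).argmin(dim=1)
```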