no code implementations • 10 Apr 2022 • Shunyu Zhang, Xiaoze Jiang, Zequn Yang, Tao Wan, Zengchang Qin
In our model, external knowledge is represented as both sentence-level facts and graph-level facts, to suit the combined context of dialog history and image.
no code implementations • 1 Nov 2018 • Daouda Sow, Zengchang Qin, Mouhamed Niasse, Tao Wan
Recent advances of deep learning in both computer vision (CV) and natural language processing (NLP) provide a new way of understanding semantics, enabling more challenging tasks such as automatic description generation from natural images.
no code implementations • 1 Nov 2018 • Shuangting Liu, Jia-Qi Zhang, Yuxin Chen, Yifan Liu, Zengchang Qin, Tao Wan
Semantic segmentation is one of the basic topics in computer vision; it aims to assign a semantic label to every pixel of an image.
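The per-pixel labeling that defines the task can be sketched as follows: a model produces a class-score map, and each pixel takes the highest-scoring class. The array shapes and random scores below are illustrative stand-ins for a real network's output, not anything from the paper.

```python
import numpy as np

# Hypothetical per-pixel class scores of shape (height, width, classes);
# a real segmentation network would produce these from an input image.
H, W, C = 4, 4, 3
rng = np.random.default_rng(0)
scores = rng.standard_normal((H, W, C))

# Semantic segmentation assigns each pixel the label with the top score.
labels = scores.argmax(axis=-1)  # (H, W) map of semantic labels

assert labels.shape == (H, W)
assert labels.max() < C
```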
1 code implementation • 1 Dec 2017 • Heng Wang, Zengchang Qin, Tao Wan
We propose the VGAN model, in which the generative model is composed of a recurrent neural network and a variational autoencoder (VAE).
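A minimal numpy sketch of a generator that pairs a recurrent cell with a VAE-style latent variable, in the spirit of the VGAN generator described above. The layer sizes, the single tanh RNN cell, and conditioning on the latent code at every step are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, hidden_dim, steps = 8, 16, 5

# VAE reparameterization trick: z = mu + sigma * eps
mu = np.zeros(latent_dim)
log_var = np.zeros(latent_dim)
z = mu + np.exp(0.5 * log_var) * rng.standard_normal(latent_dim)

# Simple tanh RNN unrolled for a few steps, conditioned on z each step
# (hypothetical weights; a trained model would learn these).
W_h = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
W_z = rng.standard_normal((hidden_dim, latent_dim)) * 0.1
h = np.zeros(hidden_dim)
outputs = []
for _ in range(steps):
    h = np.tanh(W_h @ h + W_z @ z)
    outputs.append(h.copy())

assert len(outputs) == steps
assert outputs[0].shape == (hidden_dim,)
```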
no code implementations • 9 May 2017 • Liang Li, Pengyu Li, Yifan Liu, Tao Wan, Zengchang Qin
Under our learning policy, the Seq2Seq model can learn mappings gradually in the presence of noise.
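One way such a gradual policy can work is a curriculum in which the fraction of noisy training pairs grows over the course of training. The linear schedule and the adjacent-token-swap noise below are illustrative assumptions for the sketch, not the paper's actual policy.

```python
import random

def noise_ratio(epoch, total_epochs):
    """Fraction of training pairs corrupted at this epoch (linear ramp)."""
    return min(1.0, epoch / total_epochs)

def corrupt(seq, rng):
    """Swap two adjacent tokens as a simple, illustrative form of noise."""
    if len(seq) < 2:
        return seq
    i = rng.randrange(len(seq) - 1)
    return seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]

rng = random.Random(0)
pairs = [(list("abcd"), list("wxyz"))] * 10  # hypothetical (source, target) pairs
epoch, total = 5, 10
batch = [(corrupt(src, rng) if rng.random() < noise_ratio(epoch, total) else src, tgt)
         for src, tgt in pairs]
```

Early epochs expose the Seq2Seq model mostly to clean pairs; later epochs mix in progressively more corrupted inputs.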
no code implementations • 8 May 2017 • Qiangeng Xu, Zengchang Qin, Tao Wan
In this paper, we explore a generative model for generating unseen images with desired features.