no code implementations • Findings (EMNLP) 2021 • Yekun Chai, Haidong Zhang, Qiyue Yin, Junge Zhang
Generative Adversarial Networks (GANs) have achieved great success in image synthesis, but have proven difficult to apply to natural language generation.
no code implementations • 21 Sep 2023 • Zhourui Guo, Meng Yao, Yang Yu, Qiyue Yin
We assume that the interaction can be modeled as a sequence of templated questions and answers, and that there is a large corpus of previous interactions available.
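A minimal sketch of how a corpus of previous templated interactions might be exploited: retrieve the most similar past question and reuse its answer. TF-IDF retrieval and the function name are illustrative assumptions, not the paper's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def answer_from_corpus(question: str, past_questions: list, past_answers: list) -> str:
    """Reuse the answer of the most similar previously seen templated question
    (an illustrative retrieval baseline, not the paper's approach)."""
    vec = TfidfVectorizer().fit(past_questions + [question])
    sims = cosine_similarity(vec.transform([question]),
                             vec.transform(past_questions))[0]
    return past_answers[int(sims.argmax())]
```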
no code implementations • 23 Feb 2023 • Yekun Chai, Qiyue Yin, Junge Zhang
In this work, we (1) first empirically show that the mixture-of-experts approach is able to enhance the representation capacity of the generator for language GANs and (2) harness the Feature Statistics Alignment (FSA) paradigm to render fine-grained learning signals to advance the generator training.
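A minimal sketch of the feature-statistics-alignment idea, read as matching the per-dimension mean and standard deviation of discriminator features between real and generated text; the exact loss form is an assumption, not the paper's implementation.

```python
import torch

def feature_statistics_alignment_loss(real_feats: torch.Tensor,
                                      fake_feats: torch.Tensor,
                                      eps: float = 1e-6) -> torch.Tensor:
    """Align first- and second-order feature statistics of generated samples
    with those of real samples (a hedged reading of the FSA objective).
    real_feats, fake_feats: (batch, feature_dim) discriminator features."""
    mu_r, mu_f = real_feats.mean(dim=0), fake_feats.mean(dim=0)
    std_r = real_feats.std(dim=0) + eps
    std_f = fake_feats.std(dim=0) + eps
    # Squared distance between per-dimension means and standard deviations
    return (mu_r - mu_f).pow(2).mean() + (std_r - std_f).pow(2).mean()
```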
no code implementations • 1 Dec 2022 • Qiyue Yin, Tongtong Yu, Shengqi Shen, Jun Yang, Meijing Zhao, Kaiqi Huang, Bin Liang, Liang Wang
With the breakthrough of AlphaGo, deep reinforcement learning has become a recognized technique for solving sequential decision-making problems.
no code implementations • 2 Jun 2022 • Hao Chen, Guangkai Yang, Junge Zhang, Qiyue Yin, Kaiqi Huang
Specifically, these methods do not explicitly utilize the relationship between agents and cannot adapt to different sizes of inputs.
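A hedged sketch of one common way to address both limitations: self-attention over per-agent features explicitly models pairwise agent relationships and accepts any number of agents. The dimensions and mean pooling are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class AgentRelationEncoder(nn.Module):
    """Self-attention over per-agent features: relation-aware and size-agnostic."""
    def __init__(self, feat_dim: int = 32, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, agent_feats: torch.Tensor) -> torch.Tensor:
        # agent_feats: (batch, num_agents, feat_dim); num_agents may vary per call
        related, _ = self.attn(agent_feats, agent_feats, agent_feats)
        # Mean-pool over agents to obtain a fixed-size joint representation
        return related.mean(dim=1)
```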
no code implementations • 15 Nov 2021 • Qiyue Yin, Jun Yang, Kaiqi Huang, Meijing Zhao, Wancheng Ni, Bin Liang, Yan Huang, Shu Wu, Liang Wang
Through this survey, we 1) compare the main difficulties among different kinds of games and the corresponding techniques utilized for achieving professional human-level AIs; 2) summarize the mainstream frameworks and techniques that can be relied on for developing AIs for complex human-computer gaming; 3) raise the challenges or drawbacks of current techniques in the successful AIs; and 4) point out future trends in human-computer gaming AIs.
no code implementations • 9 Apr 2021 • Wenzhen Huang, Qiyue Yin, Junge Zhang, Kaiqi Huang
More specifically, we evaluate the effect of an imaginary transition by calculating the change of the loss computed on the real samples when we use the transition to train the action-value and policy functions.
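A hedged sketch of that evaluation: take one gradient step on a throwaway copy of the agent using the imaginary transition and measure how much the loss on real transitions changes. The `agent.critic_loss` / `agent.critic` interface is hypothetical, not the paper's API.

```python
import copy
import torch

def imaginary_transition_score(agent, imaginary_batch, real_batch, lr=1e-3):
    """Score an imaginary (model-generated) transition by how much a single
    update on it reduces the loss measured on real transitions."""
    loss_before = agent.critic_loss(real_batch).item()

    # One gradient step on a copy of the agent, using only the imaginary data
    probe = copy.deepcopy(agent)
    opt = torch.optim.SGD(probe.critic.parameters(), lr=lr)
    opt.zero_grad()
    probe.critic_loss(imaginary_batch).backward()
    opt.step()

    loss_after = probe.critic_loss(real_batch).item()
    # Positive score: the imaginary transition helps on real data
    return loss_before - loss_after
```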
no code implementations • 1 Jan 2021 • Yekun Chai, Qiyue Yin, Junge Zhang
Generative Adversarial Networks (GANs) face great challenges, such as mode dropping and unstable training, when synthesizing sequences of discrete elements.
no code implementations • 24 Oct 2020 • Xiyao Wang, Junge Zhang, Wenzhen Huang, Qiyue Yin
We give an upper bound on the trajectory reward estimation error and point out that increasing the agent's exploration ability is the key to reducing this error, thereby alleviating the dynamics bottleneck dilemma.
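A hedged sketch of the quantity being bounded: the gap between the discounted return a policy obtains under the learned dynamics model and under the real environment. The classic Gym reset()/step() interface is an assumption for illustration, not the paper's setup.

```python
import numpy as np

def trajectory_reward_estimation_error(policy, real_env, model_env,
                                       horizon=100, gamma=0.99, episodes=10):
    """Empirical gap between discounted returns under the learned model and
    the real environment for the same policy (an illustration of the error
    term, not the paper's theoretical bound)."""
    def avg_return(env):
        returns = []
        for _ in range(episodes):
            obs, total, discount = env.reset(), 0.0, 1.0
            for _ in range(horizon):
                obs, reward, done, _ = env.step(policy(obs))
                total += discount * reward
                discount *= gamma
                if done:
                    break
            returns.append(total)
        return float(np.mean(returns))

    return abs(avg_return(real_env) - avg_return(model_env))
```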
1 code implementation • 3 Feb 2020 • Peng Xu, Zeyu Song, Qiyue Yin, Yi-Zhe Song, Liang Wang
In this paper, we tackle, for the first time, the problem of self-supervised representation learning for free-hand sketches.
2 code implementations • 8 Jan 2020 • Peng Xu, Timothy M. Hospedales, Qiyue Yin, Yi-Zhe Song, Tao Xiang, Liang Wang
Free-hand sketches are highly illustrative, and have been widely used by humans to depict objects or stories from ancient times to the present.
no code implementations • 28 May 2017 • Peng Xu, Qiyue Yin, Yongye Huang, Yi-Zhe Song, Zhanyu Ma, Liang Wang, Tao Xiang, W. Bastiaan Kleijn, Jun Guo
Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketch and photo.
Ranked #5 on Sketch-Based Image Retrieval on Chairs
no code implementations • 21 Jul 2016 • Kaiye Wang, Qiyue Yin, Wei Wang, Shu Wu, Liang Wang
To speed up cross-modal retrieval, a number of binary representation learning methods have been proposed to map different modalities of data into a common Hamming space.
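A minimal sketch of that idea: project each modality with its own (normally learned) projection, binarize by sign into a shared code space, and rank by Hamming distance. The random projections below stand in for learned ones and are purely illustrative.

```python
import numpy as np

def to_binary_codes(features: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Map real-valued features into a common Hamming space via a linear
    projection followed by sign binarization (generic cross-modal hashing sketch)."""
    return (features @ projection > 0).astype(np.uint8)  # (n, n_bits) in {0, 1}

def hamming_distance(query_code: np.ndarray, db_codes: np.ndarray) -> np.ndarray:
    # Number of differing bits between the query and every database code
    return (query_code[None, :] != db_codes).sum(axis=1)

# Usage sketch: hash two modalities with modality-specific projections into
# the same code space, then rank database items by Hamming distance.
rng = np.random.default_rng(0)
img_feats, txt_feats = rng.normal(size=(100, 64)), rng.normal(size=(1, 64))
P_img, P_txt = rng.normal(size=(64, 32)), rng.normal(size=(64, 32))
ranking = np.argsort(hamming_distance(to_binary_codes(txt_feats, P_txt)[0],
                                      to_binary_codes(img_feats, P_img)))
```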
no code implementations • 28 Nov 2014 • Ran He, Man Zhang, Liang Wang, Ye Ji, Qiyue Yin
For unsupervised learning, we propose a cross-modal subspace clustering method to learn a common structure for different modalities.
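A hedged sketch of a cross-modal subspace clustering pipeline: learn one self-expressive coefficient matrix C shared by all modalities (minimizing the sum over modalities of ||X_m - X_m C||^2 plus a ridge penalty), then spectrally cluster the induced affinity. The ridge penalty and closed-form solve are simplifying assumptions, not the paper's exact objective.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cross_modal_subspace_clustering(modalities, n_clusters, lam=1e-2):
    """Cluster samples using one self-expressive coefficient matrix C shared
    across modalities; each X_m has shape (d_m, n_samples)."""
    n = modalities[0].shape[1]
    A = sum(X.T @ X for X in modalities)            # (n, n) pooled Gram structure
    C = np.linalg.solve(A + lam * np.eye(n), A)     # closed-form shared coefficients
    affinity = np.abs(C) + np.abs(C).T              # symmetric, non-negative affinity
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(affinity)
```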