Search Results for author: Xiaohui Yan

Found 7 papers, 4 papers with code

PReGAN: Answer Oriented Passage Ranking with Weakly Supervised GAN

no code implementations · 5 Jul 2022 · Pan Du, Jian-Yun Nie, Yutao Zhu, Hao Jiang, Lixin Zou, Xiaohui Yan

Beyond topical relevance, passage ranking for open-domain factoid question answering also requires a passage to contain an answer (answerability).

Passage Ranking · Question Answering
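
As a rough illustration of the answerability idea in the PReGAN entry above, the sketch below ranks passages by mixing a topical-relevance score with an answerability score. The toy scorers and the mixing weight `alpha` are hypothetical placeholders for illustration, not the paper's weakly supervised GAN.

```python
# Hypothetical sketch: rank passages by combining topical relevance with
# answerability, the requirement described in the abstract above.
def rank_passages(question, passages, relevance_fn, answerability_fn, alpha=0.5):
    """Return passages sorted by a combined relevance/answerability score."""
    scored = []
    for p in passages:
        score = (alpha * relevance_fn(question, p)
                 + (1 - alpha) * answerability_fn(question, p))
        scored.append((score, p))
    return [p for _, p in sorted(scored, key=lambda x: x[0], reverse=True)]

# Toy scorers: word overlap as "relevance", presence of a digit as a crude
# "answerability" signal for a factoid question (purely illustrative).
relevance = lambda q, p: len(set(q.lower().split()) & set(p.lower().split()))
answerability = lambda q, p: 1.0 if any(c.isdigit() for c in p) else 0.0

passages = ["Paris is a lovely city to visit.",
            "About 2.1 million people live in Paris."]
print(rank_passages("How many people live in Paris?", passages, relevance, answerability))
```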

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient

no code implementations · 25 Jul 2020 · Haonan Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiaohui Yan, Ji-Rong Wen

Deep Q-learning algorithms often suffer from poor gradient estimates with excessive variance, resulting in unstable training and poor sample efficiency.

Q-Learning · reinforcement-learning +1
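
The variance-reduction technique named in the title above is a stochastic recursive gradient (SARAH-style) estimator. Below is a minimal sketch of that estimator on a toy least-squares problem; it is not the paper's deep Q-learning algorithm, and the step size, epoch count, and problem are illustrative assumptions.

```python
# Minimal sketch of a stochastic recursive gradient (SARAH-style) estimator,
# shown on a toy least-squares objective rather than a deep Q-network.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = rng.normal(size=100)

def grad_i(w, i):
    """Per-sample gradient of 0.5 * (A[i] @ w - b[i])**2."""
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(5)
lr = 0.01
for epoch in range(20):
    # A full gradient anchors the recursion at the start of each epoch.
    v = np.mean([grad_i(w, i) for i in range(len(b))], axis=0)
    w_prev = w.copy()
    w = w - lr * v
    for _ in range(len(b)):
        i = rng.integers(len(b))
        # Recursive update: v_t = g_i(w_t) - g_i(w_{t-1}) + v_{t-1}
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev = w.copy()
        w = w - lr * v
print("loss:", 0.5 * np.mean((A @ w - b) ** 2))
```

The full-gradient anchor at each epoch is what keeps the recursive estimator's variance from accumulating, which is the property the paper exploits for Q-learning.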

Convolutional Hierarchical Attention Network for Query-Focused Video Summarization

2 code implementations · 31 Jan 2020 · Shuwen Xiao, Zhou Zhao, Zijian Zhang, Xiaohui Yan, Min Yang

This paper addresses the task of query-focused video summarization, which takes a user's query and a long video as inputs and aims to generate a query-focused video summary.

Video Summarization
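
A hedged sketch of the core mechanism implied by the abstract above: scoring video segments by attention against a query embedding and keeping the highest-scoring ones as the summary. The dot-product attention, feature dimensions, and `budget` parameter are assumptions for illustration, not the paper's convolutional hierarchical attention network.

```python
# Illustrative query-conditioned frame selection for video summarization.
import torch

def query_focused_summary(frame_feats, query_feat, budget=5):
    """frame_feats: (num_frames, d); query_feat: (d,). Returns frame indices."""
    scores = frame_feats @ query_feat                 # query-conditioned scores
    weights = torch.softmax(scores, dim=0)            # attention over frames
    return torch.topk(weights, k=budget).indices      # frames forming the summary

frames = torch.randn(200, 128)   # stand-in per-frame features (e.g., CNN outputs)
query = torch.randn(128)         # stand-in encoded user query
print(query_focused_summary(frames, query))
```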

Generative Question Refinement with Deep Reinforcement Learning in Retrieval-based QA System

1 code implementation · 13 Aug 2019 · Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu

To improve the quality and retrieval performance of the generated questions, we make two major improvements: 1) to better encode the semantics of ill-formed questions, we enrich the question representation with character embeddings and recently proposed contextual word embeddings such as BERT, in addition to traditional context-free word embeddings; 2) to make the model capable of generating the desired questions, we train it with deep reinforcement learning techniques that treat the wording quality of the generated question as an immediate reward and the correlation between the generated question and the answer as a time-delayed long-term reward.

Question Answering · reinforcement-learning +3
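
The reward mixing described above, an immediate wording reward plus a time-delayed QA reward, can be pictured with a REINFORCE-style loss. The sketch below is a guess at that structure; the reward values, discount, mixing weight, and stand-in generator logits are all hypothetical and not the paper's exact formulation.

```python
# Hypothetical REINFORCE-style loss mixing a per-step wording reward with a
# delayed reward received only at the end of the generated question.
import torch

def reinforce_loss(log_probs, wording_rewards, qa_reward, gamma=0.99, mix=0.5):
    """log_probs, wording_rewards: per-token tensors; qa_reward: scalar."""
    T = log_probs.shape[0]
    returns = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        # The delayed QA reward arrives only at the final generation step.
        r = mix * wording_rewards[t] + (1 - mix) * (qa_reward if t == T - 1 else 0.0)
        running = r + gamma * running
        returns[t] = running
    return -(log_probs * returns).sum()

logits = torch.randn(8, 1000, requires_grad=True)          # stand-in generator outputs
log_probs = torch.log_softmax(logits, dim=-1).max(dim=-1).values
loss = reinforce_loss(log_probs, wording_rewards=torch.rand(8), qa_reward=1.0)
loss.backward()   # gradients flow back into the (stand-in) generator logits
```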

Abstract Meaning Representation for Paraphrase Detection

no code implementations · NAACL 2018 · Fuad Issa, Marco Damonte, Shay B. Cohen, Xiaohui Yan, Yi Chang

Abstract Meaning Representation (AMR) parsing aims to abstract away from the syntactic realization of a sentence and denote only its meaning in a canonical form.

AMR Parsing · Sentence
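
As a loose illustration of why abstracting away syntax can help paraphrase detection, the sketch below compares two AMR graphs by overlap of their relation triples. The triple F1 and the 0.5 threshold are illustrative assumptions, not the paper's actual model.

```python
# Toy comparison of AMR graphs as sets of (source, relation, target) triples.
def amr_overlap_f1(triples_a, triples_b):
    """Simple F1 over the relation triples of two AMR graphs."""
    matched = len(set(triples_a) & set(triples_b))
    if matched == 0:
        return 0.0
    precision = matched / len(triples_b)
    recall = matched / len(triples_a)
    return 2 * precision * recall / (precision + recall)

# "The cat chased the dog" and the passive "The dog was chased by the cat"
# share the same abstract meaning, so their AMR-like triples coincide even
# though the surface syntax differs.
a = {("chase-01", ":ARG0", "cat"), ("chase-01", ":ARG1", "dog")}
b = {("chase-01", ":ARG0", "cat"), ("chase-01", ":ARG1", "dog")}
print("paraphrase" if amr_overlap_f1(a, b) > 0.5 else "not paraphrase")
```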
