no code implementations • 14 Feb 2021 • Shaowei Yao, Jiwei Tan, Xi Chen, Keping Yang, Rong Xiao, Hongbo Deng, Xiaojun Wan
We propose a novel way to handle samples with different levels of relevance confidence, and introduce a new training objective to learn a robust relevance model with a desirable score distribution.
no code implementations • 7 Mar 2020 • Xiang Li, Chao Wang, Jiwei Tan, Xiaoyi Zeng, Dan Ou, Bo Zheng
Finally, we obtain the multimodal item representations by combining both modality-specific and modality-invariant representations.
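The abstract does not specify how the two kinds of representations are combined; a minimal sketch, assuming simple concatenation of per-modality specific embeddings with a shared modality-invariant embedding (the dimensions and the `combine_representations` helper are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

def combine_representations(specific_reps, invariant_rep):
    """Concatenate per-modality specific embeddings with the shared
    modality-invariant embedding to form one item representation.
    (Hypothetical combination strategy; the paper may use another.)"""
    return np.concatenate(specific_reps + [invariant_rep])

# Example: image- and text-specific embeddings plus one shared embedding.
img_specific = np.random.rand(8)
txt_specific = np.random.rand(8)
invariant = np.random.rand(8)

item_rep = combine_representations([img_specific, txt_specific], invariant)
print(item_rep.shape)  # (24,)
```

Other common choices for this step include summation or a learned gating/fusion layer; concatenation is shown only because it is the simplest to sketch.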
no code implementations • WS 2018 • Jianmin Zhang, Jiwei Tan, Xiaojun Wan
In this paper, we investigate neural abstractive methods for MDS by adapting a state-of-the-art neural abstractive summarization model for SDS.
no code implementations • ACL 2018 • Zhiwei Yu, Jiwei Tan, Xiaojun Wan
Since sequence-to-sequence models provide an effective technique for text generation, it is promising to investigate these models on the pun generation task.
no code implementations • 24 Apr 2018 • Jianmin Zhang, Jiwei Tan, Xiaojun Wan
In this paper, we investigate neural abstractive methods for MDS by adapting a state-of-the-art neural abstractive summarization model for SDS.
no code implementations • ACL 2017 • Jiwei Tan, Xiaojun Wan, Jianguo Xiao
Abstractive summarization is the ultimate goal of document summarization research, but it has previously been less investigated due to the immaturity of text generation techniques.
Ranked #12 on Text Summarization on CNN / Daily Mail (Anonymized)