no code implementations • 23 May 2023 • Harman Singh, Pengchuan Zhang, Qifan Wang, Mengjiao Wang, Wenhan Xiong, Jingfei Du, Yu Chen
Along with this, we propose novel negative mining techniques in the scene graph space for improving attribute binding and relation understanding.
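To make the idea concrete, here is a toy sketch of attribute-swap negative mining in scene-graph space; the function names and caption rendering are illustrative inventions, not the paper's actual procedure:

```python
# Illustrative sketch of scene-graph-space negative mining (not the paper's
# exact algorithm): swapping the attributes of two objects produces a hard
# negative caption that differs only in attribute binding.
import random

def attribute_swap_negative(scene_graph):
    """scene_graph: list of (object, attribute) pairs."""
    if len(scene_graph) < 2:
        return None
    (o1, a1), (o2, a2) = random.sample(scene_graph, 2)
    return [(o1, a2) if (o, a) == (o1, a1) else
            (o2, a1) if (o, a) == (o2, a2) else (o, a)
            for (o, a) in scene_graph]

def render(scene_graph):
    return " and ".join(f"a {attr} {obj}" for obj, attr in scene_graph)

graph = [("dog", "brown"), ("ball", "red")]
print(render(graph))                           # a brown dog and a red ball
print(render(attribute_swap_negative(graph)))  # a red dog and a brown ball
```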
no code implementations • 23 May 2023 • Tsu-Jui Fu, Wenhan Xiong, Yixin Nie, Jingyu Liu, Barlas Oğuz, William Yang Wang
To address this T3H task, we propose Compositional Cross-modal Human (CCH).
Ranked #1 on Text-to-3D-Human Generation on SHHQ
no code implementations • 21 May 2023 • Yassir Fathullah, Chunyang Wu, Yuan Shangguan, Junteng Jia, Wenhan Xiong, Jay Mahadeokar, Chunxi Liu, Yangyang Shi, Ozlem Kalinli, Mike Seltzer, Mark J. F. Gales
State space models (SSMs) have recently shown promising results on small-scale sequence and language modelling tasks, rivalling and outperforming many attention-based approaches.
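For readers unfamiliar with SSMs, the core recurrence is easy to state; the toy numpy scan below is a generic illustration, not the specific architecture evaluated in the paper:

```python
# Toy discrete linear state space model (generic illustration):
#   x[t+1] = A x[t] + B u[t],   y[t] = C x[t]
import numpy as np

def ssm_scan(A, B, C, u):
    """Run the SSM over an input sequence u of shape (T, input_dim)."""
    x = np.zeros(A.shape[0])
    ys = []
    for t in range(u.shape[0]):
        ys.append(C @ x)
        x = A @ x + B @ u[t]
    return np.stack(ys)

rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)                 # stable state transition
B = rng.normal(size=(4, 2))
C = rng.normal(size=(1, 4))
y = ssm_scan(A, B, C, rng.normal(size=(16, 2)))
print(y.shape)  # (16, 1)
```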
no code implementations • 4 May 2023 • Xilun Chen, Lili Yu, Wenhan Xiong, Barlas Oğuz, Yashar Mehdad, Wen-tau Yih
We propose a new two-stage pre-training framework for video-to-text generation tasks such as video captioning and video question answering: A generative encoder-decoder model is first jointly pre-trained on massive image-text data to learn fundamental vision-language concepts, and then adapted to video data in an intermediate video-text pre-training stage to learn video-specific skills such as spatio-temporal reasoning.
no code implementations • 9 Mar 2023 • Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, Barlas Oğuz
We take another step along this direction, combining these developments in a two-step pipeline consisting of 1) a triplane VAE which can learn latent representations of textured meshes and 2) a conditional diffusion model which generates the triplane features.
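A shape-level sketch of such a two-step pipeline is given below; the module sizes and layer choices are hypothetical placeholders for the much larger real models:

```python
# Shape-level sketch of the two-step pipeline. Step 1: a VAE compresses a
# triplane (3 axis-aligned feature planes) into a latent; step 2: a
# conditional diffusion model denoises latents given a conditioning embedding.
import torch
import torch.nn as nn

class TriplaneVAE(nn.Module):
    def __init__(self, feat=32, latent=8):
        super().__init__()
        self.enc = nn.Conv2d(3 * feat, 2 * latent, 3, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(latent, 3 * feat, 4, stride=2, padding=1)

    def forward(self, planes):              # planes: (B, 3*feat, H, W)
        mu, logvar = self.enc(planes).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

class LatentDenoiser(nn.Module):
    """Predicts noise on the triplane latent, conditioned on an embedding."""
    def __init__(self, latent=8, cond=16):
        super().__init__()
        self.cond_proj = nn.Linear(cond, latent)
        self.net = nn.Conv2d(latent, latent, 3, padding=1)

    def forward(self, z_noisy, cond_emb):
        z = z_noisy + self.cond_proj(cond_emb)[:, :, None, None]
        return self.net(z)

vae = TriplaneVAE()
denoiser = LatentDenoiser()
planes = torch.randn(2, 3 * 32, 64, 64)     # batch of triplane features
recon, mu, logvar = vae(planes)
noise_pred = denoiser(torch.randn(2, 8, 32, 32), torch.randn(2, 16))
print(recon.shape, noise_pred.shape)
```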
no code implementations • 7 Mar 2023 • Jingyu Liu, Wenhan Xiong, Ian Jones, Yixin Nie, Anchit Gupta, Barlas Oğuz
Whether heuristic or learned, these methods ignore instance-level attributes of objects such as color and style, and as a result may produce visually less coherent scenes.
no code implementations • 25 Oct 2022 • Gyuwan Kim, Jinhyuk Lee, Barlas Oguz, Wenhan Xiong, Yizhe Zhang, Yashar Mehdad, William Yang Wang
Building dense retrievers requires a series of standard procedures, including training and validating neural models and creating indexes for efficient search.
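As an example of the indexing step, a minimal exact-search index can be built with FAISS; this is one common tooling choice rather than the paper's specific pipeline, and the embeddings here are random stand-ins for encoder outputs:

```python
# Minimal dense-retrieval indexing and search with FAISS (one common choice).
import numpy as np
import faiss

dim, n_passages = 128, 10_000
passage_embs = np.random.rand(n_passages, dim).astype("float32")
faiss.normalize_L2(passage_embs)            # cosine similarity via inner product

index = faiss.IndexFlatIP(dim)              # exact inner-product search
index.add(passage_embs)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)        # top-5 passage ids per query
print(ids[0], scores[0])
```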
1 code implementation • 21 Sep 2022 • Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih
We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs.
Ranked #1 on Text Summarization on QMSum
2 code implementations • 10 Jan 2022 • Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy
NLP benchmarks have largely focused on short texts, such as sentences and paragraphs, even though long texts comprise a considerable amount of natural language in the wild.
Ranked #8 on Long-range modeling on SCROLLS
no code implementations • NAACL 2022 • Patrick Lewis, Barlas Oğuz, Wenhan Xiong, Fabio Petroni, Wen-tau Yih, Sebastian Riedel
DrBoost is trained in stages: each component model is learned sequentially and specialized by focusing only on retrieval mistakes made by the current ensemble.
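The boosting loop can be sketched as follows; `train_component` and `retrieves_correctly` are hypothetical stand-ins for training a small dense retriever and checking the gold passage, and the paper's actual training objective differs:

```python
# Schematic of the boosting loop (simplified; see the paper for details).
def boost_retrievers(train_examples, n_rounds, train_component, retrieves_correctly):
    ensemble = []
    hard_examples = list(train_examples)
    for _ in range(n_rounds):
        component = train_component(hard_examples)  # specialize on current mistakes
        ensemble.append(component)
        # Next round: keep only the examples the ensemble still gets wrong.
        hard_examples = [ex for ex in train_examples
                         if not retrieves_correctly(ensemble, ex)]
        if not hard_examples:
            break
    return ensemble  # final embedding = concatenation of component embeddings
```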
1 code implementation • NAACL 2022 • Wenhan Xiong, Barlas Oğuz, Anchit Gupta, Xilun Chen, Diana Liskovich, Omer Levy, Wen-tau Yih, Yashar Mehdad
Many NLP tasks require processing long contexts beyond the length limit of pretrained models.
no code implementations • 10 Dec 2021 • Tianyi Liu, Zuxuan Wu, Wenhan Xiong, Jingjing Chen, Yu-Gang Jiang
Our experiments show that there is a trade-off between understanding tasks and generation tasks when the same model is used for both, and that a feasible way to improve both is to use more data.
1 code implementation • EMNLP (ACL) 2021 • Sharon Levy, Kevin Mo, Wenhan Xiong, William Yang Wang
In this work, we present such a system for the emergent domain of COVID-19.
1 code implementation • ACL 2021 • Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang
However, for each new domain that requires fact verification, creating a dataset by manually writing claims and linking them to their supporting evidence is expensive.
1 code implementation • NAACL 2021 • Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang
Obtaining training data for multi-hop question answering (QA) is time-consuming and resource-intensive.
1 code implementation • ICLR 2021 • Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, Barlas Oğuz
We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER.
Ranked #15 on Question Answering on HotpotQA
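The iterative query-reformulation idea behind multi-hop dense retrieval can be sketched as below; the bag-of-words `encode` is a toy stand-in for the trained bi-encoder:

```python
# Toy multi-hop retrieval: after each hop, the query is reformulated by
# appending the retrieved passage, then encoded and matched again.
import numpy as np

VOCAB = {}

def encode(text, dim=64):
    """Toy bag-of-words encoder (stand-in for a trained dense encoder)."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        idx = VOCAB.setdefault(tok.strip(".,?!"), len(VOCAB))
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def multihop_retrieve(question, passages, hops=2):
    query, chain = question, []
    for _ in range(hops):
        candidates = [p for p in passages if p not in chain]
        scores = [encode(query) @ encode(p) for p in candidates]
        chain.append(candidates[int(np.argmax(scores))])
        query = question + " " + " ".join(chain)  # append retrieved evidence
    return chain

passages = [
    "Alice was born in Paris.",
    "Paris is the capital of France.",
    "Bob lives near Rome.",
]
print(multihop_retrieve("Where was Alice born?", passages))
# ['Alice was born in Paris.', 'Paris is the capital of France.']
```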
1 code implementation • EACL 2021 • Wenhan Xiong, Hong Wang, William Yang Wang
In this work, we propose a simple and resource-efficient method to pretrain the paragraph encoder.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, William Wang
3) a hybrid model that combines heterogeneous information to find the answer.
Ranked #4 on Question Answering on HybridQA
no code implementations • 6 Apr 2020 • Yufei Feng, Mo Yu, Wenhan Xiong, Xiaoxiao Guo, Jun-Jie Huang, Shiyu Chang, Murray Campbell, Michael Greenspan, Xiaodan Zhu
We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs.
no code implementations • ICLR 2020 • Wenhan Xiong, Jingfei Du, William Yang Wang, Veselin Stoyanov
Models trained with our new objective yield significant improvements on the fact completion task.
1 code implementation • WS 2019 • Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong, Tian Gao
General Question Answering (QA) systems over texts require multi-hop reasoning capability, i.e., the ability to reason with information collected from multiple passages to derive the answer.
no code implementations • WS 2019 • Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Hong Wang, Shiyu Chang, Murray Campbell, William Yang Wang
To resolve this issue, we introduce a new sub-problem of open-domain multi-hop QA, which aims to recognize the bridge (i.e., the anchor that links to the answer passage) from the context of a set of start passages with a reading comprehension model.
1 code implementation • 13 Sep 2019 • Mengdi Zhu, Zheye Deng, Wenhan Xiong, Mo Yu, Ming Zhang, William Yang Wang
In this work, to address the low precision and recall problems, we first utilize DBpedia as the source of distant supervision to annotate abstracts from Wikipedia, and we design a neural correction model, trained with the human-annotated NER dataset DocRED, to correct the false entity labels.
no code implementations • IJCNLP 2019 • Jiawei Wu, Wenhan Xiong, William Yang Wang
Many tasks in natural language processing can be viewed as multi-label classification problems.
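For reference, a minimal multi-label setup uses independent sigmoid outputs with binary cross-entropy; this generic baseline is not the paper's specific method:

```python
# Multi-label classification in a nutshell: each label gets an independent
# sigmoid output trained with binary cross-entropy.
import torch
import torch.nn as nn

n_features, n_labels = 100, 5
model = nn.Linear(n_features, n_labels)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(8, n_features)
y = torch.randint(0, 2, (8, n_labels)).float()  # each example can have several labels

logits = model(x)
loss = loss_fn(logits, y)
loss.backward()
preds = (torch.sigmoid(logits) > 0.5).int()     # threshold each label independently
print(loss.item(), preds[0].tolist())
```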
no code implementations • 13 Aug 2019 • Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang
Reasoning over learned knowledge is an innate ability for humans, and humans can easily master new reasoning rules with only a few demonstrations.
no code implementations • ACL 2019 • Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
With social media becoming increasingly popular as a platform where news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge.
2 code implementations • ACL 2019 • Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang
Existing models for extractive summarization are usually trained from scratch with a cross-entropy loss, which does not explicitly capture the global context at the document level.
2 code implementations • ACL 2019 • Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
We propose a new end-to-end question answering model, which learns to aggregate answer evidence from an incomplete knowledge base (KB) and a set of retrieved text snippets.
2 code implementations • NAACL 2019 • Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang
We formulate such a challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods without catastrophically forgetting knowledge learned from previous tasks.
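One standard memory-efficient scheme in this setting is episodic memory replay; the sketch below shows the general idea (a reservoir-sampled memory mixed into each new task's training), not the paper's exact method:

```python
# Sketch of episodic memory replay for lifelong learning: a small fixed-size
# memory of past examples is replayed alongside each new task's data.
import random

def lifelong_train(tasks, train_step, memory_size=50, replay_per_task=10):
    memory, seen = [], 0
    for task_data in tasks:
        replay = random.sample(memory, min(replay_per_task, len(memory)))
        for example in list(task_data) + replay:
            train_step(example)
        for example in task_data:            # reservoir sampling keeps memory O(1)
            seen += 1
            if len(memory) < memory_size:
                memory.append(example)
            elif random.random() < memory_size / seen:
                memory[random.randrange(memory_size)] = example
    return memory
```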
1 code implementation • NAACL 2019 • Wenhan Xiong, Jiawei Wu, Deren Lei, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance.
Ranked #3 on Entity Typing on Ontonotes v5 (English)
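One simple way such a hierarchy can be exploited is by enforcing label consistency between fine-grained types and their ancestors, as in this illustration with hypothetical types (not the paper's exact model):

```python
# Closing predicted entity types under the KB type hierarchy: whenever a
# fine-grained type is predicted, all of its ancestors are too.
PARENT = {                       # child -> parent in the type hierarchy
    "/person/artist": "/person",
    "/person/artist/singer": "/person/artist",
    "/organization/company": "/organization",
}

def close_under_ancestors(predicted_types):
    closed = set(predicted_types)
    for t in list(predicted_types):
        while t in PARENT:
            t = PARENT[t]
            closed.add(t)
    return closed

print(sorted(close_under_ancestors({"/person/artist/singer"})))
# ['/person', '/person/artist', '/person/artist/singer']
```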
1 code implementation • 3 Nov 2018 • Sharon Levy, Wenhan Xiong, Elizabeth Belding, William Yang Wang
We propose SafeRoute, a novel solution to the problem of navigating cities and avoiding street harassment and crime.
1 code implementation • EMNLP 2018 • Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
Knowledge graphs (KGs) are the key components of various natural language processing applications.
3 code implementations • 16 Jun 2018 • Wenhan Xiong, Xiaoxiao Guo, Mo Yu, Shiyu Chang, Bo-Wen Zhou, William Yang Wang
We investigate the task of learning to follow natural language instructions by jointly reasoning with visual observations and language inputs.
1 code implementation • ECCV 2018 • Xin Wang, Wenhan Xiong, Hongmin Wang, William Yang Wang
In this paper, we take a radical approach to bridge the gap between synthetic studies and real-world practices: we propose a novel, planned-ahead hybrid reinforcement learning model that combines model-free and model-based reinforcement learning to solve a real-world vision-language navigation task.
no code implementations • NAACL 2018 • Wenhu Chen, Wenhan Xiong, Xifeng Yan, William Wang
Inferring missing links in knowledge graphs (KGs) has attracted a lot of attention from the research community.
2 code implementations • EMNLP 2017 • Wenhan Xiong, Thien Hoang, William Yang Wang
We study the problem of learning to reason in large scale knowledge graphs (KGs).
Ranked #1 on Link Prediction on NELL-995 (Mean AP metric)
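A toy tabular REINFORCE version of policy-guided path finding conveys the flavor of this line of work; the tiny KG, policy parameterization, and reward below are all illustrative:

```python
# Toy REINFORCE sketch of learning to find reasoning paths in a KG.
import numpy as np

KG = {  # (head, relation) -> tail
    ("alice", "works_at"): "acme",
    ("alice", "friend_of"): "bob",
    ("acme", "located_in"): "paris",
    ("bob", "lives_in"): "rome",
}
RELATIONS = ["works_at", "friend_of", "located_in", "lives_in"]
theta = {}  # per-entity logits over relations (tabular policy)

def policy(entity):
    logits = theta.setdefault(entity, np.zeros(len(RELATIONS)))
    p = np.exp(logits - logits.max())
    return p / p.sum()

def rollout(start, target, max_steps=3):
    entity, trajectory = start, []
    for _ in range(max_steps):
        a = np.random.choice(len(RELATIONS), p=policy(entity))
        trajectory.append((entity, a))
        entity = KG.get((entity, RELATIONS[a]), entity)  # stay put on invalid moves
        if entity == target:
            return trajectory, 1.0                       # reward for reaching target
    return trajectory, 0.0

lr = 0.5
for _ in range(200):                                     # REINFORCE updates
    traj, reward = rollout("alice", "paris")
    for entity, a in traj:
        grad = -policy(entity)
        grad[a] += 1.0                                   # grad of log softmax
        theta[entity] += lr * reward * grad

# Probability of the rewarded first-hop relation rises toward 1.0:
print(round(policy("alice")[RELATIONS.index("works_at")], 2))
```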