no code implementations • ACL (RepL4NLP) 2021 • Kevin Huang, Peng Qi, Guangtao Wang, Tengyu Ma, Jing Huang
In this paper, we propose E2GRE (Entity and Evidence Guided Relation Extraction), a novel framework that jointly extracts relations and the underlying evidence sentences by using a large pretrained language model (LM) as the input encoder.
1 code implementation • 21 Sep 2023 • Beizhe Hu, Qiang Sheng, Juan Cao, Yuhui Shi, Yang Li, Danding Wang, Peng Qi
To instantiate this proposal, we design an adaptive rationale guidance network for fake news detection (ARG), in which SLMs selectively acquire insights on news analysis from the LLMs' rationales.
no code implementations • 6 Mar 2023 • Kaspar Althoefer, Yonggen Ling, Wanlin Li, Xinyuan Qian, Wang Wei Lee, Peng Qi
The human tactile system is composed of various types of mechanoreceptors, each able to perceive and process distinct information such as force, pressure, texture, etc.
2 code implementations • 7 Feb 2023 • Yuyan Bu, Qiang Sheng, Juan Cao, Peng Qi, Danding Wang, Jintao Li
With information consumption via online video streaming becoming increasingly popular, misinformation videos pose a new threat to the health of the online information ecosystem.
1 code implementation • 19 Dec 2022 • Kaiser Sun, Peng Qi, Yuhao Zhang, Lan Liu, William Yang Wang, Zhiheng Huang
We show that, with consistent tokenization, the model performs better on both in-domain and out-of-domain datasets, with a notable average gain of +1.7 F2 when a BART model is trained on SQuAD and evaluated on 8 QA datasets.
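The tokenization-consistency issue can be illustrated with a toy BPE-style tokenizer. The vocabulary and GPT-2-style space marking ("Ġ" encodes a preceding space) below are hypothetical and not the paper's actual setup; they only show why an answer string tokenized in isolation can differ from its tokens inside the context, and how tokenizing it with its in-context leading space restores consistency.

```python
# Toy greedy longest-match tokenizer; "Ġ" marks a preceding space.
VOCAB = ["Ġwashington", "george", "Ġwas", "Ġpresident", "wash", "ington"]

def tokenize(text):
    """Greedy longest-match over the toy vocab; spaces become 'Ġ' prefixes."""
    text = text.replace(" ", "Ġ")
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:                       # no vocab entry: fall back to a character
            tokens.append(text[i])
            i += 1
    return tokens

answer = "washington"
# Inconsistent: the answer tokenized alone loses its leading-space mark.
print(tokenize(answer))            # ['wash', 'ington']
# Consistent: tokenize it with the space it carries inside the context.
print(tokenize(" " + answer))      # ['Ġwashington']
```

The same span thus maps to different token sequences depending on how it is tokenized, which is why target-side tokenization should mirror the context's.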
no code implementations • 13 Oct 2022 • Hendrik Schuff, Heike Adel, Peng Qi, Ngoc Thang Vu
This approach assumes that explanations which reach higher proxy scores will also provide a greater benefit to human users.
1 code implementation • 12 Oct 2022 • Xiyang Hu, Xinchi Chen, Peng Qi, Deguang Kong, Kunlun Liu, William Yang Wang, Zhiheng Huang
Multilingual information retrieval (IR) is challenging since annotated training data is costly to obtain in many languages.
no code implementations • 3 Aug 2022 • Peng Qi, Guangtao Wang, Jing Huang
Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output.
1 code implementation • SIGDIAL (ACL) 2022 • Ethan A. Chi, Ashwin Paranjape, Abigail See, Caleb Chiam, Trenton Chang, Kathleen Kenealy, Swee Kiat Lim, Amelia Hardy, Chetanya Rastogi, Haojun Li, Alexander Iyabor, Yutong He, Hari Sowrirajan, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Jillian Tang, Avanika Narayan, Giovanni Campagna, Christopher D. Manning
We present Chirpy Cardinal, an open-domain social chatbot.
1 code implementation • 17 Mar 2022 • Guang Yang, Juan Cao, Qiang Sheng, Peng Qi, Xirong Li, Jintao Li
However, these methods have two limitations: 1) they neglect other important elements like scenes, textures, and objects beyond the capacity of pretrained object detectors; 2) the correlation among objects is fixed, but a fixed correlation is not appropriate for all images.
no code implementations • ACL 2022 • Chao Shang, Guangtao Wang, Peng Qi, Jing Huang
These questions often involve three time-related challenges that previous work fails to adequately address: 1) questions often do not specify exact timestamps of interest (e.g., "Obama" instead of 2000); 2) subtle lexical differences in time relations (e.g., "before" vs. "after"); 3) off-the-shelf temporal KG embeddings that previous work builds on ignore the temporal order of timestamps, which is crucial for answering temporal-order related questions.
Ranked #2 on Question Answering on CronQuestions
no code implementations • 4 Oct 2021 • Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, JiQuan Pei, JinFeng Yi, BoWen Zhou
In this review, we provide AI practitioners with a comprehensive guide for building trustworthy AI systems.
no code implementations • AKBC 2021 • Chao Shang, Peng Qi, Guangtao Wang, Jing Huang, Youzheng Wu, BoWen Zhou
Understanding the temporal relations among events in text is a critical aspect of reading comprehension, which can be evaluated in the form of temporal question answering (TQA).
no code implementations • 27 May 2021 • Xu Cao, Zijie Chen, Bolin Lai, Yuxuan Wang, Yu Chen, Zhengqing Cao, Zhilin Yang, Nanyang Ye, Junbo Zhao, Xiao-Yun Zhou, Peng Qi
For the automation, we focus on the positioning part and propose a Dual-In-Dual-Out network based on two-step learning and two-task learning, which can achieve fully automatic regression of the suitable puncture area and angle from near-infrared (NIR) images.
no code implementations • 27 May 2021 • Yu Chen, Yuxuan Wang, Bolin Lai, Zijie Chen, Xu Cao, Nanyang Ye, Zhongyuan Ren, Junbo Zhao, Xiao-Yun Zhou, Peng Qi
In modern medical care, venipuncture is an indispensable procedure for both diagnosis and treatment.
no code implementations • 13 May 2021 • Peng Qi, Jing Huang, Youzheng Wu, Xiaodong He, BoWen Zhou
Conversational artificial intelligence (ConvAI) systems have attracted much academic and commercial attention recently, making significant progress on both fronts.
no code implementations • NAACL 2021 • Xiaochen Hou, Peng Qi, Guangtao Wang, Rex Ying, Jing Huang, Xiaodong He, BoWen Zhou
Recent work on aspect-level sentiment classification has demonstrated the efficacy of incorporating syntactic structures such as dependency trees with graph neural networks (GNNs), but these approaches are usually vulnerable to parsing errors.
1 code implementation • EMNLP 2021 • Peng Qi, Haejun Lee, Oghenetegiri "TG" Sido, Christopher D. Manning
We develop a unified system to answer open-domain questions directly from text, even when they require a varying number of retrieval steps.
Ranked #10 on Question Answering on HotpotQA
no code implementations • 27 Aug 2020 • Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Christopher D. Manning
At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0, a median conversation duration of 2 minutes 16 seconds, and a 90th percentile duration of over 12 minutes.
1 code implementation • EACL 2021 • Devendra Singh Sachan, Yuhao Zhang, Peng Qi, William Hamilton
Our empirical analysis demonstrates that these syntax-infused transformers obtain state-of-the-art results on SRL and relation extraction tasks.
5 code implementations • 29 Jul 2020 • Yuhao Zhang, Yuhui Zhang, Peng Qi, Christopher D. Manning, Curtis P. Langlotz
We introduce biomedical and clinical English model packages for the Stanza Python NLP library.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Peng Qi, Yuhao Zhang, Christopher D. Manning
We investigate the problem of generating informative questions in information-asymmetric conversations.
5 code implementations • ACL 2020 • Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, Christopher D. Manning
We introduce Stanza, an open-source Python natural language processing toolkit supporting 66 human languages.
1 code implementation • IJCNLP 2019 • Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, Christopher D. Manning
It is challenging for current one-step retrieve-and-read question answering (QA) systems to answer questions like "Which novel by the author of 'Armada' will be adapted as a feature film by Steven Spielberg?"
Ranked #51 on Question Answering on HotpotQA
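The multi-hop question above ("Which novel by the author of 'Armada'…") requires retrieving one document, extracting a bridge entity, and retrieving again. The following is a minimal sketch of that iterative retrieve-and-reformulate loop, NOT the paper's actual model: the toy corpus, word-overlap retriever, and capitalized-word query-reformulation heuristic are all hypothetical stand-ins.

```python
def retrieve(query, corpus, exclude=()):
    """Return the not-yet-retrieved document with the highest word overlap."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max((d for d in corpus if d not in exclude), key=overlap)

def next_query(question, doc):
    """Stand-in reformulation: append unseen capitalized words (entities)."""
    entities = [w for w in doc.split()
                if w[0].isupper() and w.lower() not in question.lower()]
    return question + " " + " ".join(entities)

corpus = [
    "Armada is a novel by Ernest Cline",
    "Ernest Cline also wrote Ready Player One adapted by Steven Spielberg",
]
question = "Which novel by the author of Armada"

hop1 = retrieve(question, corpus)                      # finds the Armada page
query2 = next_query(question, hop1)                    # adds "Ernest Cline"
hop2 = retrieve(query2, corpus, exclude=[hop1])        # finds the author page
```

Each hop narrows the search using evidence found in the previous hop, which is the behavior one-step retrieve-and-read systems cannot reproduce.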
no code implementations • 13 Aug 2019 • Peng Qi, Juan Cao, Tianyun Yang, Junbo Guo, Jintao Li
In the real world, fake-news images may have significantly different characteristics from real-news images at both physical and semantic levels, which can be clearly reflected in the frequency and pixel domain, respectively.
1 code implementation • CONLL 2018 • Peng Qi, Timothy Dozat, Yuhao Zhang, Christopher D. Manning
This paper describes Stanford's system at the CoNLL 2018 UD Shared Task.
Ranked #4 on Dependency Parsing on Universal Dependencies
1 code implementation • EMNLP 2018 • Yuhao Zhang, Peng Qi, Christopher D. Manning
Dependency trees help relation extraction models capture long-range relations between words.
Ranked #6 on Relation Extraction on Re-TACRED
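The way dependency trees expose long-range relations is commonly exploited by pruning the tree to the path connecting the two entity tokens. Below is a minimal sketch of that pruning step under assumed inputs (the toy sentence and hand-made `head` array are illustrative; this is not the paper's model, which learns over the pruned tree with graph convolutions).

```python
from collections import deque

def shortest_path(head, a, b):
    """BFS over the undirected tree defined by head[] from token a to b."""
    adj = {i: set() for i in range(len(head))}
    for i, h in enumerate(head):
        if h >= 0:                  # -1 marks the root
            adj[i].add(h)
            adj[h].add(i)
    prev = {a: None}
    q = deque([a])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], b
    while node is not None:         # walk back from b to a
        path.append(node)
        node = prev[node]
    return path[::-1]

# "He was born in small-town Ohio"; head[i] is the head of token i.
words = ["He", "was", "born", "in", "small-town", "Ohio"]
head  = [2, 2, -1, 2, 5, 3]
print(shortest_path(head, 0, 5))    # [0, 2, 3, 5]
```

The pruned path keeps "He born in Ohio" and drops tokens like "was" and "small-town" that are irrelevant to the relation between the two entities.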
4 code implementations • EMNLP 2018 • Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, Christopher D. Manning
Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers.
Ranked #36 on Question Answering on HotpotQA
1 code implementation • ACL 2018 • Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky
We know very little about how neural language models (LMs) use prior linguistic context.
no code implementations • CONLL 2017 • Timothy Dozat, Peng Qi, Christopher D. Manning
This paper describes the neural dependency parser submitted by Stanford to the CoNLL 2017 Shared Task on parsing Universal Dependencies.
1 code implementation • ACL 2017 • Peng Qi, Christopher D. Manning
Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments.
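The shift and reduce operations mentioned above can be sketched with the standard arc-standard transition system, which is the baseline such parsers build on (not the paper's proposed transition system); the oracle transition sequence below is hand-written for one toy sentence.

```python
# Arc-standard transitions: SHIFT moves the next buffer word onto the stack;
# LEFT-ARC / RIGHT-ARC (the "reduce" steps) attach the top two stack items.

def parse(words, transitions):
    """Apply a transition sequence; return {dependent_index: head_index}."""
    buffer = list(range(len(words)))   # word indices, left to right
    stack, arcs = [], {}
    for t in transitions:
        if t == "SHIFT":
            stack.append(buffer.pop(0))
        elif t == "LEFT-ARC":          # second-from-top takes top as head
            dep = stack.pop(-2)
            arcs[dep] = stack[-1]
        elif t == "RIGHT-ARC":         # top takes second-from-top as head
            dep = stack.pop()
            arcs[dep] = stack[-1]
    return arcs

words = ["I", "saw", "her"]
# "I" <- "saw" -> "her": shift I, shift saw, LEFT-ARC, shift her, RIGHT-ARC
arcs = parse(words, ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"])
print(arcs)   # {0: 1, 2: 1}: both "I" and "her" attach to "saw"
```

Even this three-word sentence needs five transitions to build two attachments, which illustrates why long sequences of local operations are required for certain attachments.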
1 code implementation • 30 Jun 2014 • Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T. Lengerich, Daniel Jurafsky, Andrew Y. Ng
We compare standard DNNs to convolutional networks, and present the first experiments using locally-connected, untied neural networks for acoustic modeling.
Ranked #11 on Speech Recognition on swb_hub_500 WER fullSWBCH