Search Results for author: Peng Qi

Found 32 papers, 18 papers with code

Entity and Evidence Guided Document-Level Relation Extraction

no code implementations ACL (RepL4NLP) 2021 Kevin Huang, Peng Qi, Guangtao Wang, Tengyu Ma, Jing Huang

In this paper, we propose a novel framework E2GRE (Entity and Evidence Guided Relation Extraction) that jointly extracts relations and the underlying evidence sentences by using a large pretrained language model (LM) as the input encoder.

Document-level Relation Extraction Language Modelling

Bad Actor, Good Advisor: Exploring the Role of Large Language Models in Fake News Detection

1 code implementation 21 Sep 2023 Beizhe Hu, Qiang Sheng, Juan Cao, Yuhui Shi, Yang Li, Danding Wang, Peng Qi

To instantiate this proposal, we design an adaptive rationale guidance network for fake news detection (ARG), in which SLMs selectively acquire insights on news analysis from the LLMs' rationales.

Fake News Detection

A Miniaturised Camera-based Multi-Modal Tactile Sensor

no code implementations 6 Mar 2023 Kaspar Althoefer, Yonggen Ling, Wanlin Li, Xinyuan Qian, Wang Wei Lee, Peng Qi

The human tactile system is composed of various types of mechanoreceptors, each able to perceive and process distinct information such as force, pressure, texture, etc.

Combating Online Misinformation Videos: Characterization, Detection, and Future Directions

2 code implementations 7 Feb 2023 Yuyan Bu, Qiang Sheng, Juan Cao, Peng Qi, Danding Wang, Jintao Li

With information consumption via online video streaming becoming increasingly popular, misinformation videos pose a new threat to the health of the online information ecosystem.

Misinformation Recommendation Systems +1

Tokenization Consistency Matters for Generative Models on Extractive NLP Tasks

1 code implementation 19 Dec 2022 Kaiser Sun, Peng Qi, Yuhao Zhang, Lan Liu, William Yang Wang, Zhiheng Huang

We show that, with consistent tokenization, the model performs better on both in-domain and out-of-domain datasets, with a notable average gain of +1.7 F2 when a BART model is trained on SQuAD and evaluated on 8 QA datasets.

Extractive Question-Answering Question Answering
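The inconsistency this paper targets can be illustrated with a toy tokenizer that marks word-initial spaces, in the spirit of GPT-2-style byte-level BPE (a hypothetical sketch for illustration only, not the paper's code): the same answer string maps to different tokens depending on whether it is tokenized standalone or inside its context.

```python
# Toy space-marking tokenizer (hypothetical; real subword
# vocabularies behave analogously).
def toy_tokenize(text):
    words = text.split(" ")
    # Every word after the first carries a leading-space marker "_".
    return [words[0]] + ["_" + w for w in words[1:]]

context = "the novel Armada will be adapted"
answer = "Armada"

standalone = toy_tokenize(answer)   # ['Armada']
in_context = toy_tokenize(context)  # contains '_Armada', not 'Armada'

# The extracted answer tokenizes to 'Armada' on its own but appears as
# '_Armada' inside the context, so a generative extractive model trained
# on standalone-tokenized targets never sees the token it must copy.
print(standalone)
print("_Armada" in in_context)
```

Tokenizing the target span in its original context, rather than in isolation, removes this train-time mismatch.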

Challenges in Explanation Quality Evaluation

no code implementations 13 Oct 2022 Hendrik Schuff, Heike Adel, Peng Qi, Ngoc Thang Vu

This approach assumes that explanations which reach higher proxy scores will also provide a greater benefit to human users.

Question Answering

SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences

no code implementations 3 Aug 2022 Peng Qi, Guangtao Wang, Jing Huang

Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output.

counterfactual Data Augmentation
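The core idea, as the abstract describes it, can be sketched in a few lines: randomly drop sub-spans of a long input while preserving the spans that actually carry the supervision signal, yielding counterfactual variants of the sequence. Function and parameter names below are hypothetical, not the paper's.

```python
import random

def span_drop(tokens, keep_spans, p=0.1, seed=0):
    """Drop each token with probability p, except tokens inside keep_spans
    (half-open [start, end) index pairs carrying the supervision signal).
    Hypothetical sketch of span-level counterfactual augmentation."""
    rng = random.Random(seed)
    protected = {i for start, end in keep_spans for i in range(start, end)}
    return [t for i, t in enumerate(tokens)
            if i in protected or rng.random() >= p]

tokens = "the author of Armada also wrote Ready Player One".split()
# Protect the span that actually supports the prediction (index 3).
augmented = span_drop(tokens, keep_spans=[(3, 4)], p=0.3)
print(tokens[3] in augmented)  # the supporting span always survives
```

Training on such augmented sequences encourages the model to rely on the supporting spans rather than on incidental context.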

DRAG: Dynamic Region-Aware GCN for Privacy-Leaking Image Detection

1 code implementation 17 Mar 2022 Guang Yang, Juan Cao, Qiang Sheng, Peng Qi, Xirong Li, Jintao Li

However, these methods have two limitations: 1) they neglect other important elements like scenes, textures, and objects beyond the capacity of pretrained object detectors; 2) the correlation among objects is fixed, but a fixed correlation is not appropriate for all the images.

Improving Time Sensitivity for Question Answering over Temporal Knowledge Graphs

no code implementations ACL 2022 Chao Shang, Guangtao Wang, Peng Qi, Jing Huang

These questions often involve three time-related challenges that previous work fails to adequately address: 1) questions often do not specify exact timestamps of interest (e.g., "Obama" instead of 2000); 2) subtle lexical differences in time relations (e.g., "before" vs "after"); 3) off-the-shelf temporal KG embeddings that previous work builds on ignore the temporal order of timestamps, which is crucial for answering temporal-order related questions.

Knowledge Graphs Question Answering
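Why timestamp order matters can be seen in a small sketch: answering a "before"/"after" question requires comparing timestamps, something embeddings that treat timestamps as unordered IDs cannot express. All names below are illustrative, not the paper's method.

```python
def filter_by_time(relation, anchor_year, candidates):
    """Keep candidate answers whose timestamp satisfies the temporal
    relation with respect to the anchor event's year (illustrative only;
    candidates maps entity name -> year)."""
    if relation == "before":
        return [e for e, year in candidates.items() if year < anchor_year]
    if relation == "after":
        return [e for e, year in candidates.items() if year > anchor_year]
    raise ValueError(f"unsupported relation: {relation}")

candidates = {"event_a": 1997, "event_b": 2003, "event_c": 2009}
print(filter_by_time("before", 2000, candidates))  # ['event_a']
```

A KG embedding that preserves this ordering in its timestamp representations can answer such questions; one that assigns timestamps arbitrary, unordered vectors cannot.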

Trustworthy AI: From Principles to Practices

no code implementations 4 Oct 2021 Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, JiQuan Pei, JinFeng Yi, BoWen Zhou

In this review, we provide AI practitioners with a comprehensive guide for building trustworthy AI systems.


Open Temporal Relation Extraction for Question Answering

no code implementations AKBC 2021 Chao Shang, Peng Qi, Guangtao Wang, Jing Huang, Youzheng Wu, BoWen Zhou

Understanding the temporal relations among events in text is a critical aspect of reading comprehension, which can be evaluated in the form of temporal question answering (TQA).

Question Answering Reading Comprehension +1

VeniBot: Towards Autonomous Venipuncture with Automatic Puncture Area and Angle Regression from NIR Images

no code implementations 27 May 2021 Xu Cao, Zijie Chen, Bolin Lai, Yuxuan Wang, Yu Chen, Zhengqing Cao, Zhilin Yang, Nanyang Ye, Junbo Zhao, Xiao-Yun Zhou, Peng Qi

For the automation, we focus on the positioning part and propose a Dual-In-Dual-Out network based on two-step learning and two-task learning, which can achieve fully automatic regression of the suitable puncture area and angle from near-infrared (NIR) images.

Navigate regression

Conversational AI Systems for Social Good: Opportunities and Challenges

no code implementations 13 May 2021 Peng Qi, Jing Huang, Youzheng Wu, Xiaodong He, BoWen Zhou

Conversational artificial intelligence (ConvAI) systems have attracted much academic and commercial attention recently, making significant progress on both fronts.

Graph Ensemble Learning over Multiple Dependency Trees for Aspect-level Sentiment Classification

no code implementations NAACL 2021 Xiaochen Hou, Peng Qi, Guangtao Wang, Rex Ying, Jing Huang, Xiaodong He, BoWen Zhou

Recent work on aspect-level sentiment classification has demonstrated the efficacy of incorporating syntactic structures such as dependency trees with graph neural networks (GNN), but these approaches are usually vulnerable to parsing errors.

Ensemble Learning General Classification +2

Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations

no code implementations 27 Aug 2020 Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Christopher D. Manning

At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0, a median conversation duration of 2 minutes 16 seconds, and a 90th percentile duration of over 12 minutes.

World Knowledge

Do Syntax Trees Help Pre-trained Transformers Extract Information?

1 code implementation EACL 2021 Devendra Singh Sachan, Yuhao Zhang, Peng Qi, William Hamilton

Our empirical analysis demonstrates that these syntax-infused transformers obtain state-of-the-art results on SRL and relation extraction tasks.

named-entity-recognition Named Entity Recognition +3

Answering Complex Open-domain Questions Through Iterative Query Generation

1 code implementation IJCNLP 2019 Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, Christopher D. Manning

It is challenging for current one-step retrieve-and-read question answering (QA) systems to answer questions like "Which novel by the author of 'Armada' will be adapted as a feature film by Steven Spielberg?"

Information Retrieval Question Answering +1

Exploiting Multi-domain Visual Information for Fake News Detection

no code implementations 13 Aug 2019 Peng Qi, Juan Cao, Tianyun Yang, Junbo Guo, Jintao Li

In the real world, fake-news images may have significantly different characteristics from real-news images at both physical and semantic levels, which can be clearly reflected in the frequency and pixel domain, respectively.

Fake News Detection

Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context

1 code implementation ACL 2018 Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky

We know very little about how neural language models (LMs) use prior linguistic context.

Stanford's Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task

no code implementations CONLL 2017 Timothy Dozat, Peng Qi, Christopher D. Manning

This paper describes the neural dependency parser submitted by Stanford to the CoNLL 2017 Shared Task on parsing Universal Dependencies.

Dependency Parsing

Arc-swift: A Novel Transition System for Dependency Parsing

1 code implementation ACL 2017 Peng Qi, Christopher D. Manning

Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments.

Dependency Parsing

Building DNN Acoustic Models for Large Vocabulary Speech Recognition

1 code implementation 30 Jun 2014 Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T. Lengerich, Daniel Jurafsky, Andrew Y. Ng

We compare standard DNNs to convolutional networks, and present the first experiments using locally-connected, untied neural networks for acoustic modeling.

speech-recognition Speech Recognition
