Search Results for author: Yuanzhe Zhang

Found 19 papers, 7 papers with code

Biomedical Concept Normalization by Leveraging Hypernyms

1 code implementation EMNLP 2021 Cheng Yan, Yuanzhe Zhang, Kang Liu, Jun Zhao, Yafei Shi, Shengping Liu

Biomedical Concept Normalization (BCN) is widely used in biomedical text processing as a fundamental module.

CMQA: A Dataset of Conditional Question Answering with Multiple-Span Answers

1 code implementation COLING 2022 Yiming Ju, Weikang Wang, Yuanzhe Zhang, Suncong Zheng, Kang Liu, Jun Zhao

To bridge the gap, we propose a new task: conditional question answering with hierarchical multi-span answers, where both the hierarchical relations and the conditions need to be extracted.

Question Answering

Scene Restoring for Narrative Machine Reading Comprehension

no code implementations EMNLP 2020 Zhixing Tian, Yuanzhe Zhang, Kang Liu, Jun Zhao, Yantao Jia, Zhicheng Sheng

Inspired by this behavior of humans, we propose a method to let the machine imagine a scene during reading narrative for better comprehension.

Cloze Test, Machine Reading Comprehension +1

Imagination Augmented Generation: Learning to Imagine Richer Context for Question Answering over Large Language Models

1 code implementation 22 Mar 2024 Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Kang Liu, Shengping Liu, Jun Zhao

Retrieval-Augmented-Generation and Generation-Augmented-Generation have been proposed to enhance the knowledge required for question answering over Large Language Models (LLMs).

Open-Domain Question Answering

Generative Calibration for In-context Learning

1 code implementation 16 Oct 2023 Zhongtao Jiang, Yuanzhe Zhang, Cao Liu, Jun Zhao, Kang Liu

In this paper, we for the first time theoretically and empirically identify that such a paradox is mainly due to the label shift of the in-context model to the data distribution, in which LLMs shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$.

In-Context Learning, text-classification +1

MenatQA: A New Dataset for Testing the Temporal Comprehension and Reasoning Abilities of Large Language Models

1 code implementation 8 Oct 2023 Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, Kang Liu

As a result, it is natural for people to believe that LLMs have also mastered abilities such as time understanding and reasoning.

counterfactual

Interpreting Sentiment Composition with Latent Semantic Tree

1 code implementation 31 Aug 2023 Zhongtao Jiang, Yuanzhe Zhang, Cao Liu, Jiansong Chen, Jun Zhao, Kang Liu

As the key to sentiment analysis, sentiment composition considers the classification of a constituent via the classifications of its contained sub-constituents and the rules operating on them.

Classification, Domain Adaptation +1

Unsupervised Text Style Transfer with Deep Generative Models

no code implementations 31 Aug 2023 Zhongtao Jiang, Yuanzhe Zhang, Yiming Ju, Kang Liu

We present a general framework for unsupervised text style transfer with deep generative models.

Sentence, Style Transfer +2

Multi-View Graph Representation Learning for Answering Hybrid Numerical Reasoning Question

1 code implementation 5 May 2023 Yifan Wei, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, Kang Liu

Hybrid question answering (HybridQA) over the financial report contains both textual and tabular data, and requires the model to select the appropriate evidence for the numerical reasoning task.

Graph Representation Learning, Machine Reading Comprehension +1

Generating Hierarchical Explanations on Text Classification Without Connecting Rules

no code implementations24 Oct 2022 Yiming Ju, Yuanzhe Zhang, Kang Liu, Jun Zhao

The opaqueness of deep NLP models has motivated the development of methods for interpreting how deep models predict.

Clustering, text-classification +1

Logic Traps in Evaluating Attribution Scores

no code implementations ACL 2022 Yiming Ju, Yuanzhe Zhang, Zhao Yang, Zhongtao Jiang, Kang Liu, Jun Zhao

Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to demonstrate their arguments.

Alignment Rationale for Natural Language Inference

no code implementations ACL 2021 Zhongtao Jiang, Yuanzhe Zhang, Zhao Yang, Jun Zhao, Kang Liu

Deep learning models have achieved great success on the task of Natural Language Inference (NLI), though few attempts have been made to explain their behavior.

feature selection, Natural Language Inference

Question Answering over Knowledge Base with Neural Attention Combining Global Knowledge Information

no code implementations 3 Jun 2016 Yuanzhe Zhang, Kang Liu, Shizhu He, Guoliang Ji, Zhanyi Liu, Hua Wu, Jun Zhao

With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important.

Question Answering
