Search Results for author: Yitian Li

Found 12 papers, 0 papers with code

To What Extent Do Natural Language Understanding Datasets Correlate to Logical Reasoning? A Method for Diagnosing Logical Reasoning

no code implementations · COLING 2022 · Yitian Li, Jidong Tian, Wenqing Chen, Caoyun Fan, Hao He, Yaohui Jin

In this paper, we propose a systematic method to diagnose the correlations between an NLU dataset and a specific skill, and then take a fundamental reasoning skill, logical reasoning, as an example for analysis.

Tasks: Logical Reasoning, Machine Reading Comprehension, +2

Diagnosing the First-Order Logical Reasoning Ability Through LogicNLI

no code implementations · EMNLP 2021 · Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, Yaohui Jin

Recently, language models (LMs) have achieved significant performance on many NLU tasks, which has spurred widespread interest in their possible applications in scientific and social areas.

Tasks: Logical Reasoning

Hypothesis Testing Prompting Improves Deductive Reasoning in Large Language Models

no code implementations · 9 May 2024 · Yitian Li, Jidong Tian, Hao He, Yaohui Jin

Combining different forms of prompts with pre-trained large language models has yielded remarkable results on reasoning tasks (e.g., Chain-of-Thought prompting).

Tasks: Fact Verification
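As a generic illustration of the Chain-of-Thought prompting mentioned above (a hedged sketch, not the prompts used in this or any listed paper), a one-shot CoT prompt prepends a worked demonstration whose answer spells out intermediate reasoning steps before asking the new question:

```python
# Minimal, generic Chain-of-Thought prompt builder. The demonstration
# question and its step-by-step answer are illustrative placeholders,
# not taken from the papers listed here.

def build_cot_prompt(question: str) -> str:
    """Assemble a one-shot Chain-of-Thought prompt for an LLM."""
    demonstration = (
        "Q: Roger has 5 balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    # The reasoning steps in the demonstration nudge the model to emit
    # its own intermediate steps before the final answer.
    return demonstration + f"Q: {question}\nA:"

print(build_cot_prompt("If a train travels 60 km in 1.5 hours, what is its speed?"))
```

The same skeleton extends to few-shot CoT by concatenating several demonstrations before the target question.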

Logical Negation Augmenting and Debiasing for Prompt-based Methods

no code implementations · 8 May 2024 · Yitian Li, Jidong Tian, Hao He, Yaohui Jin

To solve the problem, we propose a simple but effective method, Negation Augmenting and Negation Debiasing (NAND), which introduces negative propositions to prompt-based methods without updating parameters.

Tasks: Logical Reasoning, Negation

Comparable Demonstrations are Important in In-Context Learning: A Novel Perspective on Demonstration Selection

no code implementations · 12 Dec 2023 · Caoyun Fan, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

In-Context Learning (ICL) is an important paradigm for adapting Large Language Models (LLMs) to downstream tasks through a few demonstrations.

Tasks: In-Context Learning

Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding

no code implementations · 18 Oct 2023 · Caoyun Fan, Jidong Tian, Yitian Li, Wenqing Chen, Hao He, Yaohui Jin

From the perspective of CoT, CoTT's two-step framework enables MLMs to implement task decomposition; CoTT's prompt tuning allows intermediate steps to be used in natural language form.

Tasks: Natural Language Understanding, Relation Extraction

Accurate Use of Label Dependency in Multi-Label Text Classification Through the Lens of Causality

no code implementations · 11 Oct 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

In this study, we attribute the bias to the model's misuse of label dependency, i.e., the model tends to exploit the correlation shortcut in label dependency rather than fusing text information and label dependency for prediction.

Tasks: Attribute, Causal Inference, +4

Unlock the Potential of Counterfactually-Augmented Data in Out-Of-Distribution Generalization

no code implementations · 10 Oct 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

Counterfactually-Augmented Data (CAD) -- minimal editing of sentences to flip the corresponding labels -- has the potential to improve the Out-Of-Distribution (OOD) generalization capability of language models, as CAD induces language models to exploit domain-independent causal features and exclude spurious correlations.

Tasks: Attribute, Natural Language Inference, +3

MaxGNR: A Dynamic Weight Strategy via Maximizing Gradient-to-Noise Ratio for Multi-Task Learning

no code implementations · 18 Feb 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

A series of studies point out that too much gradient noise would lead to performance degradation in STL; in the MTL scenario, however, Inter-Task Gradient Noise (ITGN) is an additional source of gradient noise for each task, which can also affect the optimization process.

Tasks: Multi-Task Learning

Improving the Out-Of-Distribution Generalization Capability of Language Models: Counterfactually-Augmented Data is not Enough

no code implementations · 18 Feb 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

Counterfactually-Augmented Data (CAD) has the potential to improve language models' Out-Of-Distribution (OOD) generalization capability, as CAD induces language models to exploit causal features and exclude spurious correlations.

Tasks: Attribute, Natural Language Inference, +2

De-Confounded Variational Encoder-Decoder for Logical Table-to-Text Generation

no code implementations · ACL 2021 · Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

The task remains challenging, as deep learning models often generate linguistically fluent but logically inconsistent text.

Tasks: Decoder, Sentence, +2

Angular Triplet Loss-based Camera Network for ReID

no code implementations · 12 May 2020 · Yitian Li, Ruini Xue, Mengmeng Zhu, Jing Xu, Zenglin Xu

Many complex network structures have been proposed recently, many of which concentrate on multi-branch features to achieve high performance.

Tasks: Person Re-Identification, Retrieval
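The paper's exact loss is not specified in this listing; as a generic, hedged sketch of the angular-triplet-loss idea in its title, a triplet loss can be computed on the angular (arccos-of-cosine) distance between L2-normalized embeddings, pulling the anchor-positive angle below the anchor-negative angle by a margin:

```python
# Generic angular triplet loss sketch (NumPy) -- an illustration of the
# concept, not the implementation from the listed paper. The margin value
# is an arbitrary assumption.
import numpy as np

def angular_triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss on angular distances (in radians) between embeddings."""
    def angle(u, v):
        # Angle between two vectors after L2 normalization.
        u = u / np.linalg.norm(u)
        v = v / np.linalg.norm(v)
        cos_sim = np.clip(np.dot(u, v), -1.0, 1.0)
        return np.arccos(cos_sim)

    # Loss is zero once the positive is closer (in angle) than the
    # negative by at least `margin`.
    return max(0.0, angle(anchor, positive) - angle(anchor, negative) + margin)

a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])   # nearly aligned with the anchor
n = np.array([0.0, 1.0])   # orthogonal to the anchor
print(angular_triplet_loss(a, p, n))
```

Operating on angles rather than raw Euclidean distances makes the loss invariant to embedding magnitude, which is a common motivation for angular formulations in ReID retrieval.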
