Search Results for author: Jidong Tian

Found 15 papers, 1 paper with code

To What Extent Do Natural Language Understanding Datasets Correlate to Logical Reasoning? A Method for Diagnosing Logical Reasoning

no code implementations · COLING 2022 · Yitian Li, Jidong Tian, Wenqing Chen, Caoyun Fan, Hao He, Yaohui Jin

In this paper, we propose a systematic method to diagnose the correlations between an NLU dataset and a specific skill, and then take a fundamental reasoning skill, logical reasoning, as an example for analysis.

Logical Reasoning · Machine Reading Comprehension · +2

Exploring Logically Dependent Multi-task Learning with Causal Inference

no code implementations · EMNLP 2020 · Wenqing Chen, Jidong Tian, Liqiang Xiao, Hao He, Yaohui Jin

In the field of causal inference, GS in our model is essentially a counterfactual reasoning process that estimates the causal effect between tasks and uses it to improve MTL.

Causal Inference · counterfactual · +2

Diagnosing the First-Order Logical Reasoning Ability Through LogicNLI

no code implementations · EMNLP 2021 · Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, Yaohui Jin

Recently, language models (LMs) have achieved significant performance on many NLU tasks, which has spurred widespread interest in their possible applications in scientific and social areas.

Logical Reasoning

Modeling Complex Mathematical Reasoning via Large Language Model based MathAgent

1 code implementation · 14 Dec 2023 · Haoran Liao, Qinyi Du, Shaohua Hu, Hao He, Yanyan Xu, Jidong Tian, Yaohui Jin

Large language models (LLMs) face challenges in solving complex mathematical problems that require comprehensive capacities to parse the statements, associate domain knowledge, perform compound logical reasoning, and integrate the intermediate rationales.

Language Modelling · Large Language Model · +3

Comparable Demonstrations are Important in In-Context Learning: A Novel Perspective on Demonstration Selection

no code implementations · 12 Dec 2023 · Caoyun Fan, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

In-Context Learning (ICL) is an important paradigm for adapting Large Language Models (LLMs) to downstream tasks through a few demonstrations.
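
A generic sketch of how such a prompt is assembled (the task, demonstrations, and formatting below are hypothetical and say nothing about the paper's comparable-demonstration selection strategy):

    # Hypothetical few-shot prompt for in-context learning on a made-up sentiment task.
    # Which demonstrations to pick (the question this paper studies) is not modeled here.
    demonstrations = [
        ("The plot was gripping from the first scene.", "positive"),
        ("The dialogue felt wooden and forced.", "negative"),
    ]
    query = "The soundtrack carried an otherwise flat story."

    prompt = "".join(f"Review: {x}\nSentiment: {y}\n\n" for x, y in demonstrations)
    prompt += f"Review: {query}\nSentiment:"
    print(prompt)  # fed to the LLM; the model's continuation is the prediction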

In-Context Learning

Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding

no code implementations · 18 Oct 2023 · Caoyun Fan, Jidong Tian, Yitian Li, Wenqing Chen, Hao He, Yaohui Jin

From the perspective of CoT, CoTT's two-step framework enables MLMs to implement task decomposition; CoTT's prompt tuning allows intermediate steps to be used in natural language form.

Natural Language Understanding · Relation Extraction

Accurate Use of Label Dependency in Multi-Label Text Classification Through the Lens of Causality

no code implementations · 11 Oct 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

In this study, we attribute the bias to the model's misuse of label dependency, i.e., the model tends to utilize the correlation shortcut in label dependency rather than fusing text information and label dependency for prediction.

Attribute · Causal Inference · +4

Unlock the Potential of Counterfactually-Augmented Data in Out-Of-Distribution Generalization

no code implementations · 10 Oct 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

Counterfactually-Augmented Data (CAD) -- minimal editing of sentences to flip the corresponding labels -- has the potential to improve the Out-Of-Distribution (OOD) generalization capability of language models, as CAD induces language models to exploit domain-independent causal features and exclude spurious correlations.
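
To make the definition concrete, here is a hypothetical CAD pair for a sentiment task (the example is invented for illustration, not taken from the paper's data): a single minimal edit flips the label while everything else stays fixed.

    # Hypothetical counterfactually-augmented pair: one word is edited
    # ("delightful" -> "tedious") and the gold label flips accordingly,
    # so a model trained on both must rely on that causal span rather than
    # on spurious surface correlations.
    original = {"text": "The film's pacing is delightful from start to finish.",
                "label": "positive"}
    counterfactual = {"text": "The film's pacing is tedious from start to finish.",
                      "label": "negative"}

    cad_training_set = [original, counterfactual]  # originals plus their edits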

Attribute · Natural Language Inference · +3

MaxGNR: A Dynamic Weight Strategy via Maximizing Gradient-to-Noise Ratio for Multi-Task Learning

no code implementations · 18 Feb 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

A series of studies points out that too much gradient noise leads to performance degradation in single-task learning (STL); in the MTL scenario, however, Inter-Task Gradient Noise (ITGN) is an additional source of gradient noise for each task, which can also affect the optimization process.
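
As a rough illustration of the gradient-to-noise idea (the weighting heuristic below is an assumption for exposition, not the paper's MaxGNR algorithm), one can estimate each task's gradient signal and noise from a few mini-batch gradients and upweight tasks whose gradients are currently cleaner:

    import numpy as np

    def gradient_to_noise_ratio(grad_samples):
        # grad_samples: (num_batches, num_params) array of per-batch gradients
        # for one task. Signal = squared norm of the mean gradient; noise =
        # mean squared deviation of the per-batch gradients from that mean.
        mean_grad = grad_samples.mean(axis=0)
        signal = np.sum(mean_grad ** 2)
        noise = np.mean(np.sum((grad_samples - mean_grad) ** 2, axis=1))
        return signal / (noise + 1e-12)

    def dynamic_task_weights(per_task_grad_samples):
        # Illustrative heuristic: give larger loss weights to tasks with a
        # higher gradient-to-noise ratio (normalized to sum to one).
        gnrs = np.array([gradient_to_noise_ratio(g) for g in per_task_grad_samples])
        return gnrs / gnrs.sum()

    # Toy usage: 3 tasks, 8 mini-batch gradient samples of 100 parameters each.
    rng = np.random.default_rng(0)
    samples = [rng.normal(loc=0.1 * (t + 1), scale=1.0, size=(8, 100)) for t in range(3)]
    print(dynamic_task_weights(samples))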

Multi-Task Learning

Improving the Out-Of-Distribution Generalization Capability of Language Models: Counterfactually-Augmented Data is not Enough

no code implementations · 18 Feb 2023 · Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

Counterfactually-Augmented Data (CAD) has the potential to improve language models' Out-Of-Distribution (OOD) generalization capability, as CAD induces language models to exploit causal features and exclude spurious correlations.

Attribute · Natural Language Inference · +2

Dependent Multi-Task Learning with Causal Intervention for Image Captioning

no code implementations · 18 May 2021 · Wenqing Chen, Jidong Tian, Caoyun Fan, Hao He, Yaohui Jin

The intermediate task would help the model better understand the visual features and thus alleviate the content inconsistency problem.

Image Captioning · Multi-agent Reinforcement Learning · +1

A Semantically Consistent and Syntactically Variational Encoder-Decoder Framework for Paraphrase Generation

no code implementations · COLING 2020 · Wenqing Chen, Jidong Tian, Liqiang Xiao, Hao He, Yaohui Jin

In this paper, we propose a semantically consistent and syntactically variational encoder-decoder framework, which uses adversarial learning to ensure that the syntactic latent variable is semantic-free.

Paraphrase Generation · Semantic Similarity · +3

Show, Attend and Translate: Unpaired Multi-Domain Image-to-Image Translation with Visual Attention

no code implementations · 19 Nov 2018 · Honglun Zhang, Wenqing Chen, Jidong Tian, Yongkun Wang, Yaohui Jin

Recently, unpaired multi-domain image-to-image translation has attracted great interest and made remarkable progress; in this setting, a label vector is used to indicate multi-domain information.

Attribute · Generative Adversarial Network · +2
