Search Results for author: Yuankai Zhang

Found 5 papers, 4 papers with code

Enhancing the Rationale-Input Alignment for Self-explaining Rationalization

no code implementations • 7 Dec 2023 • Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, Yuankai Zhang, Cheng Wang, Ruixuan Li

Rationalization empowers deep learning models with self-explaining capabilities through a cooperative game, where a generator selects a semantically consistent subset of the input as a rationale, and a subsequent predictor makes predictions based on the selected rationale.
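
For context on how this line of work trains the two players, a minimal sketch of the cooperative game follows; it is illustrative only, with arbitrary module sizes and a straight-through selection relaxation, not the authors' implementation.

```python
# Illustrative sketch of the generator/predictor cooperative game: the
# generator scores every token, a near-binary mask keeps the rationale, and
# the predictor classifies from the masked tokens only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))
        probs = torch.sigmoid(self.score(h).squeeze(-1))   # per-token keep probability
        # Straight-through relaxation: hard 0/1 mask forward, soft gradient backward.
        return (probs > 0.5).float() + probs - probs.detach()

class Predictor(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, classes)

    def forward(self, tokens, mask):
        x = self.emb(tokens) * mask.unsqueeze(-1)           # zero out unselected tokens
        _, h = self.rnn(x)
        return self.out(h[-1])

gen, pred = Generator(), Predictor()
tokens = torch.randint(0, 1000, (4, 20))
labels = torch.randint(0, 2, (4,))
mask = gen(tokens)
# Both players minimize the same objective: task loss plus a sparsity penalty
# that keeps the selected rationale short.
loss = nn.functional.cross_entropy(pred(tokens, mask), labels) + 0.1 * mask.mean()
loss.backward()
```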

D-Separation for Causal Self-Explanation

1 code implementation • NeurIPS 2023 • Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, Yuankai Zhang, Yang Qiu

Instead of attempting to rectify the issues of the MMI criterion, we propose a novel criterion to uncover the causal rationale, termed the Minimum Conditional Dependence (MCD) criterion, which is grounded on our finding that the non-causal features and the target label are d-separated by the causal rationale.
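
One way to read the MCD criterion operationally is sketched below as a hypothetical proxy (the paper's actual measurement of conditional dependence may differ): if the rationale d-separates the remaining features from the label, the predictive distribution given only the rationale should match the one given the full input, so their divergence can be penalized.

```python
# Hypothetical proxy for MCD: penalize the divergence between the predictive
# distribution from the full input and the one from the rationale alone.
import torch
import torch.nn.functional as F

def mcd_proxy_loss(logits_full, logits_rationale):
    log_p_full = F.log_softmax(logits_full.detach(), dim=-1)  # full-input view as the target
    log_p_rat = F.log_softmax(logits_rationale, dim=-1)
    return F.kl_div(log_p_rat, log_p_full, log_target=True, reduction="batchmean")

# logits_full: predictor run on the whole text; logits_rationale: the same
# predictor run on the generator-selected rationale only.
loss = mcd_proxy_loss(torch.randn(4, 2), torch.randn(4, 2))
```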

Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint

1 code implementation • 23 May 2023 • Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou

However, such a cooperative game may incur a degeneration problem, where the predictor overfits to the uninformative pieces produced by a not-yet-well-trained generator and, in turn, leads the generator to converge to a sub-optimal model that tends to select senseless pieces.
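
The paper's remedy is to decouple the two modules and train them with asymmetric learning rates (the titular Lipschitz restraint). A minimal sketch of that training-loop shape follows; the stand-in linear modules, which module is slowed down, and the 10x ratio are all placeholders, not the paper's prescribed configuration.

```python
# Sketch of decoupled training with asymmetric learning rates for the two
# players of the rationalization game.
import torch
import torch.nn as nn

gen = nn.Linear(16, 16)    # stands in for the generator network
pred = nn.Linear(16, 2)    # stands in for the predictor network
gen_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
pred_opt = torch.optim.Adam(pred.parameters(), lr=1e-4)    # deliberately asymmetric

x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))
mask = torch.sigmoid(gen(x))                               # soft selection
loss = nn.functional.cross_entropy(pred(x * mask), y)
gen_opt.zero_grad(); pred_opt.zero_grad()
loss.backward()
gen_opt.step(); pred_opt.step()
```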

MGR: Multi-generator Based Rationalization

1 code implementation • 8 May 2023 • Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, Yuankai Zhang, Yang Qiu

Rationalization employs a generator and a predictor to construct a self-explaining NLP model, in which the generator selects a subset of human-intelligible pieces of the input text and passes it to the subsequent predictor.
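
A rough sketch of the multi-generator idea with stand-in linear modules: each generator proposes its own selection and the shared predictor is trained on all of them; the simple average below stands in for whatever combination rule the paper actually uses.

```python
# Multiple generators, one predictor: average the per-generator losses.
import torch
import torch.nn as nn

generators = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])
predictor = nn.Linear(16, 2)

x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))
losses = [
    nn.functional.cross_entropy(predictor(x * torch.sigmoid(g(x))), y)
    for g in generators
]
loss = torch.stack(losses).mean()
loss.backward()
```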

FR: Folded Rationalization with a Unified Encoder

1 code implementation • 17 Sep 2022 • Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, Yuankai Zhang

Conventional works generally employ a two-phase model in which a generator selects the most important pieces, followed by a predictor that makes predictions based on the selected pieces.
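
FR instead folds the two phases onto a unified encoder. A toy sketch of that sharing follows, assuming (purely for illustration) that the same encoder is run once to score tokens for selection and again on the masked input for prediction; sizes and heads are placeholders.

```python
# One shared encoder serves both the selection and the prediction role.
import torch
import torch.nn as nn

encoder = nn.GRU(32, 64, batch_first=True)   # unified encoder shared by both roles
select_head = nn.Linear(64, 1)
predict_head = nn.Linear(64, 2)

x = torch.randn(4, 10, 32)                                  # (batch, seq, features)
h_full, _ = encoder(x)
mask = torch.sigmoid(select_head(h_full)).squeeze(-1)       # generator role
h_sel, _ = encoder(x * mask.unsqueeze(-1))                  # predictor reuses the same encoder
logits = predict_head(h_sel.mean(dim=1))
```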
