no code implementations • 5 Mar 2024 • Congzhi Zhang, Linhai Zhang, Deyu Zhou, Guoqiang Xu
Specifically, causal intervention is implemented by designing prompts without accessing the parameters or logits of LLMs. The chains of thought generated by LLMs serve as the mediator variable, and the causal effect between the input prompt and the output answer is computed through front-door adjustment to mitigate model biases.
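For intuition, the front-door adjustment referenced above can be sketched on a toy discrete model. Here X stands for the input prompt, M for the generated chain of thought (the mediator), and Y for the answer; all distributions below are made-up illustrative numbers, not values from the paper.

```python
import numpy as np

# Toy distributions (hypothetical, for illustration only).
p_x = np.array([0.6, 0.4])                # P(X)
p_m_given_x = np.array([[0.7, 0.3],       # P(M | X=0)
                        [0.2, 0.8]])      # P(M | X=1)
p_y_given_mx = np.array([                 # P(Y | M, X), shape (M, X, Y)
    [[0.9, 0.1], [0.6, 0.4]],
    [[0.3, 0.7], [0.1, 0.9]],
])

def front_door(x):
    """Front-door formula:
    P(Y | do(X=x)) = sum_m P(m | x) * sum_x' P(Y | m, x') P(x')."""
    return sum(
        p_m_given_x[x, m] * (p_y_given_mx[m] * p_x[:, None]).sum(axis=0)
        for m in range(p_m_given_x.shape[1])
    )

print(front_door(0))  # interventional distribution P(Y | do(X=0))
```

The inner sum averages over the prompt values x' to block the backdoor path from the unobserved confounder into the mediator, which is what lets the causal effect be identified without touching model internals.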
1 code implementation • 5 Mar 2024 • Congzhi Zhang, Linhai Zhang, Deyu Zhou
Conventional multi-hop fact verification models are prone to relying on spurious correlations introduced by annotation artifacts, leading to a marked performance decline on unbiased datasets.