Search Results for author: Hanqi Yan

Found 11 papers, 6 papers with code

Addressing Order Sensitivity of In-Context Demonstration Examples in Causal Language Models

no code implementations • 23 Feb 2024 • Yanzheng Xiang, Hanqi Yan, Lin Gui, Yulan He

This approach utilizes contrastive learning to align representations of in-context examples across different positions and introduces a consistency loss to ensure similar representations for inputs with different permutations.

Attribute • Contrastive Learning • +1
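
A rough sketch of the idea described in this entry: an InfoNCE-style alignment term pulls together representations of the same in-context example placed at different positions, and a consistency term penalises prediction differences between two demonstration permutations. The encoder outputs below are stand-in random tensors, and the loss layout is an illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def alignment_loss(z_pos_a, z_pos_b, temperature=0.1):
    """InfoNCE-style loss: row i of z_pos_a should match row i of z_pos_b."""
    z_a = F.normalize(z_pos_a, dim=-1)
    z_b = F.normalize(z_pos_b, dim=-1)
    logits = z_a @ z_b.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(z_a.size(0))         # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

def consistency_loss(logits_perm_a, logits_perm_b):
    """Symmetric KL between output distributions under two demonstration orders."""
    log_p = F.log_softmax(logits_perm_a, dim=-1)
    log_q = F.log_softmax(logits_perm_b, dim=-1)
    return 0.5 * (F.kl_div(log_p, log_q, reduction="batchmean", log_target=True)
                  + F.kl_div(log_q, log_p, reduction="batchmean", log_target=True))

# Dummy tensors standing in for encoder representations and model logits.
z_a, z_b = torch.randn(8, 256), torch.randn(8, 256)
out_a, out_b = torch.randn(8, 32000), torch.randn(8, 32000)
total = alignment_loss(z_a, z_b) + consistency_loss(out_a, out_b)
print(total.item())
```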

Counterfactual Generation with Identifiability Guarantees

1 code implementation • NeurIPS 2023 • Hanqi Yan, Lingjing Kong, Lin Gui, Yuejie Chi, Eric Xing, Yulan He, Kun Zhang

In this work, we tackle the domain-varying dependence between the content and the style variables inherent in the counterfactual generation task.

counterfactual • Style Transfer • +1

Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning

no code implementations • 22 Feb 2024 • Hanqi Yan, Qinglin Zhu, Xinyu Wang, Lin Gui, Yulan He

While large language models (LLMs) have the capability to iteratively reflect on their own outputs, recent studies have observed that they struggle with knowledge-rich problems when they lack access to external resources.

The Mystery of In-Context Learning: A Comprehensive Survey on Interpretation and Analysis

no code implementations • 1 Nov 2023 • Yuxiang Zhou, Jiazheng Li, Yanzheng Xiang, Hanqi Yan, Lin Gui, Yulan He

Understanding the in-context learning (ICL) capability that enables large language models (LLMs) to excel at tasks given only demonstration examples is of utmost importance.

In-Context Learning

Explainable Recommender with Geometric Information Bottleneck

no code implementations • 9 May 2023 • Hanqi Yan, Lin Gui, Menghan Wang, Kun Zhang, Yulan He

Explainable recommender systems can explain their recommendation decisions, enhancing user trust in the systems.

Explanation Generation • Recommendation Systems

Distinguishability Calibration to In-Context Learning

1 code implementation • 13 Feb 2023 • Hongjing Li, Hanqi Yan, Yanran Li, Li Qian, Yulan He, Lin Gui

When using prompt-based learning for text classification, the goal is to use a pre-trained language model (PLM) to predict a missing token in a pre-defined template given an input text, which can be mapped to a class label.

In-Context Learning • Language Modelling • +3
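
For readers unfamiliar with the prompt-based setup this entry refers to, the sketch below shows the standard recipe: fill a template containing a mask slot, let a pre-trained masked language model score candidate tokens at that position, and map the candidates to class labels via a verbalizer. The model name, template, and verbalizer are illustrative assumptions, not the paper's configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Pre-defined template with a [MASK] slot; the verbalizer maps tokens to labels.
text = "The plot was gripping from start to finish."
template = f"{text} It was [MASK]."
verbalizer = {"great": "positive", "terrible": "negative"}

inputs = tokenizer(template, return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each verbalizer token at the mask position and pick the best label.
scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for word, label in verbalizer.items()}
print(max(scores, key=scores.get))
```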

Tracking Brand-Associated Polarity-Bearing Topics in User Reviews

1 code implementation • 3 Jan 2023 • Runcong Zhao, Lin Gui, Hanqi Yan, Yulan He

Monitoring online customer reviews is important for business organisations to measure customer satisfaction and better manage their reputations.

Meta-Learning

Addressing Token Uniformity in Transformers via Singular Value Transformation

1 code implementation • 24 Aug 2022 • Hanqi Yan, Lin Gui, Wenjie Li, Yulan He

In this paper, we propose to use the distribution of singular values of the outputs of each transformer layer to characterise the phenomenon of token uniformity, and empirically illustrate that a less skewed singular value distribution can alleviate the 'token uniformity' problem.

Semantic Textual Similarity
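
The diagnostic this entry describes can be approximated in a few lines: take a layer's token representations, compute their singular values, and check how concentrated the spectrum is. The random tensor below stands in for real hidden states, and the "top-1 share" statistic is just one illustrative way to summarise skewness, not the paper's exact transformation.

```python
import torch

# Stand-in for the [num_tokens, hidden_dim] output of one transformer layer.
hidden_states = torch.randn(128, 768)
hidden_states = hidden_states - hidden_states.mean(dim=0, keepdim=True)

singular_values = torch.linalg.svdvals(hidden_states)
spectrum = singular_values / singular_values.sum()   # normalised spectrum

# A simple skewness proxy: the share of the spectrum held by the top value.
# A value close to 1.0 means the token representations collapse onto a single direction.
print(f"top singular value share: {spectrum[0].item():.3f}")
```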

Hierarchical Interpretation of Neural Text Classification

1 code implementation • 20 Feb 2022 • Hanqi Yan, Lin Gui, Yulan He

Neural models developed in NLP, however, often compose word semantics in a hierarchical manner, and text classification requires hierarchical modelling to aggregate local information in order to deal with topic and label shifts more effectively.

text-classification • Text Classification

Position Bias Mitigation: A Knowledge-Aware Graph Model for Emotion Cause Extraction

1 code implementation • ACL 2021 • Hanqi Yan, Lin Gui, Gabriele Pergola, Yulan He

To investigate the degree of reliance of existing ECE models on clause relative positions, we propose a novel strategy to generate adversarial examples in which the relative position information is no longer the indicative feature of cause clauses.

Emotion Cause Extraction • Position
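
A loose sketch of the adversarial-example idea in this entry: relocate the cause clause so that its relative position to the emotion clause stops being a reliable cue, forcing a model to rely on content rather than position. The data format and the re-insertion rule below are illustrative assumptions, not the paper's generation strategy.

```python
import random

def break_position_cue(clauses, cause_idx, emotion_idx, seed=0):
    """Re-insert the cause clause at a slot that is not adjacent to the emotion clause."""
    rng = random.Random(seed)
    cause = clauses[cause_idx]
    rest = [c for i, c in enumerate(clauses) if i != cause_idx]
    emo_pos = rest.index(clauses[emotion_idx])
    # Candidate slots away from the emotion clause; fall back to the end if none exist.
    slots = [i for i in range(len(rest) + 1) if abs(i - emo_pos) > 1] or [len(rest)]
    rest.insert(rng.choice(slots), cause)
    return rest

clauses = ["He failed the exam,", "so he felt sad,", "and went home", "to rest."]
print(break_position_cue(clauses, cause_idx=0, emotion_idx=1))
```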
