Search Results for author: Xuanjing Huang

Found 336 papers, 164 papers with code

Towards Adversarially Robust Text Classifiers by Learning to Reweight Clean Examples

no code implementations Findings (ACL) 2022 Jianhan Xu, Cenyuan Zhang, Xiaoqing Zheng, Linyang Li, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Most of the existing defense methods improve the adversarial robustness by making the models adapt to the training set augmented with some adversarial examples.

Adversarial Robustness

Making Parameter-efficient Tuning More Efficient: A Unified Framework for Classification Tasks

1 code implementation COLING 2022 Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu

Specifically, we re-formulate both token and sentence classification tasks into a unified language modeling task, and map label spaces of different tasks into the same vocabulary space.

Language Modeling Language Modelling +3
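The reformulation described above can be sketched with a toy verbalizer: each task's labels map to words in one shared vocabulary, so both token- and sentence-level classification reduce to predicting a label word at a masked position. All names and mappings below are illustrative assumptions, not the paper's released code.

```python
# Illustrative sketch: map label spaces of different classification tasks
# into a shared vocabulary space, turning every task into a
# language-modeling ("fill in the label word") problem.

LABEL_WORDS = {
    "sentiment": {"positive": "great", "negative": "terrible"},
    "topic": {"sports": "sports", "politics": "politics"},
    "ner": {"PER": "person", "LOC": "location", "O": "none"},
}

def to_lm_example(task: str, text: str, label: str) -> tuple[str, str]:
    """Turn a (task, text, label) triple into a masked-LM style pair."""
    prompt = f"{text} It is about [MASK]."
    target = LABEL_WORDS[task][label]
    return prompt, target

prompt, target = to_lm_example("sentiment", "The movie was wonderful.", "positive")
```

Because every task shares the same vocabulary space, one language-modeling head can serve all of them; only the verbalizer table differs per task.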

A Progressive Framework for Role-Aware Rumor Resolution

1 code implementation COLING 2022 Lei Chen, Guanying Li, Zhongyu Wei, Yang Yang, Baohua Zhou, Qi Zhang, Xuanjing Huang

Existing works on rumor resolution have shown great potential in recognizing word appearance and user participation.

PlugAT: A Plug and Play Module to Defend against Textual Adversarial Attack

no code implementations COLING 2022 Rui Zheng, Rong Bao, Qin Liu, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu

To reduce the potential side effects of using defense modules, we further propose a novel forgetting restricted adversarial training, which filters out bad adversarial examples that impair the performance of original ones.

Adversarial Attack Domain Adaptation +2

A Structure-Aware Argument Encoder for Literature Discourse Analysis

1 code implementation COLING 2022 Yinzi Li, Wei Chen, Zhongyu Wei, Yujun Huang, Chujun Wang, Siyuan Wang, Qi Zhang, Xuanjing Huang, Libo Wu

Existing research for argument representation learning mainly treats tokens in the sentence equally and ignores the implied structure information of argumentative context.

Position Representation Learning +1

Multi-task Learning with Gradient Communication

no code implementations ICLR 2019 Pengfei Liu, Xuanjing Huang

In this paper, we describe a general framework to systematically analyze current neural models for multi-task learning. We find that existing models expect to disentangle features into different spaces, while features learned in practice are still entangled in the shared space, leaving potential hazards for other training or unseen tasks.

Inductive Bias Multi-Task Learning

CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation

1 code implementation ACL 2022 Zichu Fei, Qi Zhang, Tao Gui, Di Liang, Sirui Wang, Wei Wu, Xuanjing Huang

CQG employs a simple method to generate multi-hop questions that contain key entities in multi-hop reasoning chains, ensuring the complexity and quality of the questions.

Decoder Question Generation +1

LFKQG: A Controlled Generation Framework with Local Fine-tuning for Question Generation over Knowledge Bases

no code implementations COLING 2022 Zichu Fei, Xin Zhou, Tao Gui, Qi Zhang, Xuanjing Huang

Existing KBQG models still face two main challenges: (1) Most models often focus on the most relevant part of the answer entity, while neglecting the rest of the subgraph.

Natural Questions Question Generation +1

Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation

no code implementations 26 Apr 2025 Yi Lu, Wanxu Zhao, Xin Zhou, Chenxin An, Chenglong Wang, Shuo Li, Yuming Yang, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang

Instead of manipulating all dimensions equally, DPE detects the effective length for every dimension and finds the key dimensions for context extension.

8k Position

Improving RL Exploration for LLM Reasoning through Retrospective Replay

no code implementations 19 Apr 2025 Shihan Dou, Muling Wu, Jingwen Xu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

We observe that for complex problems, during the early stages of training, the model exhibits strong exploratory capabilities and can identify promising solution ideas.

Code Generation Mathematical Reasoning +1

Revisiting LLM Evaluation through Mechanism Interpretability: a New Metric and Model Utility Law

1 code implementation 10 Apr 2025 Yixin Cao, Jiahao Ying, Yaoning Wang, Xipeng Qiu, Xuanjing Huang, Yugang Jiang

In this paper, we analyze the core limitations of traditional evaluation pipelines and propose a novel metric, the Model Utilization Index (MUI), which introduces mechanism interpretability techniques to complement traditional performance metrics.

Fairness

FamilyTool: A Multi-hop Personalized Tool Use Benchmark

1 code implementation 9 Apr 2025 Yuxin Wang, Yiran Guo, Yining Zheng, Zhangyue Yin, Shuo Chen, Jie Yang, Jiajun Chen, Xuanjing Huang, Xipeng Qiu

To bridge this gap, we introduce FamilyTool, a novel benchmark grounded in a family-based knowledge graph (KG) that simulates personalized, multi-hop tool use scenarios.

Mitigating Object Hallucinations in MLLMs via Multi-Frequency Perturbations

1 code implementation 19 Mar 2025 Shuo Li, Jiajun Sun, Guodong Zheng, Xiaoran Fan, Yujiong Shen, Yi Lu, Zhiheng Xi, Yuming Yang, Wenming Tan, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang

Recently, multimodal large language models (MLLMs) have demonstrated remarkable performance in visual-language tasks.

Explainable Synthetic Image Detection through Diffusion Timestep Ensembling

no code implementations 8 Mar 2025 Yixin Wu, Feiran Zhang, Tianyuan Shi, Ruicheng Yin, Zhenghua Wang, Zhenliang Gan, Xiaohua Wang, Changze Lv, Xiaoqing Zheng, Xuanjing Huang

Based on this observation, we propose a novel detection method that amplifies these differences by progressively adding noise to the original images across multiple timesteps, and train an ensemble of classifiers on these noised images.

Denoising Explanation Generation +1
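The detection recipe in this snippet, adding noise at several timesteps and ensembling classifiers trained on the noised images, can be sketched roughly as follows. The linear blending schedule and placeholder classifiers are simplifying assumptions, not the paper's actual diffusion forward process or trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image: np.ndarray, t: int, T: int = 10) -> np.ndarray:
    """Progressively blend Gaussian noise into the image; larger t = more noise.
    (A simplified stand-in for a diffusion forward process.)"""
    alpha = 1.0 - t / T
    noise = rng.standard_normal(image.shape)
    return alpha * image + (1.0 - alpha) * noise

def ensemble_predict(classifiers, image, timesteps=(2, 5, 8)):
    """Score one noised copy per timestep with its classifier, then average."""
    scores = [clf(add_noise(image, t)) for clf, t in zip(classifiers, timesteps)]
    return float(np.mean(scores))

# Toy "classifiers": score = mean pixel intensity (placeholder for trained models).
clfs = [lambda x: x.mean()] * 3
score = ensemble_predict(clfs, np.ones((8, 8)))
```

The intuition the abstract describes is that real and synthetic images respond differently to this progressive noising, so per-timestep classifiers capture complementary evidence that the ensemble aggregates.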

Better Process Supervision with Bi-directional Rewarding Signals

no code implementations 6 Mar 2025 Wenxiang Chen, wei he, Zhiheng Xi, Honglin Guo, Boyang Hong, Jiazheng Zhang, Rui Zheng, Nijun Li, Tao Gui, Yun Li, Qi Zhang, Xuanjing Huang

Process supervision, i.e., evaluating each step, is critical for complex large language model (LLM) reasoning and test-time searching with increased inference compute.

Language Modeling Language Modelling +3

Layer-Specific Scaling of Positional Encodings for Superior Long-Context Modeling

no code implementations 6 Mar 2025 Zhenghua Wang, Yiran Ding, Changze Lv, Zhibo Xu, Tianlong Li, Tianyuan Shi, Xiaoqing Zheng, Xuanjing Huang

Although large language models (LLMs) have achieved significant progress in handling long-context inputs, they still suffer from the "lost-in-the-middle" problem, where crucial information in the middle of the context is often underrepresented or lost.

Prior-Fitted Networks Scale to Larger Datasets When Treated as Weak Learners

1 code implementation 3 Mar 2025 Yuxin Wang, Botian Jiang, Yiran Guo, Quan Gan, David Wipf, Xuanjing Huang, Xipeng Qiu

Through this work, we address the challenges of efficiently handling large datasets via PFN-based models, paving the way for faster and more effective tabular data classification training and prediction process.

AutoML tabular-classification

Thus Spake Long-Context Large Language Model

no code implementations 24 Feb 2025 Xiaoran Liu, Ruixiao Li, Mianqiu Huang, Zhigeng Liu, Yuerong Song, Qipeng Guo, Siyang He, Qiqi Wang, Linlin Li, Qun Liu, Yaqian Zhou, Xuanjing Huang, Xipeng Qiu

Moreover, the research on long-context LLMs has expanded from length extrapolation to a comprehensive focus on architecture, infrastructure, training, and evaluation technologies.

Language Modeling Language Modelling +4

AutoLogi: Automated Generation of Logic Puzzles for Evaluating Reasoning Abilities of Large Language Models

1 code implementation 24 Feb 2025 Qin Zhu, Fei Huang, Runyu Peng, Keming Lu, Bowen Yu, Qinyuan Cheng, Xipeng Qiu, Xuanjing Huang, Junyang Lin

While logical reasoning evaluation of Large Language Models (LLMs) has attracted significant attention, existing benchmarks predominantly rely on multiple-choice formats that are vulnerable to random guessing, leading to overestimated performance and substantial performance fluctuations.

Logical Reasoning Multiple-choice

Measuring Data Diversity for Instruction Tuning: A Systematic Analysis and A Reliable Metric

1 code implementation 24 Feb 2025 Yuming Yang, Yang Nan, Junjie Ye, Shihan Dou, Xiao Wang, Shuo Li, Huijie Lv, Tao Gui, Qi Zhang, Xuanjing Huang

To address this, we systematically analyze 11 existing diversity measurement methods by assessing their correlation with model performance through extensive fine-tuning experiments.

Diversity

How Jailbreak Defenses Work and Ensemble? A Mechanistic Investigation

no code implementations 20 Feb 2025 Zhuohang Long, Siyuan Wang, Shujun Liu, Yuhang Lai, Xuanjing Huang, Zhongyu Wei

Jailbreak attacks, where harmful prompts bypass generative models' built-in safety, raise serious concerns about model vulnerability.

Binary Classification

Self-Consistency of the Internal Reward Models Improves Self-Rewarding Language Models

no code implementations 13 Feb 2025 Xin Zhou, Yiwen Guo, Ruotian Ma, Tao Gui, Qi Zhang, Xuanjing Huang

Recent advancements in Self-Rewarding Language Models suggest that an LLM can use its internal reward models (such as LLM-as-a-Judge) to generate preference data, improving alignment performance without costly human annotation.

Multi-Agent Simulator Drives Language Models for Legal Intensive Interaction

no code implementations 8 Feb 2025 Shengbin Yue, Ting Huang, Zheng Jia, Siyuan Wang, Shujun Liu, Yun Song, Xuanjing Huang, Zhongyu Wei

Large Language Models (LLMs) have significantly advanced legal intelligence, but the scarcity of scenario data impedes the progress toward interactive legal scenarios.

Toward Relative Positional Encoding in Spiking Transformers

no code implementations 28 Jan 2025 Changze Lv, Yansen Wang, Dongqi Han, Yifei Shen, Xiaoqing Zheng, Xuanjing Huang, Dongsheng Li

We provide comprehensive proof of the method's effectiveness in partially capturing relative positional information for sequential tasks.

Image Classification text-classification +2

Error Classification of Large Language Models on Math Word Problems: A Dynamically Adaptive Framework

no code implementations 26 Jan 2025 Yuhong Sun, Zhangyue Yin, Xuanjing Huang, Xipeng Qiu, Hui Zhao

To reduce human bias and enable fine-grained analysis of error patterns, we propose a novel framework for automated dynamic error classification in mathematical reasoning.

Math Mathematical Reasoning

Human-like conceptual representations emerge from language prediction

no code implementations 21 Jan 2025 Ningyu Xu, Qi Zhang, Chao Du, Qiang Luo, Xipeng Qiu, Xuanjing Huang, Menghan Zhang

These findings establish that structured, human-like conceptual representations can naturally emerge from language prediction without real-world grounding.

Prediction Reverse Dictionary

Dendritic Localized Learning: Toward Biologically Plausible Algorithm

no code implementations 17 Jan 2025 Changze Lv, Jingwen Xu, Yiyang Lu, Xiaohua Wang, Zhenghua Wang, Zhibo Xu, Di Yu, Xin Du, Xiaoqing Zheng, Xuanjing Huang

Backpropagation is the foundational algorithm for training neural networks and a key driver of deep learning's success.

ToolHop: A Query-Driven Benchmark for Evaluating Large Language Models in Multi-Hop Tool Use

no code implementations 5 Jan 2025 Junjie Ye, Zhengyin Du, Xuesong Yao, Weijian Lin, Yufei Xu, Zehui Chen, Zaiyuan Wang, Sining Zhu, Zhiheng Xi, Siyu Yuan, Tao Gui, Qi Zhang, Xuanjing Huang, Jiecao Chen

Effective evaluation of multi-hop tool use is critical for analyzing the understanding, reasoning, and function-calling capabilities of large language models (LLMs).

Code Generation

TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use

1 code implementation 20 Dec 2024 Junjie Ye, Yilong Wu, Sixian Li, Yuming Yang, Tao Gui, Qi Zhang, Xuanjing Huang, Peng Wang, Zhongchao shi, Jianping Fan, Zhengyin Du

Large language models (LLMs) achieve remarkable advancements by leveraging tools to interact with external environments, a critical step toward generalized AI.

Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective

1 code implementation 18 Dec 2024 Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Bo wang, ShiMin Li, Yunhua Zhou, Qipeng Guo, Xuanjing Huang, Xipeng Qiu

OpenAI o1 represents a significant milestone in Artificial Intelligence, achieving expert-level performance on many challenging tasks that require strong reasoning ability. OpenAI has claimed that the main technique behind o1 is reinforcement learning.

Knowledge Distillation

EvoWiki: Evaluating LLMs on Evolving Knowledge

no code implementations 18 Dec 2024 Wei Tang, Yixin Cao, Yang Deng, Jiahao Ying, Bo wang, Yizhe Yang, Yuyue Zhao, Qi Zhang, Xuanjing Huang, Yugang Jiang, Yong Liao

Knowledge utilization is a critical aspect of LLMs, and understanding how they adapt to evolving knowledge is essential for their effective deployment.

RAG

COSEE: Consistency-Oriented Signal-Based Early Exiting via Calibrated Sample Weighting Mechanism

1 code implementation 17 Dec 2024 Jianing He, Qi Zhang, Hongyun Zhang, Xuanjing Huang, Usman Naseem, Duoqian Miao

Early exiting is an effective paradigm for improving the inference efficiency of pre-trained language models (PLMs) by dynamically adjusting the number of executed layers for each sample.
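The early-exiting paradigm that COSEE builds on can be sketched as a confidence-thresholded loop over layers. This is a generic illustration of signal-based early exiting, not COSEE's calibrated sample-weighting mechanism itself, and the toy layers are made-up stand-ins for a PLM's intermediate classifiers.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_predict(layer_fns, x, threshold=0.9):
    """Run layers one by one; exit as soon as an intermediate classifier
    is confident enough. `layer_fns` maps a hidden state to (next_state, logits)."""
    for depth, layer in enumerate(layer_fns, start=1):
        x, logits = layer(x)
        probs = softmax(logits)
        if max(probs) >= threshold:
            return probs.index(max(probs)), depth  # prediction, layers used
    return probs.index(max(probs)), depth  # fell through: use the final layer

# Toy layers whose confidence in class 0 grows with depth.
layers = [lambda x, d=d: (x, [d * 2.0, 0.0]) for d in range(1, 5)]
pred, depth_used = early_exit_predict(layers, 0.0)
```

Here the loop exits at the second layer, since the toy classifier's confidence first crosses the 0.9 threshold there; easy samples thus spend less compute than hard ones.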

Activating Distributed Visual Region within LLMs for Efficient and Effective Vision-Language Training and Inference

no code implementations 17 Dec 2024 Siyuan Wang, Dianyi Wang, Chengxing Zhou, Zejun Li, Zhihao Fan, Xuanjing Huang, Zhongyu Wei

Large Vision-Language Models (LVLMs) typically learn visual capacity through visual instruction tuning, involving updates to both a projector and their LLM backbones.

From Individual to Society: A Survey on Social Simulation Driven by Large Language Model-based Agents

1 code implementation 4 Dec 2024 Xinyi Mou, Xuanwen Ding, Qi He, Liang Wang, Jingcong Liang, Xinnong Zhang, Libo Sun, Jiayu Lin, Jie zhou, Xuanjing Huang, Zhongyu Wei

We categorize the simulations into three types: (1) Individual Simulation, which mimics specific individuals or demographic groups; (2) Scenario Simulation, where multiple agents collaborate to achieve goals within specific contexts; and (3) Society Simulation, which models interactions within agent societies to reflect the complexity and variety of real-world dynamics.

Language Modeling Language Modelling +1

VLSBench: Unveiling Visual Leakage in Multimodal Safety

no code implementations 29 Nov 2024 Xuhao Hu, Dongrui Liu, Hao Li, Xuanjing Huang, Jing Shao

To explain such a counter-intuitive phenomenon, we discover a visual safety information leakage (VSIL) problem in existing multimodal safety benchmarks, i.e., the potentially risky and sensitive content in the image has been revealed in the textual query.

ObjectRelator: Enabling Cross-View Object Relation Understanding in Ego-Centric and Exo-Centric Videos

no code implementations 28 Nov 2024 Yuqian Fu, Runze Wang, Yanwei Fu, Danda Pani Paudel, Xuanjing Huang, Luc van Gool

In this paper, we focus on the Ego-Exo Object Correspondence task, an emerging challenge in the field of computer vision that aims to map objects across ego-centric and exo-centric views.

Object Object Localization +1

PIORS: Personalized Intelligent Outpatient Reception based on Large Language Model with Multi-Agents Medical Scenario Simulation

1 code implementation 21 Nov 2024 Zhijie Bao, Qingyun Liu, Ying Guo, Zhengqiang Ye, Jun Shen, Shirong Xie, Jiajie Peng, Xuanjing Huang, Zhongyu Wei

This system integrates an LLM-based reception nurse and a collaboration between LLM and hospital information system (HIS) into real outpatient reception setting, aiming to deliver personalized, high-quality, and efficient reception services.

Language Modeling Language Modelling +1

LongSafetyBench: Long-Context LLMs Struggle with Safety Issues

no code implementations 11 Nov 2024 Mianqiu Huang, Xiaoran Liu, Shaojun Zhou, Mozhi Zhang, Chenkun Tan, Pengyu Wang, Qipeng Guo, Zhe Xu, Linyang Li, Zhikai Lei, Linlin Li, Qun Liu, Yaqian Zhou, Xipeng Qiu, Xuanjing Huang

With the development of large language models (LLMs), the sequence length of these models continues to increase, drawing significant attention to long-context language models.

Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling

1 code implementation 1 Nov 2024 Yiwen Ding, Zhiheng Xi, wei he, Zhuoyuan Li, Yitao Zhai, Xiaowei Shi, Xunliang Cai, Tao Gui, Qi Zhang, Xuanjing Huang

Self-improvement methods enable large language models (LLMs) to generate solutions themselves and iteratively train on filtered, high-quality rationales.

Multi-Programming Language Sandbox for LLMs

1 code implementation 30 Oct 2024 Shihan Dou, Jiazheng Zhang, Jianxiang Zang, Yunbo Tao, Weikang Zhou, Haoxiang Jia, Shichun Liu, Yuming Yang, Zhiheng Xi, Shenxi Wu, Shaoqing Zhang, Muling Wu, Changze Lv, Limao Xiong, WenYu Zhan, Lin Zhang, Rongxiang Weng, Jingang Wang, Xunliang Cai, Yueming Wu, Ming Wen, Rui Zheng, Tao Ji, Yixin Cao, Tao Gui, Xipeng Qiu, Qi Zhang, Xuanjing Huang

We introduce MPLSandbox, an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler and analysis tools for Large Language Models (LLMs).

Distill Visual Chart Reasoning Ability from LLMs to MLLMs

2 code implementations 24 Oct 2024 wei he, Zhiheng Xi, Wanxu Zhao, Xiaoran Fan, Yiwen Ding, Zifei Shan, Tao Gui, Qi Zhang, Xuanjing Huang

Specifically, we employ text-based synthesizing techniques to construct chart-plotting code and produce ReachQA, a dataset containing 3k reasoning-intensive charts and 20k Q&A pairs to enhance both recognition and reasoning abilities.

Multimodal Reasoning Visual Reasoning

Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs

no code implementations 20 Oct 2024 Xin Zhou, Ping Nie, Yiwen Guo, Haojie Wei, Zhanqiu Zhang, Pasquale Minervini, Ruotian Ma, Tao Gui, Qi Zhang, Xuanjing Huang

In this paper, we aim to investigate these internal mechanisms within the popular Mixture-of-Expert (MoE)-based LLMs and demonstrate how to improve RAG by examining expert activations in these LLMs.

RAG Retrieval

Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs

no code implementations 15 Oct 2024 Shuo Li, Tao Ji, Xiaoran Fan, Linsheng Lu, Leyi Yang, Yuming Yang, Zhiheng Xi, Rui Zheng, Yuran Wang, Xiaohui Zhao, Tao Gui, Qi Zhang, Xuanjing Huang

Our findings indicate that the ability to prevent sycophancy is predominantly observed in higher layers of the model.

Hallucination

RMB: Comprehensively Benchmarking Reward Models in LLM Alignment

1 code implementation 13 Oct 2024 Enyu Zhou, Guodong Zheng, Binghai Wang, Zhiheng Xi, Shihan Dou, Rong Bao, Wei Shen, Limao Xiong, Jessica Fan, Yurong Mou, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

However, the current evaluation of RMs may not directly correspond to their alignment performance due to the limited distribution of evaluation data and evaluation methods that are not closely related to alignment objectives.

Benchmarking

Tell Me What You Don't Know: Enhancing Refusal Capabilities of Role-Playing Agents via Representation Space Analysis and Editing

no code implementations 25 Sep 2024 Wenhao Liu, Siyu An, Junru Lu, Muling Wu, Tianlong Li, Xiaohua Wang, Xiaoqing Zheng, Di Yin, Xing Sun, Xuanjing Huang

To investigate RPAs' performance when faced with different types of conflicting requests, we develop an evaluation benchmark that includes contextual knowledge conflicting requests, parametric knowledge conflicting requests, and non-conflicting requests to assess RPAs' ability to identify conflicts and refuse to answer appropriately without over-refusing.

60 Data Points are Sufficient to Fine-Tune LLMs for Question-Answering

no code implementations 24 Sep 2024 Junjie Ye, Yuming Yang, Qi Zhang, Tao Gui, Xuanjing Huang, Peng Wang, Zhongchao shi, Jianping Fan

Large language models (LLMs) encode extensive world knowledge through pre-training on massive datasets, which can then be fine-tuned for the question-answering (QA) task.

Question Answering World Knowledge

DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels

no code implementations 4 Sep 2024 Zhe Xu, Jiasheng Ye, Xiangyang Liu, Tianxiang Sun, Xiaoran Liu, Qipeng Guo, Linlin Li, Qun Liu, Xuanjing Huang, Xipeng Qiu

DetectiveQA focuses on evaluating the long-context reasoning ability of LLMs, which not only requires a full understanding of the context but also requires extracting important evidence from it and reasoning over that evidence to answer the given questions.

Inverse-Q*: Token Level Reinforcement Learning for Aligning Large Language Models Without Preference Data

no code implementations 27 Aug 2024 Han Xia, Songyang Gao, Qiming Ge, Zhiheng Xi, Qi Zhang, Xuanjing Huang

Reinforcement Learning from Human Feedback (RLHF) has proven effective in aligning large language models with human intentions, yet it often relies on complex methodologies like Proximal Policy Optimization (PPO) that require extensive hyper-parameter tuning and present challenges in sample efficiency and stability.

reinforcement-learning Reinforcement Learning

Identity-Driven Hierarchical Role-Playing Agents

no code implementations 28 Jul 2024 Libo Sun, Siyuan Wang, Xuanjing Huang, Zhongyu Wei

Utilizing large language models (LLMs) to achieve role-playing has gained great attention recently.

Case2Code: Learning Inductive Reasoning with Synthetic Data

1 code implementation 17 Jul 2024 Yunfan Shao, Linyang Li, Yichuan Ma, Peiji Li, Demin Song, Qinyuan Cheng, ShiMin Li, Xiaonan Li, Pengyu Wang, Qipeng Guo, Hang Yan, Xipeng Qiu, Xuanjing Huang, Dahua Lin

In this paper, we hope to focus on evaluating and teaching LLMs to conduct inductive reasoning, that is, LLMs are supposed to infer underlying rules by observing examples or sequential transformations.

Synergistic Multi-Agent Framework with Trajectory Learning for Knowledge-Intensive Tasks

1 code implementation 13 Jul 2024 Shengbin Yue, Siyuan Wang, Wei Chen, Xuanjing Huang, Zhongyu Wei

Recent advancements in Large Language Models (LLMs) have led to significant breakthroughs in various natural language processing tasks.

Hallucination Navigate

HAF-RM: A Hybrid Alignment Framework for Reward Model Training

no code implementations 4 Jul 2024 Shujun Liu, Xiaoyu Shen, Yuhang Lai, Siyuan Wang, Shengbin Yue, Zengfeng Huang, Xuanjing Huang, Zhongyu Wei

By decoupling the reward modeling procedure and incorporating hybrid supervision, our HaF-RM framework offers a principled and effective approach to enhancing the performance and alignment of reward models, a critical component in the responsible development of powerful language models.

Searching for Best Practices in Retrieval-Augmented Generation

1 code implementation 1 Jul 2024 Xiaohua Wang, Zhenghua Wang, Xuan Gao, Feiran Zhang, Yixin Wu, Zhibo Xu, Tianyuan Shi, Zhengyuan Wang, Shizheng Li, Qi Qian, Ruicheng Yin, Changze Lv, Xiaoqing Zheng, Xuanjing Huang

Retrieval-augmented generation (RAG) techniques have proven to be effective in integrating up-to-date information, mitigating hallucinations, and enhancing response quality, particularly in specialized domains.

Question Answering RAG +1

LLMs-as-Instructors: Learning from Errors Toward Automating Model Improvement

no code implementations 29 Jun 2024 Jiahao Ying, Mingbao Lin, Yixin Cao, Wei Tang, Bo wang, Qianru Sun, Xuanjing Huang, Shuicheng Yan

Inspired by the theory of "Learning from Errors", this framework employs an instructor LLM to meticulously analyze the specific errors within a target model, facilitating targeted and efficient training cycles.

Contrastive Learning Mathematical Reasoning

SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance

1 code implementation 26 Jun 2024 Caishuang Huang, Wanxu Zhao, Rui Zheng, Huijie Lv, WenYu Zhan, Shihan Dou, Sixian Li, Xiao Wang, Enyu Zhou, Junjie Ye, Yuming Yang, Tao Gui, Qi Zhang, Xuanjing Huang

As the development of large language models (LLMs) rapidly advances, securing these models effectively without compromising their utility has become a pivotal area of research.

Safety Alignment

Scaling Laws for Fact Memorization of Large Language Models

1 code implementation 22 Jun 2024 Xingyu Lu, Xiaonan Li, Qinyuan Cheng, Kai Ding, Xuanjing Huang, Xipeng Qiu

We find that LLMs' fact-knowledge capacity has a linear relationship with model size and a negative-exponential relationship with the number of training epochs.

Memorization

Cross-Modality Safety Alignment

1 code implementation 21 Jun 2024 Siyin Wang, Xingsong Ye, Qinyuan Cheng, Junwen Duan, ShiMin Li, Jinlan Fu, Xipeng Qiu, Xuanjing Huang

As Artificial General Intelligence (AGI) becomes increasingly integrated into various facets of human life, ensuring the safety and ethical alignment of such systems is paramount.

Safety Alignment

Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation

1 code implementation 20 Jun 2024 Qin Zhu, Qingyuan Cheng, Runyu Peng, Xiaonan Li, Tengxiao Liu, Ru Peng, Xipeng Qiu, Xuanjing Huang

On MMLU, using Inference-time Decontamination can lead to a decrease in the results of Phi3 and Mistral by 6.7% and 3.6%, respectively.

GSM8K Language Model Evaluation +4

Promoting Data and Model Privacy in Federated Learning through Quantized LoRA

no code implementations 16 Jun 2024 Jianhao Zhu, Changze Lv, Xiaohua Wang, Muling Wu, Wenhao Liu, Tianlong Li, Zixuan Ling, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang

Conventional federated learning primarily aims to secure the privacy of data distributed across multiple edge devices, with the global model dispatched to edge devices for parameter updates during the learning process.

Federated Learning parameter-efficient fine-tuning +1

Toward Optimal LLM Alignments Using Two-Player Games

1 code implementation 16 Jun 2024 Rui Zheng, Hongyi Guo, Zhihan Liu, Xiaoying Zhang, Yuanshun Yao, Xiaojun Xu, Zhaoran Wang, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang, Hang Li, Yang Liu

We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium for the game induced by the agents.

reinforcement-learning Reinforcement Learning

EmbSpatial-Bench: Benchmarking Spatial Understanding for Embodied Tasks with Large Vision-Language Models

1 code implementation 9 Jun 2024 Mengfei Du, Binhao Wu, Zejun Li, Xuanjing Huang, Zhongyu Wei

The recent rapid development of Large Vision-Language Models (LVLMs) has indicated their potential for embodied tasks. However, the critical skill of spatial understanding in embodied environments has not been thoroughly evaluated, leaving the gap between current LVLMs and qualified embodied intelligence unknown.

Benchmarking

Advancing Spiking Neural Networks for Sequential Modeling with Central Pattern Generators

1 code implementation 23 May 2024 Changze Lv, Dongqi Han, Yansen Wang, Xiaoqing Zheng, Xuanjing Huang, Dongsheng Li

Spiking neural networks (SNNs) represent a promising approach to developing artificial neural networks that are both energy-efficient and biologically plausible.

Image Classification text-classification +3

Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models

1 code implementation 21 May 2024 Zhangyue Yin, Qiushi Sun, Qipeng Guo, Zhiyuan Zeng, Xiaonan Li, Tianxiang Sun, Cheng Chang, Qinyuan Cheng, Ding Wang, Xiaofeng Mou, Xipeng Qiu, Xuanjing Huang

Recent advancements in Chain-of-Thought prompting have facilitated significant breakthroughs for Large Language Models (LLMs) in complex reasoning tasks.

Answer Selection

Infer Induced Sentiment of Comment Response to Video: A New Task, Dataset and Baseline

no code implementations 15 May 2024 Qi Jia, Baoyu Fan, Cong Xu, Lu Liu, Liang Jin, Guoguang Du, Zhenhua Guo, YaQian Zhao, Xuanjing Huang, RenGang Li

In light of this, we introduce a novel research task, Multi-modal Sentiment Analysis for Comment Response of Video Induced (MSA-CRVI), which aims to infer opinions and emotions from the comments responding to micro videos.

Sentiment Analysis

Exploring the Compositional Deficiency of Large Language Models in Mathematical Reasoning

1 code implementation 5 May 2024 Jun Zhao, Jingqi Tong, Yurong Mou, Ming Zhang, Qi Zhang, Xuanjing Huang

In this work, we investigate the compositionality of large language models (LLMs) in mathematical reasoning.

GSM8K Math +1

DELAN: Dual-Level Alignment for Vision-and-Language Navigation by Cross-Modal Contrastive Learning

1 code implementation 2 Apr 2024 Mengfei Du, Binhao Wu, Jiwen Zhang, Zhihao Fan, Zejun Li, Ruipu Luo, Xuanjing Huang, Zhongyu Wei

For task completion, the agent needs to align and integrate various navigation modalities, including instruction, observation and navigation history.

Contrastive Learning Decision Making +2

Self-Demos: Eliciting Out-of-Demonstration Generalizability in Large Language Models

1 code implementation 1 Apr 2024 wei he, Shichun Liu, Jun Zhao, Yiwen Ding, Yi Lu, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang

The generated demos strategically interpolate between existing demos and the given query, transforming the query from OOD to ID.

In-Context Learning Math

Subspace Defense: Discarding Adversarial Perturbations by Learning a Subspace for Clean Signals

no code implementations 24 Mar 2024 Rui Zheng, Yuhao Zhou, Zhiheng Xi, Tao Gui, Qi Zhang, Xuanjing Huang

We first empirically show that the features of clean signals and of adversarial perturbations are redundant and each span a low-dimensional linear subspace with minimal overlap, and that classical low-dimensional subspace projection can suppress perturbation features outside the subspace of clean signals.

Adversarial Defense
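The classical subspace projection mentioned in the abstract can be illustrated with a small NumPy sketch: estimate the clean-signal subspace from clean features via SVD, then project a perturbed feature back onto it. The data, dimensions, and rank below are made-up toy values, not the paper's setup.

```python
import numpy as np

def clean_subspace_projector(clean_feats: np.ndarray, k: int) -> np.ndarray:
    """Estimate a rank-k basis of the clean-feature subspace via SVD and
    return the projection matrix onto it (rows = samples, cols = feature dims)."""
    _, _, vt = np.linalg.svd(clean_feats, full_matrices=False)
    basis = vt[:k]              # top-k right singular vectors
    return basis.T @ basis      # P = V_k V_k^T

rng = np.random.default_rng(0)
# Clean signals live (mostly) in a 2-D subspace of a 10-D feature space.
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 10))
clean = latent @ mixing

P = clean_subspace_projector(clean, k=2)
perturbed = clean[0] + 0.5 * rng.standard_normal(10)   # add a perturbation
denoised = perturbed @ P                               # project back
```

Because the perturbation has little overlap with the clean subspace, projecting the perturbed feature onto that subspace discards most of the perturbation while preserving the clean component.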

Debatrix: Multi-dimensional Debate Judge with Iterative Chronological Analysis Based on LLM

2 code implementations 12 Mar 2024 Jingcong Liang, Rong Ye, Meng Han, Ruofei Lai, Xinyu Zhang, Xuanjing Huang, Zhongyu Wei

How can we construct an automated debate judge to evaluate an extensive, vibrant, multi-turn debate?

ALaRM: Align Language Models via Hierarchical Rewards Modeling

1 code implementation 11 Mar 2024 Yuhang Lai, Siyuan Wang, Shujun Liu, Xuanjing Huang, Zhongyu Wei

We introduce ALaRM, the first framework modeling hierarchical rewards in reinforcement learning from human feedback (RLHF), which is designed to enhance the alignment of large language models (LLMs) with human preferences.

Long Form Question Answering Machine Translation +1

Unveiling the Truth and Facilitating Change: Towards Agent-based Large-scale Social Movement Simulation

1 code implementation 26 Feb 2024 Xinyi Mou, Zhongyu Wei, Xuanjing Huang

Simulating the response of the public and forecasting the potential impact has become increasingly important.

User Simulation

CodeChameleon: Personalized Encryption Framework for Jailbreaking Large Language Models

1 code implementation 26 Feb 2024 Huijie Lv, Xiao Wang, Yuansen Zhang, Caishuang Huang, Shihan Dou, Junjie Ye, Tao Gui, Qi Zhang, Xuanjing Huang

Adversarial misuse, particularly through "jailbreaking" that circumvents a model's safety and ethical protocols, poses a significant challenge for Large Language Models (LLMs).

Code Completion Response Generation

RoCoIns: Enhancing Robustness of Large Language Models through Code-Style Instructions

no code implementations 26 Feb 2024 Yuansen Zhang, Xiao Wang, Zhiheng Xi, Han Xia, Tao Gui, Qi Zhang, Xuanjing Huang

In this paper, drawing inspiration from recent works that LLMs are sensitive to the design of the instructions, we utilize instructions in code style, which are more structural and less ambiguous, to replace typically natural language instructions.

Advancing Parameter Efficiency in Fine-tuning via Representation Editing

2 code implementations23 Feb 2024 Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang

Parameter Efficient Fine-Tuning (PEFT) techniques have drawn significant attention due to their ability to yield competitive results while updating only a small portion of the adjustable parameters.

parameter-efficient fine-tuning

LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition

no code implementations22 Feb 2024 Junjie Ye, Nuo Xu, Yikun Wang, Jie zhou, Qi Zhang, Tao Gui, Xuanjing Huang

To overcome the limitations of existing data augmentation methods that compromise semantic integrity and address the uncertainty inherent in LLM-generated text, we leverage the distinctive characteristics of the NER task by augmenting the original data at both the contextual and entity levels.

Data Augmentation few-shot-ner +5

Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis

no code implementations22 Feb 2024 Siyin Wang, Jie zhou, Qin Chen, Qi Zhang, Tao Gui, Xuanjing Huang

Domain adaptation has been widely adopted for cross-domain sentiment analysis to transfer knowledge from the source domain to the target domain.

Domain Generalization Sentiment Analysis

Unveiling Linguistic Regions in Large Language Models

1 code implementation22 Feb 2024 Zhihao Zhang, Jun Zhao, Qi Zhang, Tao Gui, Xuanjing Huang

Furthermore, this core region exhibits significant dimensional dependence, with perturbations to even a single parameter on specific dimensions leading to a loss of linguistic competence.

SoMeLVLM: A Large Vision Language Model for Social Media Processing

no code implementations20 Feb 2024 Xinnong Zhang, Haoyu Kuang, Xinyi Mou, Hanjia Lyu, Kun Wu, Siming Chen, Jiebo Luo, Xuanjing Huang, Zhongyu Wei

The powerful Large Vision Language Models make it possible to handle a variety of tasks simultaneously, but even with carefully designed prompting methods, the general domain models often fall short in aligning with the unique speaking style and context of social media tasks.

Language Modeling Language Modelling

Automating Dataset Updates Towards Reliable and Timely Evaluation of Large Language Models

no code implementations19 Feb 2024 Jiahao Ying, Yixin Cao, Yushi Bai, Qianru Sun, Bo wang, Wei Tang, Zhaojun Ding, Yizhe Yang, Xuanjing Huang, Shuicheng Yan

There are two updating strategies: 1) mimicking strategy to generate similar samples based on original data, preserving stylistic and contextual essence, and 2) extending strategy that further expands existing samples at varying cognitive levels by adapting Bloom's taxonomy of educational objectives.

MMLU

LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration

1 code implementation18 Feb 2024 Jun Zhao, Can Zu, Hao Xu, Yi Lu, wei he, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang

Large language models (LLMs) have demonstrated impressive performance in understanding language and executing complex reasoning tasks.

Multi-hop Question Answering Question Answering +1

Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution

no code implementations18 Feb 2024 Nuo Xu, Jun Zhao, Can Zu, Sixian Li, Lu Chen, Zhihao Zhang, Rui Zheng, Shihan Dou, Wenjuan Qin, Tao Gui, Qi Zhang, Xuanjing Huang

To address this issue, we propose a cost-effective preference learning strategy, optimizing reward models by distinguishing between human and machine translations.

Machine Translation Translation

Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation

1 code implementation18 Feb 2024 Siyuan Wang, Zhuohan Long, Zhihao Fan, Zhongyu Wei, Xuanjing Huang

Towards a more scalable, robust and fine-grained evaluation, we implement six reframing operations to construct evolving instances testing LLMs against diverse queries, data noise and probing their problem-solving sub-abilities.

Model Selection

LLM can Achieve Self-Regulation via Hyperparameter Aware Generation

no code implementations17 Feb 2024 Siyin Wang, ShiMin Li, Tianxiang Sun, Jinlan Fu, Qinyuan Cheng, Jiasheng Ye, Junjie Ye, Xipeng Qiu, Xuanjing Huang

HAG extends the current paradigm in the text generation process, highlighting the feasibility of endowing the LLMs with self-regulate decoding strategies.

Text Generation

ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages

1 code implementation16 Feb 2024 Junjie Ye, Sixian Li, Guanyu Li, Caishuang Huang, Songyang Gao, Yilong Wu, Qi Zhang, Tao Gui, Xuanjing Huang

Tool learning is widely acknowledged as a foundational approach for deploying large language models (LLMs) in real-world scenarios.

LongHeads: Multi-Head Attention is Secretly a Long Context Processor

1 code implementation16 Feb 2024 Yi Lu, Xin Zhou, wei he, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang

Instead of allowing each head to attend to the full sentence, which struggles with generalizing to longer sequences due to out-of-distribution (OOD) issues, we allow each head to process in-distribution length by selecting and attending to important context chunks.

Sentence
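The per-head chunk selection described above can be sketched with a simple relevance score. This is an illustrative sketch, not the paper's method: it assumes each context chunk has a summary vector and picks the k chunks most aligned with the head's current query.

```python
import numpy as np

def select_chunks(query: np.ndarray, chunk_reps: np.ndarray, k: int = 2) -> np.ndarray:
    """Pick the k context chunks most relevant to the current query, so each
    head attends within its trained (in-distribution) length.

    query: (d,) head query vector; chunk_reps: (n_chunks, d) chunk summaries.
    """
    scores = chunk_reps @ query
    # Indices of the top-k chunks by dot-product relevance, best first.
    return np.argsort(scores)[::-1][:k]
```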

Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning

1 code implementation8 Feb 2024 Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, wei he, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang

In this paper, we propose R^3: Learning Reasoning through Reverse Curriculum Reinforcement Learning (RL), a novel method that employs only outcome supervision to achieve the benefits of process supervision for large language models.

GSM8K reinforcement-learning +1

Are Large Language Models Good Prompt Optimizers?

1 code implementation3 Feb 2024 Ruotian Ma, Xiaolei Wang, Xin Zhou, Jian Li, Nan Du, Tao Gui, Qi Zhang, Xuanjing Huang

Despite the success, the underlying mechanism of this approach remains unexplored, and the true effectiveness of LLMs as Prompt Optimizers requires further validation.

valid

Efficient and Effective Time-Series Forecasting with Spiking Neural Networks

1 code implementation2 Feb 2024 Changze Lv, Yansen Wang, Dongqi Han, Xiaoqing Zheng, Xuanjing Huang, Dongsheng Li

In this paper, we propose a framework for SNNs in time-series forecasting tasks, leveraging the efficiency of spiking neurons in processing temporal information.

Model Selection Time Series +1

CauESC: A Causal Aware Model for Emotional Support Conversation

no code implementations31 Jan 2024 Wei Chen, Hengxu Lin, Qun Zhang, Xiaojin Zhang, Xiang Bai, Xuanjing Huang, Zhongyu Wei

Emotional Support Conversation aims at reducing the seeker's emotional distress through supportive responses.

F-Eval: Assessing Fundamental Abilities with Refined Evaluation Methods

2 code implementations26 Jan 2024 Yu Sun, Keyu Chen, Shujie Wang, Peiji Li, Qipeng Guo, Hang Yan, Xipeng Qiu, Xuanjing Huang, Dahua Lin

However, these evaluation benchmarks are limited to assessing the instruction-following capabilities, overlooking the fundamental abilities that emerge during the pre-training stage.

Instruction Following

RoTBench: A Multi-Level Benchmark for Evaluating the Robustness of Large Language Models in Tool Learning

1 code implementation16 Jan 2024 Junjie Ye, Yilong Wu, Songyang Gao, Caishuang Huang, Sixian Li, Guanyu Li, Xiaoran Fan, Qi Zhang, Tao Gui, Xuanjing Huang

To bridge this gap, we introduce RoTBench, a multi-level benchmark for evaluating the robustness of LLMs in tool learning.

Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective

no code implementations12 Jan 2024 Tianlong Li, Zhenghua Wang, Wenhao Liu, Muling Wu, Shihan Dou, Changze Lv, Xiaohua Wang, Xiaoqing Zheng, Xuanjing Huang

The recent surge in jailbreaking attacks has revealed significant vulnerabilities in Large Language Models (LLMs) when exposed to malicious inputs.

Prompt Engineering

LLaMA Beyond English: An Empirical Study on Language Capability Transfer

no code implementations2 Jan 2024 Jun Zhao, Zhihao Zhang, Luhui Gao, Qi Zhang, Tao Gui, Xuanjing Huang

In recent times, substantial advancements have been witnessed in large language models (LLMs), exemplified by ChatGPT, showcasing remarkable proficiency across a range of complex tasks.

Informativeness MMLU +1

ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios

1 code implementation1 Jan 2024 Junjie Ye, Guanyu Li, Songyang Gao, Caishuang Huang, Yilong Wu, Sixian Li, Xiaoran Fan, Shihan Dou, Tao Ji, Qi Zhang, Tao Gui, Xuanjing Huang

Existing evaluations of tool learning primarily focus on validating the alignment of selected tools for large language models (LLMs) with expected outcomes.

Aligning Large Language Models with Human Preferences through Representation Engineering

1 code implementation26 Dec 2023 Wenhao Liu, Xiaohua Wang, Muling Wu, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang

Aligning large language models (LLMs) with human preferences is crucial for enhancing their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness.

Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation

1 code implementation21 Dec 2023 Jiayu Lin, Rong Ye, Meng Han, Qi Zhang, Ruofei Lai, Xinyu Zhang, Zhao Cao, Xuanjing Huang, Zhongyu Wei

The results show the competitiveness of our proposed framework and evaluator in counter-argument generation tasks.

Sentence

A Soft Contrastive Learning-based Prompt Model for Few-shot Sentiment Analysis

no code implementations16 Dec 2023 Jingyi Zhou, Jie zhou, Jiabao Zhao, Siyin Wang, Haijun Shan, Gui Tao, Qi Zhang, Xuanjing Huang

Few-shot text classification has attracted great interest in both academia and industry due to the lack of labeled data in many fields.

Contrastive Learning Few-Shot Text Classification +4

K-ESConv: Knowledge Injection for Emotional Support Dialogue Systems via Prompt Learning

no code implementations16 Dec 2023 Wei Chen, Gang Zhao, Xiaojin Zhang, Xiang Bai, Xuanjing Huang, Zhongyu Wei

Automatic psychological counseling requires a mass of professional knowledge that can be found in online counseling forums.

Diversity Prompt Learning +1

LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin

1 code implementation15 Dec 2023 Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, ShiLiang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks.

Language Modelling Mixture-of-Experts +2

Making Harmful Behaviors Unlearnable for Large Language Models

no code implementations2 Nov 2023 Xin Zhou, Yi Lu, Ruotian Ma, Tao Gui, Qi Zhang, Xuanjing Huang

Specifically, we introduce "security vectors", a few new parameters that can be separated from the LLM, to ensure LLM's responses are consistent with the harmful behavior.

Tailoring Personality Traits in Large Language Models via Unsupervisedly-Built Personalized Lexicons

no code implementations25 Oct 2023 Tianlong Li, Shihan Dou, Changze Lv, Wenhao Liu, Jianhan Xu, Muling Wu, Zixuan Ling, Xiaoqing Zheng, Xuanjing Huang

Users can utilize UBPL to adjust the probability vectors of predicted words in the decoding phase of LLMs, thus influencing the personality expression of LLMs.
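The decoding-phase adjustment described above can be sketched as a lexicon-weighted shift on next-token logits. The lexicon format and weighting scheme below are hypothetical, used only to illustrate the idea of steering word probabilities with a personalized lexicon.

```python
import numpy as np

def personalize_logits(logits: np.ndarray, vocab: list, lexicon: dict,
                       strength: float = 2.0) -> np.ndarray:
    """Shift next-token logits toward words in a personalized lexicon.

    `lexicon` maps words to weights (assumed in [-1, 1]): positive weights
    encourage a trait-consistent word, negative weights suppress it.
    Returns a renormalized probability vector for sampling.
    """
    adjusted = logits.copy()
    for i, word in enumerate(vocab):
        if word in lexicon:
            adjusted[i] += strength * lexicon[word]
    # Numerically stable softmax over the adjusted logits.
    exp = np.exp(adjusted - adjusted.max())
    return exp / exp.sum()
```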

Unveiling A Core Linguistic Region in Large Language Models

no code implementations23 Oct 2023 Jun Zhao, Zhihao Zhang, Yide Ma, Qi Zhang, Tao Gui, Luhui Gao, Xuanjing Huang

We have discovered a core region in LLMs that corresponds to linguistic competence, accounting for approximately 1% of the total model parameters.

Orthogonal Subspace Learning for Language Model Continual Learning

1 code implementation22 Oct 2023 Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang

In this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and efficient approach for continual learning in language models, effectively mitigating catastrophic forgetting while learning new tasks.

Continual Learning Language Modeling +2
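The orthogonality constraint at the heart of O-LoRA can be sketched as a regularizer on the LoRA down-projection matrices. This is a simplified illustration under the assumption that each task's low-rank subspace is represented by the rows of its A matrix; the actual training objective may differ.

```python
import numpy as np

def orthogonality_penalty(A_new: np.ndarray, A_prev: list) -> float:
    """Penalty encouraging the new task's LoRA subspace (rows of A_new)
    to be orthogonal to the subspaces learned for earlier tasks.

    A matrices are (r, d) LoRA down-projections; the penalty is the sum of
    squared cross inner products, which is zero iff the subspaces are
    orthogonal.
    """
    penalty = 0.0
    for A_old in A_prev:
        overlap = A_new @ A_old.T  # (r_new, r_old) cross inner products
        penalty += float(np.sum(overlap ** 2))
    return penalty
```

Minimizing this penalty while fine-tuning on a new task keeps the new update directions out of the subspaces used by old tasks, which is how catastrophic forgetting is mitigated.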

Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization

1 code implementation19 Oct 2023 Ningyu Xu, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang

We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon.

Decoder In-Context Learning +2

RealBehavior: A Framework for Faithfully Characterizing Foundation Models' Human-like Behavior Mechanisms

no code implementations17 Oct 2023 Enyu Zhou, Rui Zheng, Zhiheng Xi, Songyang Gao, Xiaoran Fan, Zichu Fei, Jingting Ye, Tao Gui, Qi Zhang, Xuanjing Huang

Reports of human-like behaviors in foundation models are growing, with psychological theories providing enduring tools to investigate these behaviors.

Efficient Link Prediction via GNN Layers Induced by Negative Sampling

1 code implementation14 Oct 2023 Yuxin Wang, Xiannian Hu, Quan Gan, Xuanjing Huang, Xipeng Qiu, David Wipf

In contrast, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships, disambiguating isomorphic nodes to improve accuracy, but with increased model complexity.

Decoder Link Prediction +1

SpikeCLIP: A Contrastive Language-Image Pretrained Spiking Neural Network

no code implementations10 Oct 2023 Tianlong Li, Wenhao Liu, Changze Lv, Yufei Gu, Jianhan Xu, Cenyuan Zhang, Muling Wu, Xiaoqing Zheng, Xuanjing Huang

Spiking Neural Networks (SNNs) have emerged as a promising alternative to conventional Artificial Neural Networks (ANNs), demonstrating comparable performance in both visual and linguistic tasks while offering the advantage of improved energy efficiency.

Image Classification

Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback

no code implementations8 Oct 2023 Wei Shen, Rui Zheng, WenYu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang

Reinforcement learning from human feedback serves as a crucial bridge, aligning large language models with human and societal values.

Language Modeling Language Modelling

DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation

1 code implementation28 Aug 2023 Zhijie Bao, Wei Chen, Shengze Xiao, Kuang Ren, Jiaao Wu, Cheng Zhong, Jiajie Peng, Xuanjing Huang, Zhongyu Wei

We propose DISC-MedLLM, a comprehensive solution that leverages Large Language Models (LLMs) to provide accurate and truthful medical responses in end-to-end conversational healthcare services.

Knowledge Graphs

On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection

1 code implementation27 Jun 2023 Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang, Jin Ma, Ying Shan

Detecting adversarial samples that are carefully crafted to fool the model is a critical step to socially-secure applications.

text-classification Text Classification

From Hypergraph Energy Functions to Hypergraph Neural Networks

1 code implementation16 Jun 2023 Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, David Wipf

Hypergraphs are a powerful abstraction for representing higher-order interactions between entities of interest.

Bilevel Optimization Graph Neural Network +1

Open Set Relation Extraction via Unknown-Aware Training

1 code implementation8 Jun 2023 Jun Zhao, Xin Zhao, WenYu Zhan, Qi Zhang, Tao Gui, Zhongyu Wei, Yunwen Chen, Xiang Gao, Xuanjing Huang

Inspired by text adversarial attacks, we adaptively apply small but critical perturbations to original training instances, thus synthesizing negative instances that are more likely to be mistaken by the model as known relations.

Relation Relation Extraction

Do Large Language Models Know What They Don't Know?

1 code implementation29 May 2023 Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, Xuanjing Huang

Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.

In-Context Learning

KNSE: A Knowledge-aware Natural Language Inference Framework for Dialogue Symptom Status Recognition

no code implementations26 May 2023 Wei Chen, Shiqi Wei, Zhongyu Wei, Xuanjing Huang

Symptom diagnosis in medical conversations aims to correctly extract both symptom entities and their status from the doctor-patient dialogue.

Natural Language Inference Triplet

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

1 code implementation23 May 2023 Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang

To enhance the multi-step reasoning capabilities of large language models, researchers have extensively explored prompting methods, notably the Chain-of-Thought (CoT) method which explicitly elicits human-like rationales.

GSM8K

Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs

1 code implementation23 May 2023 Siyuan Wang, Zhongyu Wei, Meng Han, Zhihao Fan, Haijun Shan, Qi Zhang, Xuanjing Huang

The results demonstrate the effectiveness of our method on logical reasoning over KGs in both inductive and transductive settings.

Knowledge Graphs Logical Reasoning

A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition

1 code implementation21 May 2023 Limao Xiong, Jie zhou, Qunxi Zhu, Xiao Wang, Yuanbin Wu, Qi Zhang, Tao Gui, Xuanjing Huang, Jin Ma, Ying Shan

Particularly, we propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidences (learned by models) for crowd-annotated NER.

named-entity-recognition Named Entity Recognition +2

Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization

1 code implementation20 May 2023 Ting Wu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Models trained with empirical risk minimization (ERM) are revealed to easily rely on spurious correlations, resulting in poor generalization.

Diversity Out-of-Distribution Generalization +2

CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors

1 code implementation9 May 2023 Peng Li, Tianxiang Sun, Qiong Tang, Hang Yan, Yuanbin Wu, Xuanjing Huang, Xipeng Qiu

A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it.

Code Generation Few-Shot Learning +4

CausalAPM: Generalizable Literal Disentanglement for NLU Debiasing

no code implementations4 May 2023 Songyang Gao, Shihan Dou, Junjie Shan, Qi Zhang, Xuanjing Huang

Dataset bias, i.e., the over-reliance on dataset-specific literal heuristics, is getting increasing attention for its detrimental effect on the generalization ability of NLU models.

Causal Inference Disentanglement +2

A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models

no code implementations18 Mar 2023 Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang shen, Jie zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang

GPT series models, such as GPT-3, CodeX, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities.

Natural Language Understanding

How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks

no code implementations1 Mar 2023 Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie zhou, Tao Gui, Qi Zhang, Xuanjing Huang

The GPT-3.5 models have demonstrated impressive performance in various Natural Language Processing (NLP) tasks, showcasing their strong understanding and reasoning capabilities.

Natural Language Inference Natural Language Understanding +1

Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer?

1 code implementation21 Dec 2022 Ningyu Xu, Tao Gui, Ruotian Ma, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang

We demonstrate that the distance between the distributions of different languages is highly consistent with the syntactic difference in terms of linguistic formalisms.

Diversity Zero-Shot Cross-Lingual Transfer

Rethinking Label Smoothing on Multi-hop Question Answering

2 code implementations19 Dec 2022 Zhangyue Yin, Yuxin Wang, Xiannian Hu, Yiguang Wu, Hang Yan, Xinyu Zhang, Zhao Cao, Xuanjing Huang, Xipeng Qiu

Multi-Hop Question Answering (MHQA) is a significant area in question answering, requiring multiple reasoning components, including document retrieval, supporting sentence prediction, and answer span extraction.

Image Classification Machine Reading Comprehension +6

Efficient Adversarial Training with Robust Early-Bird Tickets

1 code implementation14 Nov 2022 Zhiheng Xi, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Adversarial training is one of the most powerful methods to improve the robustness of pre-trained language models (PLMs).

Robust Lottery Tickets for Pre-trained Language Models

2 code implementations ACL 2022 Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang

Recent works on Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.

Adversarial Robustness

Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts

1 code implementation20 Oct 2022 Xiangyang Liu, Tianxiang Sun, Xuanjing Huang, Xipeng Qiu

Through extensive experimental results across various tasks and PTMs, we show that LPT can achieve competitive performance to full model tuning and other PETuning methods under both full-data and few-shot scenarios while possessing faster training speed and lower memory cost.

Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding

2 code implementations14 Oct 2022 Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang

Dataset bias has attracted increasing attention recently for its detrimental effect on the generalization ability of fine-tuned models.

Sentence Sentence Embedding +2

COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization

1 code implementation COLING 2022 Chenxin An, Ming Zhong, Zhiyong Wu, Qin Zhu, Xuanjing Huang, Xipeng Qiu

Traditional training paradigms for extractive and abstractive summarization systems always only use token-level or sentence-level training objectives.

Abstractive Text Summarization Contrastive Learning +2

Locate Then Ask: Interpretable Stepwise Reasoning for Multi-hop Question Answering

1 code implementation COLING 2022 Siyuan Wang, Zhongyu Wei, Zhihao Fan, Qi Zhang, Xuanjing Huang

In this paper, we propose an interpretable stepwise reasoning framework to incorporate both single-hop supporting sentence identification and single-hop question generation at each intermediate step, and utilize the inference of the current hop for the next until reasoning out the final result.

Multi-hop Question Answering Question Answering +3

Causal Intervention Improves Implicit Sentiment Analysis

no code implementations COLING 2022 Siyin Wang, Jie zhou, Changzhi Sun, Junjie Ye, Tao Gui, Qi Zhang, Xuanjing Huang

In this work, we propose a causal intervention model for Implicit Sentiment Analysis using Instrumental Variable (ISAIV).

Sentence Sentiment Analysis

CoNT: Contrastive Neural Text Generation

2 code implementations29 May 2022 Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, Xuanjing Huang

We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation and commonsense generation.

Code Comment Generation Comment Generation +4

What Dense Graph Do You Need for Self-Attention?

1 code implementation27 May 2022 Yuxin Wang, Chu-Tak Lee, Qipeng Guo, Zhangyue Yin, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu

Transformers have made progress in miscellaneous tasks, but suffer from quadratic computational and memory complexities.

Miscellaneous

BBTv2: Towards a Gradient-Free Future with Large Language Models

1 code implementation23 May 2022 Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu

By contrast, gradient-free methods only require the forward computation of the PTM to tune the prompt, retaining the benefits of efficient tuning and deployment.

Few-Shot Learning Language Modelling

A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks and Datasets

1 code implementation19 Apr 2022 Wei Chen, Zhiwei Li, Hongyi Fang, Qianyuan Yao, Cheng Zhong, Jianye Hao, Qi Zhang, Xuanjing Huang, Jiajie Peng, Zhongyu Wei

In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance patient experience.

Dialogue Act Classification Dialogue Understanding +4

A Simple Hash-Based Early Exiting Approach For Language Understanding and Generation

1 code implementation Findings (ACL) 2022 Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, Xipeng Qiu

Previous works usually adopt heuristic metrics such as the entropy of internal outputs to measure instance difficulty, which suffers from generalization and threshold-tuning issues.

Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective

2 code implementations COLING 2022 Shihan Dou, Rui Zheng, Ting Wu, Songyang Gao, Junjie Shan, Qi Zhang, Yueming Wu, Xuanjing Huang

Most of the existing debiasing methods often identify and weaken these samples with biased features (i.e., superficial surface features that cause such spurious correlations).

Fact Verification Natural Language Inference +1

Black-Box Tuning for Language-Model-as-a-Service

2 code implementations10 Jan 2022 Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu

In such a scenario, which we call Language-Model-as-a-Service (LMaaS), the gradients of PTMs are usually unavailable.

In-Context Learning Language Modeling +1

Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models

no code implementations14 Oct 2021 Xin Zhou, Ruotian Ma, Tao Gui, Yiding Tan, Qi Zhang, Xuanjing Huang

Specifically, for each task, a label word set is first constructed by selecting a high-frequency word for each class respectively, and then, task-specific vectors are inserted into the inputs and optimized to manipulate the model predictions towards the corresponding label words.

Language Modelling Text Generation
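The label-word mechanism described above can be sketched as a simple verbalizer that maps a language model's predicted words back to tag classes. The mapping below is hypothetical, chosen only to illustrate the idea of high-frequency label words standing in for classes.

```python
# Hypothetical label-word mapping for a pluggable sequence-labeling setup:
# each class is verbalized as one high-frequency word.
label_words = {"PER": "person", "LOC": "place", "O": "the"}
word_to_label = {w: l for l, w in label_words.items()}

def decode_tags(predicted_words: list) -> list:
    """Map the language model's predicted label words back to tag classes,
    defaulting to the outside tag for unmapped words."""
    return [word_to_label.get(w, "O") for w in predicted_words]
```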

Towards Efficient NLP: A Standard Evaluation and A Strong Baseline

1 code implementation NAACL 2022 Xiangyang Liu, Tianxiang Sun, Junliang He, Jiawen Wu, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu

ELUE is dedicated to depict the Pareto Frontier for various language understanding tasks, such that it can tell whether and how much a method achieves Pareto improvement.

KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier

1 code implementation6 Oct 2021 Linyang Li, Demin Song, Ruotian Ma, Xipeng Qiu, Xuanjing Huang

Pre-trained models are widely used in fine-tuning downstream tasks with linear classifiers optimized by the cross-entropy loss, which might face robustness and stability problems.

Contrastive Learning text-classification +1

Template-free Prompt Tuning for Few-shot NER

1 code implementation NAACL 2022 Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, Xuanjing Huang

Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words.

Few-Shot Learning Few-shot NER +1

Paradigm Shift in Natural Language Processing

1 code implementation26 Sep 2021 Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang

In this paper, we review such phenomenon of paradigm shifts in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.

Chunking NER +3

Constructing Phrase-level Semantic Labels to Form Multi-Grained Supervision for Image-Text Retrieval

no code implementations12 Sep 2021 Zhihao Fan, Zhongyu Wei, Zejun Li, Siyuan Wang, Haijun Shan, Xuanjing Huang, Jianqing Fan

Existing research for image text retrieval mainly relies on sentence-level supervision to distinguish matched and mismatched sentences for a query image.

Form Image-text Retrieval +3

Learning to Teach with Student Feedback

no code implementations10 Sep 2021 Yitao Liu, Tianxiang Sun, Xipeng Qiu, Xuanjing Huang

This one-way interaction leads to the teacher's inability to perceive the characteristics of the student and its training progress.

Knowledge Distillation

Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

1 code implementation ACL 2021 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples.

Sentence

Math Word Problem Solving with Explicit Numerical Values

1 code implementation ACL 2021 Qinzhuo Wu, Qi Zhang, Zhongyu Wei, Xuanjing Huang

In recent years, math word problem solving has received considerable attention and achieved promising results, but previous methods rarely take numerical values into consideration.

Math Math Word Problem Solving

SENT: Sentence-level Distant Relation Extraction via Negative Training

1 code implementation ACL 2021 Ruotian Ma, Tao Gui, Linyang Li, Qi Zhang, Yaqian Zhou, Xuanjing Huang

In this work, we propose the use of negative training (NT), in which a model is trained using complementary labels regarding that "the instance does not belong to these complementary labels".

Relation Relation Extraction +1
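The complementary-label objective described above can be sketched as follows. This is a minimal single-instance illustration, assuming the model outputs a probability vector over relation labels: instead of maximizing the probability of a given label, NT minimizes the probability of a label the instance is known not to have.

```python
import numpy as np

def negative_training_loss(probs: np.ndarray, complementary_label: int) -> float:
    """Negative-training loss for one instance: push down the probability of
    a label the instance does NOT belong to, i.e. -log(1 - p_complementary)."""
    return -float(np.log(1.0 - probs[complementary_label]))
```

The loss is zero only when the model assigns zero probability to the complementary label, and grows without bound as that probability approaches one.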
