Search Results for author: Benfeng Xu

Found 11 papers, 9 papers with code

EmRel: Joint Representation of Entities and Embedded Relations for Multi-triple Extraction

1 code implementation · NAACL 2022 · Benfeng Xu, Quan Wang, Yajuan Lyu, Yabing Shi, Yong Zhu, Jie Gao, Zhendong Mao

Multi-triple extraction is a challenging task due to informative inter-triple correlations and, consequently, rich interactions across the constituent entities and relations. While existing works explore only entity representations, we propose to explicitly introduce relation representations, jointly model them with entities, and align the two to identify valid triples. We perform comprehensive experiments on document-level relation extraction and joint entity and relation extraction, along with ablations, to demonstrate the advantage of the proposed method.

Document-level Relation Extraction · Joint Entity and Relation Extraction +2
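
A minimal sketch of the entity-relation alignment idea described above, assuming entity and relation representations have already been encoded; the additive pair composition and scoring below are illustrative stand-ins, not the paper's exact alignment module:

```python
import torch

def triple_scores(heads: torch.Tensor, tails: torch.Tensor,
                  relations: torch.Tensor) -> torch.Tensor:
    """Align every candidate (head, tail) entity pair with every relation
    representation; a high alignment score marks a valid triple."""
    # heads, tails: (E, d) entity representations; relations: (R, d)
    pairs = torch.tanh(heads.unsqueeze(1) + tails.unsqueeze(0))  # (E, E, d)
    return torch.einsum("htd,rd->htr", pairs, relations)         # (E, E, R)
```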

Benchmarking Large Language Models on Controllable Generation under Diversified Instructions

1 code implementation · 1 Jan 2024 · Yihan Chen, Benfeng Xu, Quan Wang, Yi Liu, Zhendong Mao

While large language models (LLMs) have exhibited impressive instruction-following capabilities, it is still unclear whether and to what extent they can respond to explicit constraints that might be entailed in various instructions.

Benchmarking · Instruction Following +1

On the Calibration of Large Language Models and Alignment

no code implementations · 22 Nov 2023 · Chiwei Zhu, Benfeng Xu, Quan Wang, Yongdong Zhang, Zhendong Mao

As large language models attract increasing attention and find widespread application, challenges of reliability arise concurrently.

Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning

1 code implementation · 14 Nov 2023 · Shengguang Wu, Keming Lu, Benfeng Xu, Junyang Lin, Qi Su, Chang Zhou

The key to our data sampling technique lies in enhancing the diversity of the chosen subsets: the model selects new data points that are most distinct from any existing ones according to its current embedding space.

Instruction Following
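
A minimal sketch of the greedy, diversity-first selection the abstract describes, assuming each candidate instruction already has an embedding from the model's current embedding space; the farthest-point heuristic is my reading of "most distinct from any existing ones", not the authors' exact procedure:

```python
import numpy as np

def select_diverse(embeddings: np.ndarray, k: int) -> list[int]:
    """Greedily pick k indices, each time taking the point farthest from
    everything already chosen in the current embedding space."""
    chosen = [0]  # seed with an arbitrary first point
    dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())  # most distinct from any existing pick
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return chosen
```

In the self-evolving setting, the embeddings would be refreshed as the model trains, so later selection rounds measure distinctness under the updated model.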

ExpertPrompting: Instructing Large Language Models to be Distinguished Experts

2 code implementations · 24 May 2023 · Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, Zhendong Mao

The answering quality of an aligned large language model (LLM) can be drastically improved with properly crafted prompts.

In-Context Learning · Instruction Following +2
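
A minimal sketch of the expert-prompting idea, assuming the expert identity is itself written by the LLM; `call_llm` and the template wording are placeholders, not the paper's exact prompts:

```python
def expert_answer(question: str, call_llm) -> str:
    """Ask the LLM to describe an ideal expert for the question, then
    answer the question in that expert's voice."""
    identity = call_llm(
        "Write a short paragraph describing an expert who is ideally "
        f"qualified to answer the following question:\n{question}"
    )
    return call_llm(f"{identity}\n\nNow, answering as this expert:\n{question}")
```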

kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference

1 code implementation · 24 Mar 2023 · Benfeng Xu, Quan Wang, Zhendong Mao, Yajuan Lyu, Qiaoqiao She, Yongdong Zhang

In-Context Learning (ICL), which formulates target tasks as prompt completion conditioned on in-context demonstrations, has become the prevailing way of using LLMs.

In-Context Learning
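
A minimal sketch of calibration-free nearest-neighbor inference, assuming each labeled anchor example has been turned into an LM output distribution by querying the model; the KL distance and majority vote are illustrative choices, not necessarily the paper's exact formulation:

```python
import numpy as np

def knn_predict(test_dist: np.ndarray, anchor_dists: np.ndarray,
                anchor_labels: np.ndarray, k: int = 3):
    """Classify by comparing the test query's LM output distribution
    against those of labeled anchors, with no probability calibration."""
    eps = 1e-12  # avoid log(0)
    kl = np.sum(test_dist * (np.log(test_dist + eps)
                             - np.log(anchor_dists + eps)), axis=1)
    nearest = np.argsort(kl)[:k]
    labels, counts = np.unique(anchor_labels[nearest], return_counts=True)
    return labels[counts.argmax()]  # majority vote over the k nearest anchors
```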

UniRel: Unified Representation and Interaction for Joint Relational Triple Extraction

1 code implementation · 16 Nov 2022 · Wei Tang, Benfeng Xu, Yuyue Zhao, Zhendong Mao, Yifeng Liu, Yong Liao, Haiyong Xie

Relational triple extraction is challenging because of the difficulty of capturing the rich correlations between entities and relations.

Relation Extraction
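
A minimal sketch of a unified interaction map, assuming sentence tokens and relation tokens are encoded as one sequence so that entity-entity, entity-relation, and relation-relation correlations all land in a single matrix; the dot-product scoring is a stand-in for the paper's attention-derived map:

```python
import torch

def interaction_map(token_states: torch.Tensor) -> torch.Tensor:
    """token_states: (L + R, d) hidden states for L sentence tokens
    concatenated with R relation tokens from a shared encoder."""
    scores = token_states @ token_states.T / token_states.shape[-1] ** 0.5
    return torch.sigmoid(scores)  # cell (i, j): probability tokens i, j interact
```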

Building Chinese Biomedical Language Models via Multi-Level Text Discrimination

1 code implementation · 14 Oct 2021 · Quan Wang, Songtai Dai, Benfeng Xu, Yajuan Lyu, Yong Zhu, Hua Wu, Haifeng Wang

In this work we introduce eHealth, a Chinese biomedical PLM built from scratch with a new pre-training framework.

Domain Adaptation

Curriculum Learning for Natural Language Understanding

no code implementations · ACL 2020 · Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, Yongdong Zhang

With the great success of pre-trained language models, the pretrain-finetune paradigm has become the dominant solution for natural language understanding (NLU) tasks.

Natural Language Understanding
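
A minimal sketch of an easy-to-hard curriculum schedule, assuming a scalar difficulty score per training example; the staging below is a generic curriculum, not the paper's specific difficulty metric or arrangement:

```python
def curriculum_stages(examples, difficulty, num_stages=4):
    """Sort training data from easy to hard, then grow the training pool
    stage by stage until the full set is used."""
    ranked = sorted(examples, key=difficulty)
    for s in range(1, num_stages + 1):
        yield ranked[: len(ranked) * s // num_stages]
```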
