no code implementations • 19 Aug 2024 • Qizhou Chen, Taolin Zhang, Chengyu Wang, Xiaofeng He, Dakan Wang, Tingting Liu
Recent research discovered that the mid-layer representation of the subject's final token in a prompt has a strong influence on factual predictions, and developed Large Language Model (LLM) editing techniques based on this observation.
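For illustration, a minimal sketch of inspecting such a representation with the Hugging Face transformers library (the "gpt2" model, the prompt, and the choice of mid-layer index are illustrative assumptions, not the paper's setup):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

prompt = "The Eiffel Tower is located in"
subject = "The Eiffel Tower"

enc = tokenizer(prompt, return_tensors="pt")
# locate the last token of the subject span (simple prefix heuristic)
subject_last_idx = len(tokenizer(subject)["input_ids"]) - 1

with torch.no_grad():
    out = model(**enc)

# hidden_states[0] is the embedding layer; pick a middle transformer layer
mid_layer = len(out.hidden_states) // 2
subject_vec = out.hidden_states[mid_layer][0, subject_last_idx]
print(subject_vec.shape)  # torch.Size([768]) for GPT-2 small
```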
1 code implementation • 24 Jun 2024 • Dongyang Li, Taolin Zhang, Jiali Deng, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue
Specifically, to retrieve the tokens with similar meanings for the semantic data augmentation across different languages, we propose a sequential clustering process in 3 stages: within a single language, across multiple languages of a language family, and across languages from multiple language families.
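A minimal sketch of this three-stage idea (synthetic embeddings and an invented language/family grouping; not the paper's implementation):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# token embeddings grouped as {family: {language: (n_tokens, dim) array}}
data = {
    "romance": {"es": rng.normal(size=(50, 32)), "fr": rng.normal(size=(50, 32))},
    "germanic": {"de": rng.normal(size=(50, 32)), "en": rng.normal(size=(50, 32))},
}

def centroids(X, k):
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

# stage 1: cluster tokens within a single language
lang_c = {f: {l: centroids(X, 8) for l, X in langs.items()} for f, langs in data.items()}
# stage 2: cluster across the languages of one family (over the stage-1 centroids)
family_c = {f: centroids(np.vstack(list(c.values())), 6) for f, c in lang_c.items()}
# stage 3: cluster across language families
global_c = centroids(np.vstack(list(family_c.values())), 4)
print(global_c.shape)  # (4, 32)
```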
1 code implementation • 24 Jun 2024 • Dongyang Li, Taolin Zhang, Longtao Huang, Chengyu Wang, Xiaofeng He, Hui Xue
Knowledge-enhanced pre-trained language models (KEPLMs) leverage relation triples from knowledge graphs (KGs) and integrate these external data sources into language models via self-supervised learning.
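One common injection recipe, shown here only as a hedged sketch (the toy triple store, the verbalization, and the model name are placeholders; the paper's self-supervised objectives are not reproduced):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# placeholder knowledge graph: entity mention -> relation triples
kg = {"Paris": [("Paris", "capital_of", "France")]}

def inject_triples(sentence, mentions):
    # verbalize retrieved triples and append them to the input text
    verbalized = [" ".join(t) for m in mentions for t in kg.get(m, [])]
    if not verbalized:
        return sentence
    return sentence + " [SEP] " + " ; ".join(verbalized)

text = inject_triples("Paris hosted the 2024 Olympics.", ["Paris"])
outputs = model(**tokenizer(text, return_tensors="pt"))
print(outputs.last_hidden_state.shape)
```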
no code implementations • 24 Jun 2024 • Dongyang Li, Junbing Yan, Taolin Zhang, Chengyu Wang, Xiaofeng He, Longtao Huang, Hui Xue, Jun Huang
Retrieval augmented generation (RAG) exhibits outstanding performance in promoting the knowledge capabilities of large language models (LLMs) with retrieved documents related to user queries.
1 code implementation • 31 May 2024 • Taolin Zhang, Qizhou Chen, Dongyang Li, Chengyu Wang, Xiaofeng He, Longtao Huang, Hui Xue, Jun Huang
(2) Considering that auxiliary parameters are required to store the knowledge for sequential editing, we construct a new dataset named DAFSet, covering recent, popular, long-tail and robust properties to enhance the generality of sequential editing.
no code implementations • 6 May 2024 • Qizhou Chen, Taolin Zhang, Xiaofeng He, Dongyang Li, Chengyu Wang, Longtao Huang, Hui Xue
Model editing aims to correct outdated or erroneous knowledge in large language models (LLMs) without the need for costly retraining.
no code implementations • 4 May 2024 • Taolin Zhang, Dongyang Li, Qizhou Chen, Chengyu Wang, Longtao Huang, Hui Xue, Xiaofeng He, Jun Huang
The reordering learning process is divided into two steps according to the quality of the generated responses: document order adjustment and document representation enhancement.
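A minimal sketch of the first step, document order adjustment, using a generic similarity reranker (the sentence-transformers encoder and the toy documents are assumptions; the paper's learned reordering and representation enhancement are not reproduced):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

query = "What causes tides on Earth?"
retrieved_docs = [
    "Tides are caused mainly by the gravitational pull of the Moon.",
    "The Earth has one natural satellite.",
    "Ocean tides also respond to the Sun's gravity, though more weakly.",
]

# re-score retrieved documents against the query and adjust their order
q_emb = encoder.encode(query, convert_to_tensor=True)
d_emb = encoder.encode(retrieved_docs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)[0]

reordered = [doc for _, doc in sorted(zip(scores.tolist(), retrieved_docs), reverse=True)]
print(reordered)  # documents passed to the generator in the adjusted order
```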
1 code implementation • 25 Mar 2024 • Qian Chen, Dongyang Li, Xiaofeng He, Hongzhao Li, Hongyu Yi
The research focus has shifted to Hierarchical Attribution (HA) for its ability to model feature interactions.
no code implementations • 17 Mar 2024 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Longtao Huang, Hui Xue, Wei Zhang
KEPLMs are pre-trained models that utilize external knowledge to enhance language understanding.
no code implementations • 17 Jan 2024 • Hao Qu, Lilian Zhang, Jun Mao, Junbo Tie, Xiaofeng He, Xiaoping Hu, Yifei Shi, Changhao Chen
The performance of visual SLAM in complex, real-world scenarios is often compromised by unreliable feature extraction and matching when using handcrafted features.
1 code implementation • 13 Dec 2023 • Qian Chen, Taolin Zhang, Dongyang Li, Xiaofeng He
The minimal feature removal problem in the post-hoc explanation area aims to identify the minimal feature set (MFS).
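As a rough illustration of the problem (a greedy masking baseline on a toy classifier, not the paper's method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 2 * X[:, 3] > 0).astype(int)   # only features 0 and 3 matter
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
original_pred = clf.predict([x])[0]

# greedily mask features; keep a feature only if removing it flips the prediction
masked = x.copy()
kept = set(range(len(x)))
for j in range(len(x)):
    trial = masked.copy()
    trial[j] = 0.0
    if clf.predict([trial])[0] == original_pred:
        masked = trial          # removal preserves the prediction; drop feature j
        kept.discard(j)

print("approximate minimal feature set:", sorted(kept))
```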
no code implementations • 12 Nov 2023 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang
Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps.
no code implementations • 12 Nov 2023 • Ruyao Xu, Taolin Zhang, Chengyu Wang, Zhongjie Duan, Cen Chen, Minghui Qiu, Dawei Cheng, Xiaofeng He, Weining Qian
In the experiments, we evaluate KANGAROO over various knowledge-aware and general NLP tasks in both full and few-shot learning settings, significantly outperforming various KEPLM training paradigms in closed domains.
no code implementations • 11 May 2023 • Dongyang Li, Ruixue Ding, Qiang Zhang, Zheng Li, Boli Chen, Pengjun Xie, Yao Xu, Xin Li, Ning Guo, Fei Huang, Xiaofeng He
With the fast pace of development of geographic applications, it is essential to design automated and intelligent models to handle the large volume of information.
no code implementations • 16 Nov 2022 • Hao Qu, Lilian Zhang, Xiaoping Hu, Xiaofeng He, Xianfei Pan, Changhao Chen
To address this, we propose SelfOdom, a self-supervised dual-network framework that can robustly and consistently learn and generate pose and depth estimates at global scale from monocular images.
1 code implementation • 11 Oct 2022 • Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He
Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis.
1 code implementation • Findings (ACL) 2022 • Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He
In this paper, we propose a Hierarchical Contrastive Learning framework for Distantly Supervised Relation Extraction (HiCLRE) to reduce noisy sentences, which integrates global structural information and local fine-grained interactions.
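A minimal sketch of the contrastive building block (a standard InfoNCE-style loss on synthetic vectors; it does not reproduce HiCLRE's multi-granularity design):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    # anchor, positive: (d,); negatives: (n, d)
    pos = F.cosine_similarity(anchor, positive, dim=0) / temperature
    neg = F.cosine_similarity(anchor.expand(len(negatives), -1), negatives, dim=1) / temperature
    logits = torch.cat([pos.unsqueeze(0), neg])
    # pull the positive pair together, push the negatives away
    return -F.log_softmax(logits, dim=0)[0]

anchor, positive = torch.randn(128), torch.randn(128)
negatives = torch.randn(16, 128)
print(info_nce(anchor, positive, negatives).item())
```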
1 code implementation • 2 Dec 2021 • Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve their language understanding abilities.
2 code implementations • ACL 2021 • Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He
Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts to enhance their language understanding abilities.
3 code implementations • 17 May 2021 • Lu Wang, Xiaofu Chang, Shuang Li, Yunfei Chu, Hui Li, Wei Zhang, Xiaofeng He, Le Song, Jingren Zhou, Hongxia Yang
Secondly, on top of the proposed graph transformer, we introduce a two-stream encoder that separately extracts representations from temporal neighborhoods associated with the two interaction nodes and then utilizes a co-attentional transformer to model inter-dependencies at a semantic level.
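A minimal sketch of the two-stream-plus-co-attention idea in PyTorch (dimensions, layer choices, and the mean pooling are illustrative; this is not the paper's architecture):

```python
import torch
import torch.nn as nn

d_model, n_heads, neigh_len = 64, 4, 10

stream_a = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
stream_b = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
co_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

# features of the temporal neighborhoods of the two interacting nodes
neigh_a = torch.randn(1, neigh_len, d_model)
neigh_b = torch.randn(1, neigh_len, d_model)

# encode each neighborhood in its own stream
h_a, h_b = stream_a(neigh_a), stream_b(neigh_b)
# co-attention: node A's neighborhood attends to node B's, and vice versa
a_attends_b, _ = co_attn(h_a, h_b, h_b)
b_attends_a, _ = co_attn(h_b, h_a, h_a)
interaction_repr = torch.cat([a_attends_b.mean(1), b_attends_a.mean(1)], dim=-1)
print(interaction_repr.shape)  # torch.Size([1, 128])
```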
1 code implementation • Findings (ACL) 2021 • Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang
In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to predict answers to medical questions and the corresponding support sentences from medical information sources simultaneously, in order to ensure the high reliability of medical knowledge serving.
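A minimal sketch of the multi-target formulation: a shared encoder's outputs feed one head for answer spans and another for support sentences (the encoder is stubbed with random tensors; hidden sizes are assumptions, not the paper's model):

```python
import torch
import torch.nn as nn

class MultiTargetMRCHead(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.span_head = nn.Linear(hidden, 2)      # start / end logits per token
        self.support_head = nn.Linear(hidden, 1)   # support score per sentence

    def forward(self, token_states, sentence_states):
        start_logits, end_logits = self.span_head(token_states).split(1, dim=-1)
        support_logits = self.support_head(sentence_states).squeeze(-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1), support_logits

head = MultiTargetMRCHead()
token_states = torch.randn(1, 128, 768)      # per-token encoder outputs
sentence_states = torch.randn(1, 12, 768)    # pooled per-sentence representations
start, end, support = head(token_states, sentence_states)
print(start.shape, end.shape, support.shape)
```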
no code implementations • ACL 2020 • Chengyu Wang, Xiaofeng He
The hypernymy detection task has been addressed under various frameworks.
2 code implementations • EMNLP 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He
In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner to solve a group of similar NLP tasks for neural language models.
no code implementations • 25 Feb 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He
We further combine a meta-learning process over the auxiliary task distribution and supervised learning to train the neural lexical relation classifier.
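A hedged sketch of combining a meta-learning loop over auxiliary tasks with ordinary supervised training (a first-order, Reptile-style update on synthetic data; not necessarily the paper's exact procedure):

```python
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
loss_fn = nn.CrossEntropyLoss()
main_opt = torch.optim.SGD(model.parameters(), lr=0.01)
meta_lr = 0.1

def sample_batch():
    # placeholder for a batch from an auxiliary task or the target dataset
    return torch.randn(32, 16), torch.randint(0, 4, (32,))

for step in range(100):
    # meta step: adapt a copy on an auxiliary task, then move the
    # meta-parameters toward the adapted parameters
    fast = copy.deepcopy(model)
    fast_opt = torch.optim.SGD(fast.parameters(), lr=0.01)
    for _ in range(5):
        x, y = sample_batch()
        fast_opt.zero_grad()
        loss_fn(fast(x), y).backward()
        fast_opt.step()
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)

    # supervised step on the target (lexical relation) data
    x, y = sample_batch()
    main_opt.zero_grad()
    loss_fn(model(x), y).backward()
    main_opt.step()
```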
no code implementations • 11 Feb 2020 • Yun Hua, Xiangfeng Wang, Bo Jin, Wenhao Li, Junchi Yan, Xiaofeng He, Hongyuan Zha
Despite the success of existing meta reinforcement learning methods, they still have difficulty learning a meta-policy effectively for RL problems with sparse rewards.
no code implementations • 4 Oct 2019 • Lu Wang, Wenchao Yu, Wei Wang, Wei Cheng, Wei Zhang, Hongyuan Zha, Xiaofeng He, Haifeng Chen
Graph representation learning, aiming to learn low-dimensional representations which capture the geometric dependencies between nodes in the original graph, has gained increasing popularity in a variety of graph analysis tasks, including node classification and link prediction.
no code implementations • ACL 2019 • Chengyu Wang, Xiaofeng He, Aoying Zhou
Lexical relations describe how meanings of terms relate to each other.
no code implementations • COLING 2018 • Yan Fan, Chengyu Wang, Xiaofeng He
The goal is to learn a classifier on pre-defined relations and discover new relations expressed in texts.
no code implementations • 4 Jul 2018 • Lu Wang, Wei Zhang, Xiaofeng He, Hongyuan Zha
Prior studies recommend treatments using either supervised learning (e.g., matching an indicator signal that denotes doctor prescriptions) or reinforcement learning (e.g., maximizing an evaluation signal that indicates cumulative reward from survival rates).
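A minimal sketch contrasting the two signals (synthetic data; the treatment policy network and the reward values are placeholders):

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))  # 5 candidate treatments
states = torch.randn(64, 10)

# (a) supervised: match recorded doctor prescriptions (indicator labels)
doctor_actions = torch.randint(0, 5, (64,))
supervised_loss = nn.CrossEntropyLoss()(policy(states), doctor_actions)

# (b) reinforcement: REINFORCE-style loss weighting log-probs by observed returns
logits = policy(states)
dist = torch.distributions.Categorical(logits=logits)
actions = dist.sample()
returns = torch.randn(64)     # placeholder for cumulative reward (e.g., survival-based)
rl_loss = -(dist.log_prob(actions) * returns).mean()

print(supervised_loss.item(), rl_loss.item())
```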
no code implementations • EMNLP 2017 • Chengyu Wang, Yan Fan, Xiaofeng He, Aoying Zhou
User generated categories (UGCs) are short texts that reflect how people describe and organize entities, expressing rich semantic relations implicitly.
no code implementations • EMNLP 2017 • Chengyu Wang, Xiaofeng He, Aoying Zhou
A taxonomy is a semantic hierarchy, consisting of concepts linked by is-a relations.
no code implementations • ACL 2017 • Chengyu Wang, Junchi Yan, Aoying Zhou, Xiaofeng He
Finding the correct hypernyms for entities is essential for taxonomy learning, fine-grained entity categorization, query understanding, etc.
no code implementations • COLING 2016 • Chengyu Wang, Xiaofeng He
Hypernym-hyponym ("is-a") relations are key components in taxonomies, object hierarchies and knowledge graphs.
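For illustration, a tiny Hearst-style pattern matcher that surfaces such is-a candidates from text (a classic heuristic, not the method of this paper):

```python
import re

text = "He studies animals such as dogs, cats, and horses."
pattern = re.compile(r"(\w+) such as ((?:\w+(?:, )?(?:and )?)+)")

for hypernym, hyponyms in pattern.findall(text):
    for hyponym in re.split(r", and |, | and ", hyponyms):
        print(hyponym.strip(), "is-a", hypernym)
```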
1 code implementation • Proceedings 2001 IEEE International Conference on Data Mining • Chris H.Q. Ding, Xiaofeng He, Hongyuan Zha, Ming Gu, Horst D. Simon
In this paper, we propose a new algorithm for graph partitioning with an objective function that follows the min-max clustering principle.
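A minimal sketch of two-way partitioning via the standard spectral relaxation (thresholding the second-smallest eigenvector of the normalized Laplacian, which is closely related to, but not identical with, the paper's min-max cut algorithm; the toy graph is illustrative):

```python
import numpy as np

# two loosely connected triangles: nodes 0-2 and 3-5, bridged by edge (2, 3)
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian

eigvals, eigvecs = np.linalg.eigh(L_sym)
fiedler = eigvecs[:, 1]          # second-smallest eigenvector
partition = fiedler >= 0         # threshold at zero to split the graph in two
print(partition)                 # separates {0, 1, 2} from {3, 4, 5}
```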