Search Results for author: Xiaofeng He

Found 21 papers, 8 papers with code

GeoGLUE: A GeoGraphic Language Understanding Evaluation Benchmark

no code implementations 11 May 2023 Dongyang Li, Ruixue Ding, Qiang Zhang, Zheng Li, Boli Chen, Pengjun Xie, Yao Xu, Xin Li, Ning Guo, Fei Huang, Xiaofeng He

With the rapid development of geographic applications, automated and intelligent models are essential for handling the large volume of geographic information.

Entity Alignment Natural Language Understanding

Self-supervised Egomotion and Depth Learning via Bi-directional Coarse-to-Fine Scale Recovery

no code implementations 16 Nov 2022 Hao Qu, Lilian Zhang, Xiaoping Hu, Xiaofeng He, Xianfei Pan, Changhao Chen

The scale-ambiguity problem is solved by introducing a novel two-stage coarse-to-fine scale recovery strategy that jointly refines coarse poses and depths.

Autonomous Driving Self-Learning +1
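
A minimal sketch of the coarse-to-fine idea above, assuming a simple median-alignment rule: a single global scale is first recovered against a reference depth, then per-region residual scales refine it, and the pose translation is rescaled consistently. The function name and the alignment rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def coarse_to_fine_scale_recovery(pred_depth, ref_depth, pred_translation):
    """Hypothetical two-stage scale recovery (illustration only, not the
    paper's method): a coarse global scale from median depth alignment,
    followed by a fine per-region correction."""
    # Stage 1 (coarse): one global scale aligning predicted to reference depth.
    coarse_scale = np.median(ref_depth) / np.median(pred_depth)
    depth = pred_depth * coarse_scale

    # Stage 2 (fine): per-quadrant residual scales refine the coarse estimate.
    h, w = depth.shape
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            residual = np.median(ref_depth[rows, cols]) / np.median(depth[rows, cols])
            depth[rows, cols] *= residual

    # Pose translation is rescaled jointly so egomotion and depth stay consistent.
    translation = pred_translation * coarse_scale
    return depth, translation
```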

Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training

1 code implementation 11 Oct 2022 Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He

Recently, knowledge-enhanced pre-trained language models (KEPLMs) have improved context-aware representations by learning from structured relations in knowledge graphs and/or from linguistic knowledge derived from syntactic or dependency analysis.

Knowledge Graphs Language Modelling +2

HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction

1 code implementation Findings (ACL) 2022 Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He

In this paper, we propose a Hierarchical Contrastive Learning framework for Distantly Supervised Relation Extraction (HiCLRE) to reduce noisy sentences; it integrates global structural information and local fine-grained interactions.

Contrastive Learning Data Augmentation +1
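
For readers unfamiliar with the contrastive component, the sketch below implements a generic InfoNCE loss between two aligned batches of sentence representations. HiCLRE's hierarchical levels and augmentation scheme are not reproduced here, so treat this as background, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.1):
    """Standard InfoNCE contrastive loss between two aligned batches of
    representations (generic sketch, not HiCLRE's hierarchical loss).
    anchor, positive: (batch, dim) tensors; row i of each is a positive pair."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    # Similarity of every anchor against every candidate; diagonal = positives.
    logits = anchor @ positive.t() / temperature
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)
```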

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

1 code implementation 2 Dec 2021 Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve their language understanding abilities.

Knowledge Graphs Knowledge Probing +3
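
The phrase "relation triples injected from knowledge graphs" can be unpacked with a generic sketch: look up an embedding for the entity linked to a mention and add it onto the mention's token states. This is a common KEPLM pattern stated as an assumption; DKPLM's decomposable injection mechanism differs in detail.

```python
import torch
import torch.nn as nn

class TripleInjection(nn.Module):
    """Generic knowledge-injection sketch (an assumption, not DKPLM's
    decomposable mechanism): add a learned entity embedding, looked up
    from knowledge-graph triples, onto the tokens mentioning that entity."""

    def __init__(self, hidden=768, num_entities=10000):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, hidden)

    def forward(self, token_states, entity_ids, mention_mask):
        # token_states: (batch, seq, hidden); entity_ids: (batch,) linked entity.
        # mention_mask: float (batch, seq), 1.0 where the entity is mentioned.
        knowledge = self.entity_emb(entity_ids).unsqueeze(1)  # (batch, 1, hidden)
        return token_states + mention_mask.unsqueeze(-1) * knowledge
```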

SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

2 code implementations ACL 2021 Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He

Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts that enhance their language understanding abilities.

Language Modelling Natural Language Inference +1

TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning

1 code implementation 17 May 2021 Lu Wang, Xiaofu Chang, Shuang Li, Yunfei Chu, Hui Li, Wei Zhang, Xiaofeng He, Le Song, Jingren Zhou, Hongxia Yang

Secondly, on top of the proposed graph transformer, we introduce a two-stream encoder that separately extracts representations from temporal neighborhoods associated with the two interaction nodes and then utilizes a co-attentional transformer to model inter-dependencies at a semantic level.

Contrastive Learning Graph Learning +2
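
A minimal sketch of that two-stream design, assuming standard PyTorch layers: each node's temporal neighborhood is encoded by its own transformer stream, and a co-attention step lets each stream query the other. Dimensions and module choices are illustrative, not TCL's released code.

```python
import torch
import torch.nn as nn

class CoAttentionTwoStream(nn.Module):
    """Illustrative two-stream encoder with co-attention (not TCL's code).
    Each stream encodes one node's temporal neighborhood; cross-attention
    then models inter-dependencies between the two streams."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.encoder_u = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder_v = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.cross_uv = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_vu = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, neigh_u, neigh_v):
        # Stream encoders over each node's temporal neighborhood sequence.
        h_u = self.encoder_u(neigh_u)   # (batch, len_u, dim)
        h_v = self.encoder_v(neigh_v)   # (batch, len_v, dim)
        # Co-attention: each stream queries the other stream's representations.
        a_u, _ = self.cross_uv(h_u, h_v, h_v)
        a_v, _ = self.cross_vu(h_v, h_u, h_u)
        # Pool to one vector per node for downstream link prediction.
        return a_u.mean(dim=1), a_v.mean(dim=1)
```

The pooled pair of vectors would then feed whatever interaction scorer the task requires, e.g. a dot product for link prediction.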

Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources

1 code implementation Findings (ACL) 2021 Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang

In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to predict both answers to medical questions and the corresponding support sentences from medical information sources simultaneously, ensuring the high reliability of medical knowledge services.

Machine Reading Comprehension Multi-Task Learning +1
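
The multi-target objective can be pictured as a joint loss over answer-span extraction and support-sentence selection. The sketch below simply blends the two cross-entropy terms; the weighting and the exact prediction heads are assumptions, not the released model.

```python
import torch
import torch.nn.functional as F

def multi_target_mrc_loss(start_logits, end_logits, sent_logits,
                          start_gold, end_gold, sent_gold, alpha=0.5):
    """Generic joint loss for multi-target MRC (illustrative assumption):
    answer-span extraction plus support-sentence classification.
    sent_gold: float {0, 1} tensor of shape (batch, num_sentences)."""
    span_loss = (F.cross_entropy(start_logits, start_gold)
                 + F.cross_entropy(end_logits, end_gold)) / 2
    # Support sentences are a per-sentence binary decision.
    support_loss = F.binary_cross_entropy_with_logits(sent_logits, sent_gold)
    return alpha * span_loss + (1 - alpha) * support_loss
```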

Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining

2 code implementations EMNLP 2020 Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner to solve a group of similar NLP tasks for neural language models.

Few-Shot Learning Language Modelling
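
As a rough illustration of fine-tuning one shared model across a group of similar tasks, the loop below samples batches from several domain loaders and averages their losses. The sampling rule and the plain averaged objective are illustrative assumptions; MFT's actual procedure adds meta-learning terms beyond this.

```python
import random
import torch

def meta_fine_tune(model, domain_loaders, optimizer, steps=1000):
    """Illustrative multi-domain fine-tuning loop (not MFT's exact objective):
    one shared model, batches sampled across domains, averaged task loss.
    Assumes a HuggingFace-style model whose output exposes a scalar .loss."""
    iters = {d: iter(dl) for d, dl in domain_loaders.items()}
    for _ in range(steps):
        optimizer.zero_grad()
        losses = []
        for domain in random.sample(list(domain_loaders),
                                    k=min(2, len(domain_loaders))):
            try:
                batch = next(iters[domain])
            except StopIteration:
                # Restart an exhausted domain loader.
                iters[domain] = iter(domain_loaders[domain])
                batch = next(iters[domain])
            losses.append(model(**batch).loss)
        torch.stack(losses).mean().backward()
        optimizer.step()
```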

KEML: A Knowledge-Enriched Meta-Learning Framework for Lexical Relation Classification

no code implementations 25 Feb 2020 Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

We further combine a meta-learning process over the auxiliary task distribution and supervised learning to train the neural lexical relation classifier.

General Classification Meta-Learning +1
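
A compact way to picture the meta-learning phase is a Reptile-style update over sampled auxiliary tasks, followed by ordinary supervised training of the relation classifier. The sketch below uses Reptile as a stand-in and is an assumption, not KEML's procedure.

```python
import copy
import torch

def reptile_meta_step(model, task_batches, inner_lr=1e-3, meta_lr=0.1):
    """Reptile-style meta-update over sampled auxiliary tasks (a stand-in
    sketch, not KEML's method). Assumes a model whose output has .loss.
    Afterwards, the model would be trained with ordinary supervised
    learning on lexical relation labels."""
    meta_weights = copy.deepcopy(model.state_dict())
    for batch in task_batches:                      # one inner loop per task
        model.load_state_dict(meta_weights)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        opt.zero_grad()
        model(**batch).loss.backward()
        opt.step()
        # Move the meta-weights toward the task-adapted weights.
        adapted = model.state_dict()
        for k in meta_weights:
            if meta_weights[k].is_floating_point():
                meta_weights[k] += meta_lr * (adapted[k] - meta_weights[k])
    model.load_state_dict(meta_weights)
```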

HMRL: Hyper-Meta Learning for Sparse Reward Reinforcement Learning Problem

no code implementations 11 Feb 2020 Yun Hua, Xiangfeng Wang, Bo Jin, Wenhao Li, Junchi Yan, Xiaofeng He, Hongyuan Zha

Despite the success of existing meta reinforcement learning methods, they still have difficulty learning a meta-policy effectively for RL problems with sparse rewards.

Meta-Learning Meta Reinforcement Learning +2

Learning Robust Representations with Graph Denoising Policy Network

no code implementations 4 Oct 2019 Lu Wang, Wenchao Yu, Wei Wang, Wei Cheng, Wei Zhang, Hongyuan Zha, Xiaofeng He, Haifeng Chen

Graph representation learning, aiming to learn low-dimensional representations which capture the geometric dependencies between nodes in the original graph, has gained increasing popularity in a variety of graph analysis tasks, including node classification and link prediction.

Denoising Graph Representation Learning +2

Supervised Reinforcement Learning with Recurrent Neural Network for Dynamic Treatment Recommendation

no code implementations 4 Jul 2018 Lu Wang, Wei Zhang, Xiaofeng He, Hongyuan Zha

Prior relevant studies recommend treatments using either supervised learning (e.g., matching the indicator signal that denotes doctor prescriptions) or reinforcement learning (e.g., maximizing the evaluation signal that indicates cumulative reward from survival rates).

Recommendation Systems reinforcement-learning +1
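
The contrast between the two signals can be made concrete by blending them in one loss: a supervised cross-entropy term matching doctor prescriptions plus a REINFORCE-style term maximizing the evaluation signal. The weighting and the estimator below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def combined_treatment_loss(action_logits, doctor_actions, rewards, epsilon=0.5):
    """Illustrative blend of supervised and reinforcement signals for
    treatment recommendation (an assumption, not the paper's exact loss)."""
    # Supervised signal: match the doctor's prescribed action.
    supervised = F.cross_entropy(action_logits, doctor_actions)
    # Evaluation signal: REINFORCE-style term over sampled actions,
    # weighting log-probabilities by the observed reward.
    dist = torch.distributions.Categorical(logits=action_logits)
    sampled = dist.sample()
    reinforce = -(dist.log_prob(sampled) * rewards).mean()
    return epsilon * supervised + (1 - epsilon) * reinforce
```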

Learning Fine-grained Relations from Chinese User Generated Categories

no code implementations EMNLP 2017 Chengyu Wang, Yan Fan, Xiaofeng He, Aoying Zhou

User generated categories (UGCs) are short texts that reflect how people describe and organize entities, expressing rich semantic relations implicitly.

Graph Mining Relation Extraction +1

Transductive Non-linear Learning for Chinese Hypernym Prediction

no code implementations ACL 2017 Chengyu Wang, Junchi Yan, Aoying Zhou, Xiaofeng He

Finding the correct hypernyms for entities is essential for taxonomy learning, fine-grained entity categorization, query understanding, etc.

Relation Extraction Transductive Learning

Chinese Hypernym-Hyponym Extraction from User Generated Categories

no code implementations COLING 2016 Chengyu Wang, Xiaofeng He

Hypernym-hyponym ("is-a") relations are key components in taxonomies, object hierarchies and knowledge graphs.

Knowledge Graphs Machine Translation +4
