Search Results for author: Taolin Zhang

Found 8 papers, 7 papers with code

Vision-Language Pre-training with Object Contrastive Learning for 3D Scene Understanding

no code implementations 18 May 2023 Taolin Zhang, Sunan He, Tao Dai, Bin Chen, Zhi Wang, Shu-Tao Xia

In recent years, vision-language pre-training frameworks have made significant progress in natural language processing and computer vision, achieving remarkable performance improvements on various downstream tasks.

Contrastive Learning · Scene Understanding +1

Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training

1 code implementation 11 Oct 2022 Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He

Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis.

Knowledge Graphs · Language Modelling +2

FedEgo: Privacy-preserving Personalized Federated Graph Learning with Ego-graphs

1 code implementation 29 Aug 2022 Taolin Zhang, Chuan Chen, Yaomin Chang, Lin Shu, Zibin Zheng

As special information carriers containing both structure and feature information, graphs are widely used in graph mining, e.g., by Graph Neural Networks (GNNs).

Federated Learning · Graph Learning +2
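For readers unfamiliar with the ego-graphs named in the title above, the sketch below shows how per-node ego-graphs (local k-hop subgraphs kept on each client) can be extracted with networkx. The radius and toy usage are illustrative assumptions, not the paper's actual federated pipeline.

```python
# Minimal sketch: extracting per-node ego-graphs, the local subgraphs that an
# ego-graph-based federated method keeps on each client. Illustrative only;
# not FedEgo's actual training procedure.
import networkx as nx

def extract_ego_graphs(graph: nx.Graph, radius: int = 2):
    """Return a {node: ego-graph} map of `radius`-hop neighborhoods."""
    return {node: nx.ego_graph(graph, node, radius=radius) for node in graph.nodes}

# Toy usage: a client would share only information derived from its own
# ego-graphs, never the full graph.
g = nx.karate_club_graph()
ego = extract_ego_graphs(g, radius=1)
print(ego[0].number_of_nodes())  # size of node 0's 1-hop neighborhood (plus itself)
```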

HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction

1 code implementation Findings (ACL) 2022 Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He

In this paper, we propose a hierarchical contrastive learning Framework for Distantly Supervised relation extraction (HiCLRE) to reduce noisy sentences, which integrates global structural information and local fine-grained interaction.

Contrastive Learning · Data Augmentation +1
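The abstract above mentions contrastive learning over sentence representations; below is a minimal sketch of a standard InfoNCE-style objective, where other in-batch items serve as negatives. It does not reproduce HiCLRE's hierarchical (multi-level) formulation and the tensor names are assumptions for illustration.

```python
# Minimal InfoNCE-style contrastive loss over sentence representations.
# Illustrative sketch only; not HiCLRE's actual hierarchical objective.
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor, temperature: float = 0.1):
    """anchors, positives: (batch, dim); row i of positives matches row i of anchors."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature             # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)      # diagonal entries are the positive pairs

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
```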

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

1 code implementation 2 Dec 2021 Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities.

Knowledge Graphs · Knowledge Probing +3
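To make the KEPLM idea above concrete, the sketch below verbalizes knowledge-graph relation triples and attaches them to a pre-training sentence. This is a generic illustration of triple injection, not DKPLM's actual decomposable injection mechanism; the separator and helper names are assumptions.

```python
# Generic illustration of verbalizing knowledge-graph triples and appending
# them to an input sentence for knowledge-enhanced pre-training.
# NOT DKPLM's actual injection mechanism.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

def inject_triples(sentence: str, triples: List[Triple], sep: str = " [SEP] ") -> str:
    """Append verbalized triples to the input text, e.g. for a masked-LM objective."""
    if not triples:
        return sentence
    verbalized = " ; ".join(f"{h} {r} {t}" for h, r, t in triples)
    return sentence + sep + verbalized

print(inject_triples(
    "Aspirin is commonly prescribed for fever.",
    [("aspirin", "treats", "fever"), ("aspirin", "is_a", "NSAID")],
))
```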

SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

2 code implementations ACL 2021 Taolin Zhang, Zerui Cai, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He

Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts to enhance their language understanding abilities.

Language Modelling · Natural Language Inference +1

Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources

1 code implementation Findings (ACL) 2021 Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang

In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to simultaneously predict answers to medical questions and the corresponding support sentences from medical information sources, in order to ensure highly reliable medical knowledge services.

Machine Reading Comprehension · Multi-Task Learning +1
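The multi-target setup described above predicts answers and support sentences jointly; one common way to train such a model is with a weighted multi-task loss, sketched below. The tensor shapes, weighting, and function names are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a joint objective for multi-target MRC: answer span
# prediction plus support-sentence classification. Illustrative only.
import torch
import torch.nn.functional as F

def multi_target_loss(start_logits, end_logits, start_pos, end_pos,
                      support_logits, support_labels, alpha: float = 0.5):
    """start/end_logits: (batch, seq_len); support_logits: (batch, n_sent, 2)."""
    span_loss = F.cross_entropy(start_logits, start_pos) + F.cross_entropy(end_logits, end_pos)
    support_loss = F.cross_entropy(support_logits.view(-1, 2), support_labels.view(-1))
    return span_loss + alpha * support_loss

loss = multi_target_loss(
    torch.randn(4, 128), torch.randn(4, 128),
    torch.randint(0, 128, (4,)), torch.randint(0, 128, (4,)),
    torch.randn(4, 10, 2), torch.randint(0, 2, (4, 10)),
)
```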
