Search Results for author: Guangyi Lv

Found 6 papers, 0 papers with code

LadRa-Net: Locally-Aware Dynamic Re-read Attention Net for Sentence Semantic Matching

no code implementations · 6 Aug 2021 · Kun Zhang, Guangyi Lv, Le Wu, Enhong Chen, Qi Liu, Meng Wang

To overcome this problem and boost the performance of the attention mechanism, we propose a novel dynamic re-read attention, which can pay close attention to one small region of the sentences at each step and re-read the important parts for better sentence representations.

Language Modelling · Natural Language Inference · +1
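The abstract describes an iterative attention scheme that focuses sharply on a small region of the sentence at each step, conditioned on what has been read so far. The sketch below is an illustrative toy version of that idea, not the paper's exact formulation: all function names and the choice of a low softmax temperature to force narrow attention are assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def reread_attention(H, steps=3, tau=0.1):
    """Toy dynamic re-read attention over token representations.

    H: (L, d) array of token vectors. At each step we score every
    token against the current reading vector, sharpen the scores with
    a low temperature tau so attention concentrates on a small region,
    and re-read (re-pool) the sentence to update the representation.
    """
    v = H.mean(axis=0)                    # initial reading: sentence average
    for _ in range(steps):
        scores = H @ v                    # relevance of each token to current reading
        alpha = softmax(scores / tau)     # sharp weights -> one small region
        v = alpha @ H                     # re-read the important parts
    return v
```

Each pass lets the model revisit tokens that matter given its current understanding, which is the intuition behind "re-reading" in the abstract.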

DGA-Net: Dynamic Gaussian Attention Network for Sentence Semantic Matching

no code implementations · 9 Jun 2021 · Kun Zhang, Guangyi Lv, Meng Wang, Enhong Chen

Then, we develop a Dynamic Gaussian Attention (DGA) to dynamically capture the important parts and corresponding local contexts from a detailed perspective.

Language Modelling · Representation Learning
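The DGA described above weights tokens by a Gaussian centred on an important position, so the model attends to that position and its local context together. A minimal sketch of Gaussian positional attention follows; the function name and the idea that the centre `mu` and width `sigma` are predicted dynamically per step are assumptions drawn from the abstract, not the paper's implementation.

```python
import numpy as np

def gaussian_attention(H, mu, sigma):
    """Pool token representations under a Gaussian positional prior.

    H: (L, d) array of token vectors. mu is the (dynamically chosen)
    centre position and sigma controls how much of the surrounding
    local context is included.
    """
    L = H.shape[0]
    pos = np.arange(L)
    # Gaussian over positions: the centre token dominates, neighbours
    # contribute with weight decaying by distance.
    prior = np.exp(-((pos - mu) ** 2) / (2 * sigma ** 2))
    weights = prior / prior.sum()         # normalise to a distribution
    return weights @ H                    # (d,) summary of the local region
```

A small sigma narrows attention to essentially one token; a larger sigma blends in more of the local context, which matches the "important parts and corresponding local contexts" framing in the abstract.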

R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching

no code implementations · 16 Dec 2020 · Kun Zhang, Le Wu, Guangyi Lv, Meng Wang, Enhong Chen, Shulan Ruan

Sentence semantic matching is one of the fundamental tasks in natural language processing, which requires an agent to determine the semantic relation among input sentences.

Natural Language Processing · Relation Classification

Deep Technology Tracing for High-tech Companies

no code implementations · 2 Jan 2020 · Han Wu, Kun Zhang, Guangyi Lv, Qi Liu, Runlong Yu, Weihao Zhao, Enhong Chen, Jianhui Ma

Technological change and innovation are vitally important, especially for high-tech companies.

Skeptical Deep Learning with Distribution Correction

no code implementations · 9 Nov 2018 · Mingxiao An, Yongzhou Chen, Qi Liu, Chuanren Liu, Guangyi Lv, Fangzhao Wu, Jianhui Ma

Recently, deep neural networks have been used successfully for various classification tasks, especially for problems with massive amounts of perfectly labeled training data.

Classification · General Classification
