Search Results for author: Zhangyin Feng

Found 10 papers, 4 papers with code

One Model, Multiple Modalities: A Sparsely Activated Approach for Text, Sound, Image, Video and Code

1 code implementation • 12 May 2022 • Yong Dai, Duyu Tang, Liangxin Liu, Minghuan Tan, Cong Zhou, Jingquan Wang, Zhangyin Feng, Fan Zhang, Xueyu Hu, Shuming Shi

Moreover, our model supports self-supervised pretraining in the same sparsely activated manner, resulting in better-initialized parameters for the different modalities.

Image Retrieval • Text-to-Image Retrieval
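As a rough, illustrative sketch of what a sparsely activated multi-modality layer can look like (the class name, per-modality expert layout, and sizes below are assumptions for illustration, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class SparselyActivatedLayer(nn.Module):
    """Illustrative sketch: one feed-forward "expert" per modality; only the
    expert matching the input's modality is used in a forward pass."""

    def __init__(self, hidden_size, modalities=("text", "sound", "image", "video", "code")):
        super().__init__()
        self.experts = nn.ModuleDict({
            m: nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.GELU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for m in modalities
        })

    def forward(self, hidden_states, modality):
        # Only the selected expert's parameters participate in the computation,
        # so each modality can be pretrained without touching the other experts.
        return self.experts[modality](hidden_states)

layer = SparselyActivatedLayer(hidden_size=768)
text_hidden = torch.randn(2, 16, 768)        # (batch, seq_len, hidden)
out = layer(text_hidden, modality="text")    # activates only the "text" expert
```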

Pretraining Chinese BERT for Detecting Word Insertion and Deletion Errors

no code implementations • 26 Apr 2022 • Cong Zhou, Yong Dai, Duyu Tang, Enbo Zhao, Zhangyin Feng, Li Kuang, Shuming Shi

We achieve this by introducing a special token [null], the prediction of which stands for the non-existence of a word.

Language Modelling • Masked Language Modeling
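A minimal sketch of the [null]-token idea, assuming a Hugging Face Chinese BERT checkpoint and a hypothetical character-level interleaving scheme (the paper's actual training setup may differ):

```python
from transformers import BertTokenizer, BertForMaskedLM

# Illustrative sketch, assuming the "bert-base-chinese" checkpoint.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")

# Register the special [null] token and grow the embedding matrix accordingly.
tokenizer.add_special_tokens({"additional_special_tokens": ["[null]"]})
model.resize_token_embeddings(len(tokenizer))

def interleave_null(text):
    # Hypothetical preprocessing: place a [null] slot between adjacent characters.
    # Predicting [null] at a slot stands for "no word belongs here", while
    # predicting a real word signals a deletion error at that position.
    return " [null] ".join(list(text))

print(interleave_null("今天天气很好"))
# 今 [null] 天 [null] 天 [null] 气 [null] 很 [null] 好
```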

MarkBERT: Marking Word Boundaries Improves Chinese BERT

no code implementations • 12 Mar 2022 • Linyang Li, Yong Dai, Duyu Tang, Zhangyin Feng, Cong Zhou, Xipeng Qiu, Zenglin Xu, Shuming Shi

MarkBERT pushes the state of the art for Chinese named entity recognition from 95.4% to 96.5% on the MSRA dataset and from 82.8% to 84.2% on the OntoNotes dataset.

Chinese Named Entity Recognition • Named Entity Recognition +5
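A toy sketch of inserting explicit word-boundary markers into character-level Chinese text (the jieba segmenter and the [unused1] marker are illustrative assumptions; MarkBERT's actual segmenter and marker token may differ):

```python
import jieba  # off-the-shelf Chinese word segmenter, used here only for illustration

def mark_word_boundaries(text, marker="[unused1]"):
    """Toy sketch: keep character-level input but insert an explicit boundary
    marker between segmented words, exposing word boundaries to a
    character-based Chinese BERT."""
    words = jieba.cut(text)
    return f" {marker} ".join(" ".join(w) for w in words)

print(mark_word_boundaries("我喜欢自然语言处理"))
# e.g. 我 [unused1] 喜 欢 [unused1] 自 然 语 言 [unused1] 处 理
```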

GraphCodeBERT: Pre-training Code Representations with Data Flow

1 code implementation • ICLR 2021 • Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou

Instead of using the syntactic-level structure of code, such as the abstract syntax tree (AST), we use data flow in the pre-training stage: a semantic-level structure of code that encodes the "where-the-value-comes-from" relation between variables.

Clone Detection • Code Completion +7
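A toy approximation of the "where-the-value-comes-from" relation, extracting variable-level data-flow edges from Python assignments with the standard ast module (GraphCodeBERT builds its data flow graph over code tokens in its own pre-training pipeline; this only illustrates the relation itself):

```python
import ast

def data_flow_edges(source):
    """Toy approximation: derive "where-the-value-comes-from" edges between
    variables from Python assignment statements."""
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            targets = [t.id for t in node.targets if isinstance(t, ast.Name)]
            sources = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
            edges += [(tgt, "comes_from", src) for tgt in targets for src in sources]
    return edges

print(data_flow_edges("x = a + b\ny = x * 2\n"))
# [('x', 'comes_from', 'a'), ('x', 'comes_from', 'b'), ('y', 'comes_from', 'x')]
```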
