no code implementations • 11 Apr 2024 • Jiayi Wu, Renyu Zhu, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
Over the past few years, we have witnessed remarkable advancements in Code Pre-trained Models (CodePTMs).
2 code implementations • 21 Mar 2024 • Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, XiaoLi Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu
Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across various domains.
1 code implementation • 10 Mar 2024 • Fei Wang, Haoyu Liu, Haoyang Bi, Xiangzhuang Shen, Renyu Zhu, Runze Wu, Minmin Lin, Tangjie Lv, Changjie Fan, Qi Liu, Zhenya Huang, Enhong Chen
In this paper, we introduce a substantial crowdsourcing annotation dataset collected from a real-world crowdsourcing platform.
no code implementations • 20 Feb 2024 • Yingfan Liu, Renyu Zhu, Ming Gao
With the rapid development of big data and AI technology, programming is in high demand and has become an essential skill for students.
no code implementations • 15 Nov 2023 • Haoyu Liu, Fei Wang, Minmin Lin, Runze Wu, Renyu Zhu, Shiwei Zhao, Kai Wang, Tangjie Lv, Changjie Fan
These annotators can leave substantial historical annotation records on crowdsourcing platforms, which can benefit label aggregation but have been ignored by previous work.
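As a rough illustration of how such historical records could inform aggregation (a generic baseline sketch, not the method proposed in this paper), one simple option is to estimate each annotator's historical accuracy and use it as a vote weight:

```python
# Hedged sketch: weighted majority voting using assumed per-annotator historical accuracy.
from collections import defaultdict

def weighted_majority_vote(task_labels, annotator_accuracy):
    """task_labels: {annotator_id: label}; annotator_accuracy: {annotator_id: float in (0, 1]}."""
    scores = defaultdict(float)
    for annotator, label in task_labels.items():
        # Unseen annotators fall back to a neutral weight of 0.5.
        scores[label] += annotator_accuracy.get(annotator, 0.5)
    return max(scores, key=scores.get)

# Annotator "a1" has a strong history, so its vote outweighs two weaker disagreeing votes.
print(weighted_majority_vote({"a1": "cat", "a2": "dog", "a3": "dog"},
                             {"a1": 0.95, "a2": 0.40, "a3": 0.45}))
```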
1 code implementation • 5 Sep 2023 • Renyu Zhu, Chengcheng Han, Yong Qian, Qiushi Sun, Xiang Li, Ming Gao, Xuezhi Cao, Yunsen Xian
To address these issues, we propose MuSE, a novel exchanging-based multimodal fusion model for text and vision built on the Transformer.
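To give a flavor of the exchanging idea, the minimal sketch below swaps a fraction of feature channels between the text and vision streams before a shared Transformer layer; the exchange criterion, ratio, and module layout are assumptions for illustration, not the paper's exact architecture:

```python
# Illustrative exchanging-based text-vision fusion step (assumed design, not MuSE itself).
import torch
import torch.nn as nn

class ExchangeFusion(nn.Module):
    def __init__(self, dim: int, exchange_ratio: float = 0.25):
        super().__init__()
        self.exchange_ratio = exchange_ratio
        # A shared Transformer encoder layer processes both modalities after the exchange.
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

    def forward(self, text: torch.Tensor, vision: torch.Tensor):
        # text, vision: (batch, seq_len, dim)
        k = int(text.size(-1) * self.exchange_ratio)
        text_ex, vision_ex = text.clone(), vision.clone()
        # Exchange the first k feature channels between the two modalities.
        text_ex[..., :k], vision_ex[..., :k] = vision[..., :k], text[..., :k]
        return self.encoder(text_ex), self.encoder(vision_ex)

t, v = torch.randn(2, 16, 64), torch.randn(2, 16, 64)
t_out, v_out = ExchangeFusion(64)(t, v)
```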
1 code implementation • 28 Jul 2023 • Renyu Zhu, Haoyu Liu, Runze Wu, Minmin Lin, Tangjie Lv, Changjie Fan, Haobo Wang
In this paper, we investigate the problem of learning with noisy labels in real-world annotation scenarios, where noise can be categorized into two types: factual noise and ambiguity noise.
no code implementations • 17 May 2023 • Chengcheng Han, Liqing Cui, Renyu Zhu, Jianing Wang, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
In this paper, we introduce gradient descent into the black-box tuning scenario through knowledge distillation.
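The general idea can be sketched as follows: train a small white-box student to mimic the probabilities returned by a black-box model, so that gradient signals become available on the student side. All names and the training loop below are hypothetical and only illustrate the distillation principle, not the paper's algorithm:

```python
# Hedged sketch: distilling a black-box model's outputs into a differentiable student.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
TEACHER_W = torch.randn(32, 3)  # hidden weights of the simulated black box

def query_black_box(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for a black-box API that only returns output probabilities."""
    with torch.no_grad():
        return F.softmax(x @ TEACHER_W, dim=-1)

student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    x = torch.randn(16, 32)                  # queries sent to the black box
    teacher_probs = query_black_box(x)       # no gradients flow through the API
    student_logits = student(x)
    # KL distillation loss makes the differentiable student imitate the black box.
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1), teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```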
1 code implementation • 14 May 2023 • Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jingyang Gong, Xiang Li, Ming Gao
Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks.
1 code implementation • 14 Feb 2023 • Chengcheng Han, Renyu Zhu, Jun Kuang, FengJiao Chen, Xiang Li, Ming Gao, Xuezhi Cao, Wei Wu
We design an improved triplet network to map samples and prototype vectors into a low-dimensional space that is easier to classify, and propose an adaptive margin for each entity type.
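A minimal way to picture a per-type adaptive margin is a triplet loss whose margin is looked up (and learned) per entity type; the parametrization below is an assumption for illustration rather than the paper's exact formulation:

```python
# Hedged sketch: triplet loss with a learnable margin per entity type (assumed form).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveMarginTriplet(nn.Module):
    def __init__(self, num_types: int, init_margin: float = 0.5):
        super().__init__()
        # One learnable margin parameter per entity type.
        self.margins = nn.Parameter(torch.full((num_types,), init_margin))

    def forward(self, anchor, positive, negative, type_ids):
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        margin = F.softplus(self.margins[type_ids])  # keep margins positive
        return F.relu(d_pos - d_neg + margin).mean()

loss_fn = AdaptiveMarginTriplet(num_types=5)
a, p, n = (torch.randn(8, 64) for _ in range(3))
loss = loss_fn(a, p, n, torch.randint(0, 5, (8,)))
```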
1 code implementation • 7 Oct 2022 • Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, Ming Gao
Several probing methods have been applied to interpret these models.
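As a reminder of what probing typically means here (a generic sketch, not the specific probes used in this paper), one freezes the pre-trained representations and trains a lightweight classifier on top to test what information they encode:

```python
# Hedged sketch of a linear probe over frozen representations (placeholder data).
import torch
import torch.nn as nn

embeddings = torch.randn(500, 768)       # frozen representations from some CodePTM (synthetic here)
labels = torch.randint(0, 4, (500,))     # a synthetic structural property to probe

probe = nn.Linear(768, 4)                # the probe itself stays deliberately simple
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(probe(embeddings), labels)
    loss.backward()
    opt.step()

accuracy = (probe(embeddings).argmax(dim=-1) == labels).float().mean()
```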
1 code implementation • 15 May 2022 • Xiang Li, Renyu Zhu, Yao Cheng, Caihua Shan, Siqiang Luo, Dongsheng Li, Weining Qian
Further, homophilous nodes that fall outside the neighborhood are ignored during information aggregation.
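To make the issue concrete, the sketch below augments standard neighborhood aggregation with a few globally similar (potentially homophilous) non-neighbors; the cosine-similarity criterion and mixing weight are assumptions for illustration, not the paper's design:

```python
# Hedged sketch: mixing neighbor aggregation with top-k feature-similar non-neighbors.
import torch

def aggregate_with_global_neighbors(x, adj, k=5, alpha=0.5):
    # x: (N, d) node features; adj: (N, N) binary adjacency matrix
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    local = adj @ x / deg                                   # mean over graph neighbors
    sim = torch.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)
    mask = adj.bool() | torch.eye(len(x), dtype=torch.bool)  # exclude neighbors and self
    sim = sim.masked_fill(mask, float("-inf"))
    topk = sim.topk(k, dim=1).indices                        # most similar non-neighbors
    global_ = x[topk].mean(dim=1)                            # mean over those nodes
    return alpha * local + (1 - alpha) * global_

x = torch.randn(100, 16)
adj = (torch.rand(100, 100) < 0.05).float()
adj = ((adj + adj.T) > 0).float()
out = aggregate_with_global_neighbors(x, adj)
```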
1 code implementation • ACL 2022 • Renyu Zhu, Lei Yuan, Xiang Li, Ming Gao, Wenyuan Cai
In this paper, we consider human behaviors and propose the PGNN-EK model that consists of two main components.
no code implementations • 11 Dec 2021 • Renyu Zhu, Dongxiang Zhang, Chengcheng Han, Ming Gao, Xuesong Lu, Weining Qian, Aoying Zhou
More specifically, we construct a bipartite graph for programming problem embedding, design an improved pre-training model, PLCodeBERT, for code embedding, and build a double-sequence RNN model with exponential decay attention for effective feature fusion.
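One way to picture exponential decay attention is to down-weight attention scores by the time gap to the most recent step; the exact decay form used in the paper may differ, so treat the following as an assumption-laden sketch:

```python
# Hedged sketch: attention scores penalized by an exponential decay over the time gap.
import torch
import torch.nn as nn

class ExpDecayAttention(nn.Module):
    def __init__(self, dim: int, decay: float = 0.1):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.decay = decay

    def forward(self, hidden: torch.Tensor):
        # hidden: (batch, steps, dim); attend from the last step over all steps.
        q = self.query(hidden[:, -1:, :])                          # (batch, 1, dim)
        scores = (q @ hidden.transpose(1, 2)).squeeze(1)           # (batch, steps)
        steps = hidden.size(1)
        gap = torch.arange(steps - 1, -1, -1, dtype=hidden.dtype)  # time gap to the last step
        weights = torch.softmax(scores - self.decay * gap, dim=-1)
        return (weights.unsqueeze(-1) * hidden).sum(dim=1)         # (batch, dim)

h = torch.randn(4, 10, 32)
ctx = ExpDecayAttention(32)(h)
```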
1 code implementation • 29 Nov 2020 • Na Li, Renyu Zhu, Xiaoxu Zhou, Xiangnan He, Wenyuan Cai, Ming Gao, Aoying Zhou
In this paper, we model author disambiguation as a collaboration network reconstruction problem and propose IUAD, an incremental and unsupervised author disambiguation method that operates in a bottom-up manner.