no code implementations • 23 Sep 2024 • Anran Li, YuanYuan Chen, Chao Ren, Wenhan Wang, Ming Hu, Tianlin Li, Han Yu, Qingyu Chen
For privacy-preserving graph learning tasks involving distributed graph datasets, federated learning (FL)-based GCN (FedGCN) training is required.
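As a rough illustration of the aggregation step such a federated GCN setup relies on, the sketch below shows FedAvg-style weighted averaging of locally trained GCN parameters. The parameter name, client count, and weighting are hypothetical and are not taken from the paper.

```python
# Minimal FedAvg-style aggregation sketch (illustrative, not the paper's code).
# Each client trains a local GCN on its private subgraph; the server averages
# the resulting parameters, weighted by the number of local training nodes.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: list of dicts {param_name: np.ndarray};
    client_sizes: number of training nodes per client."""
    total = float(sum(client_sizes))
    averaged = {}
    for name in client_weights[0]:
        averaged[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return averaged

# Toy usage: two clients with a single GCN weight matrix each.
clients = [{"gcn.weight": np.ones((4, 4))}, {"gcn.weight": np.zeros((4, 4))}]
global_weights = federated_average(clients, client_sizes=[30, 10])
print(global_weights["gcn.weight"][0, 0])  # 0.75 = 30/40 * 1 + 10/40 * 0
```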
no code implementations • 3 Sep 2024 • Qingxuan Lv, Junyu Dong, Yuezun Li, Sheng Chen, Hui Yu, Shu Zhang, Wenhan Wang
To enable further advances in underwater stereo matching, we introduce a large synthetic dataset called UWStereo.
1 code implementation • 20 Mar 2024 • Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu
It boasts superiority over existing backdoor injection techniques in several areas: (1) Practicality: BadEdit necessitates only a minimal dataset for injection (15 samples).
no code implementations • 22 Dec 2023 • Ze Yu Zhao, Zheng Zhu, Guilin Li, Wenhan Wang, Bo Wang
In this work, we introduce an innovative autoregressive model leveraging Generative Pre-trained Transformer (GPT) architectures, tailored for fraud detection in payment systems.
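To make the idea concrete, here is a minimal sketch of a GPT-style causal Transformer over tokenized transaction events; the vocabulary size, dimensions, and class name are assumptions for illustration, not the paper's model.

```python
# Sketch of a small GPT-style autoregressive model over tokenized transaction
# events (hypothetical vocabulary/field encoding, not the paper's architecture).
import torch
import torch.nn as nn

class TinyTransactionGPT(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)  # next-event prediction

    def forward(self, ids):
        b, t = ids.shape
        x = self.tok(ids) + self.pos(torch.arange(t, device=ids.device))
        # Causal mask: each position may only attend to earlier transactions.
        causal = torch.triu(torch.full((t, t), float("-inf"), device=ids.device), diagonal=1)
        h = self.blocks(x, mask=causal)
        return self.head(h)  # logits over the next transaction token

model = TinyTransactionGPT()
logits = model(torch.randint(0, 1000, (2, 16)))  # (batch=2, seq=16, vocab)
# Unusually low likelihood of an account's observed events can then flag fraud.
```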
no code implementations • 20 May 2023 • Wei Ma, Shangqing Liu, ZhiHao Lin, Wenhan Wang, Qiang Hu, Ye Liu, Cen Zhang, Liming Nie, Li Li, Yang Liu
We break down the abilities needed for artificial intelligence (AI) models to address SE tasks related to code analysis into three categories: 1) syntax understanding, 2) static behavior understanding, and 3) dynamic behavior understanding.
no code implementations • 20 Dec 2022 • Wei Ma, Shangqing Liu, Mengjie Zhao, Xiaofei Xie, Wenhan Wang, Qiang Hu, Jie Zhang, Yang Liu
These structures are fundamental to understanding code.
1 code implementation • 18 Aug 2022 • Wenhan Wang, Kechi Zhang, Ge Li, Shangqing Liu, Anran Li, Zhi Jin, Yang Liu
Learning vector representations for programs is a critical step in applying deep learning techniques for program understanding tasks.
1 code implementation • NeurIPS 2021 • Han Peng, Ge Li, Wenhan Wang, YunFei Zhao, Zhi Jin
Learning distributed representation of source code requires modelling its syntax and semantics.
no code implementations • 8 Dec 2020 • Kechi Zhang, Wenhan Wang, Huangzhao Zhang, Ge Li, Zhi Jin
To incorporate node and edge type information, we bring the idea of heterogeneous graphs to learning on source code and present a new way of building heterogeneous program graphs from ASTs with additional type information for nodes and edges.
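A minimal sketch of the idea, using Python's `ast` module as a stand-in for the ASTs targeted in the paper: every node is tagged with its AST class as its node type, and every edge with the child field name as its edge type.

```python
# Sketch: build a heterogeneous program graph from a Python AST, recording a
# type for every node (its AST class) and for every edge (the child field name).
# Illustrative only; the paper's graph construction and languages may differ.
import ast

def ast_to_heterogeneous_graph(source):
    tree = ast.parse(source)
    nodes, edges = [], []  # nodes: (id, node_type); edges: (src, dst, edge_type)
    ids = {}
    for node in ast.walk(tree):
        ids[id(node)] = len(nodes)
        nodes.append((len(nodes), type(node).__name__))
    for node in ast.walk(tree):
        for field, value in ast.iter_fields(node):
            children = value if isinstance(value, list) else [value]
            for child in children:
                if isinstance(child, ast.AST):
                    edges.append((ids[id(node)], ids[id(child)], field))
    return nodes, edges

nodes, edges = ast_to_heterogeneous_graph("def add(a, b):\n    return a + b\n")
print(nodes[:3], edges[:3])
```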
no code implementations • 18 Sep 2020 • Wenhan Wang, Sijie Shen, Ge Li, Zhi Jin
In this paper, we take a further step and discuss the possibility of directly completing a whole line of code instead of a single token.
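The sketch below shows the basic full-line decoding loop such a setup implies: keep generating tokens autoregressively until an end-of-line token appears. The tiny bigram "model" exists only to make the example run and is not the paper's model.

```python
# Full-line completion sketch: sample tokens one by one until end-of-line.
def complete_line(prefix_tokens, next_token, eol="\n", max_tokens=32):
    line = []
    while len(line) < max_tokens:
        token = next_token(prefix_tokens + line)  # greedy next-token choice
        if token == eol:
            break
        line.append(token)
    return line

# Toy "model": a hand-written bigram lookup, purely for illustration.
BIGRAMS = {"return": "a", "a": "+", "+": "b", "b": "\n"}
toy_model = lambda toks: BIGRAMS.get(toks[-1], "\n")
print(complete_line(["return"], toy_model))  # ['a', '+', 'b']
```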
1 code implementation • 20 Feb 2020 • Wenhan Wang, Ge Li, Bo Ma, Xin Xia, Zhi Jin
To the best of our knowledge, we are the first to apply graph neural networks to code clone detection.
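For intuition, the following siamese-style sketch encodes two program graphs with a shared one-hop mean message-passing step and compares the pooled embeddings by cosine similarity; a trained graph neural network would replace this hand-rolled encoder, and the features here are random placeholders.

```python
# Siamese-style sketch of graph-based clone detection: encode both program
# graphs with a shared encoder, pool node states, and compare the embeddings.
import numpy as np

def encode_graph(node_feats, edges):
    """node_feats: (N, d) array; edges: list of (src, dst) pairs."""
    h = node_feats.copy()
    msg = np.zeros_like(h)
    deg = np.ones(len(h))
    for s, d in edges:            # one round of mean message passing
        msg[d] += h[s]
        deg[d] += 1
    h = (h + msg) / deg[:, None]
    return h.mean(axis=0)         # graph-level embedding

def clone_score(g1, g2):
    a, b = encode_graph(*g1), encode_graph(*g2)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

g = (np.random.rand(5, 8), [(0, 1), (1, 2), (2, 3), (3, 4)])
print(clone_score(g, g))  # identical graphs score ~1.0
```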
no code implementations • 25 Sep 2019 • Minjia Zhang, Wenhan Wang, Yuxiong He
This paper studies similarity search, which is a crucial enabler of many feature-vector-based applications.
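As a baseline for what approximate nearest-neighbor indexes are built to accelerate, here is exact brute-force cosine-similarity search over a feature-vector database; the data and dimensions are made up for illustration.

```python
# Exact top-k cosine-similarity search: the brute-force baseline that
# approximate nearest-neighbor (ANN) indexes are designed to speed up.
import numpy as np

def top_k(query, database, k=5):
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = db @ q                       # cosine similarity against every vector
    return np.argsort(-scores)[:k], np.sort(scores)[::-1][:k]

db = np.random.rand(10000, 128).astype(np.float32)
ids, scores = top_k(db[42], db, k=3)
print(ids)                                # index 42 should rank first
```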
no code implementations • NeurIPS 2018 • Minjia Zhang, Xiaodong Liu, Wenhan Wang, Jianfeng Gao, Yuxiong He
Neural language models (NLMs) have recently gained a renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks.
no code implementations • ICLR 2018 • Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li
This work aims to learn structurally sparse Long Short-Term Memory (LSTM) networks by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states, and outputs.
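A minimal sketch of the kind of structured-sparsity regularizer involved: a group-lasso penalty over per-hidden-unit column groups of the recurrent weight matrix, which pushes whole units toward zero. The exact grouping used in the paper differs; this is only illustrative.

```python
# Group-lasso sketch over per-hidden-unit weight groups of an LSTM: shrinking a
# whole group to zero removes that structure. Illustrative grouping only.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

def group_lasso(lstm_module, lam=1e-3):
    penalty = 0.0
    for name, w in lstm_module.named_parameters():
        if "weight_hh" in name:                 # shape (4*hidden, hidden)
            # One group per hidden unit: the column feeding that unit into all gates.
            penalty = penalty + w.norm(dim=0).sum()
    return lam * penalty

x = torch.randn(8, 20, 32)
out, _ = lstm(x)
loss = out.pow(2).mean() + group_lasso(lstm)  # task loss + structured-sparsity term
loss.backward()
```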