Search Results for author: Wenhan Wang

Found 14 papers, 4 papers with code

Federated Graph Learning with Adaptive Importance-based Sampling

no code implementations • 23 Sep 2024 • Anran Li, YuanYuan Chen, Chao Ren, Wenhan Wang, Ming Hu, Tianlin Li, Han Yu, Qingyu Chen

For privacy-preserving graph learning tasks involving distributed graph datasets, federated learning (FL)-based GCN (FedGCN) training is required.

Federated Learning • Graph Sampling • +1

UWStereo: A Large Synthetic Dataset for Underwater Stereo Matching

no code implementations • 3 Sep 2024 • Qingxuan Lv, Junyu Dong, Yuezun Li, Sheng Chen, Hui Yu, Shu Zhang, Wenhan Wang

To enable further advances in underwater stereo matching, we introduce a large synthetic dataset called UWStereo.

Stereo Matching

BadEdit: Backdooring large language models by model editing

1 code implementation • 20 Mar 2024 • Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu

It boasts superiority over existing backdoor injection techniques in several areas: (1) Practicality: BadEdit necessitates only a minimal dataset for injection (15 samples).

Backdoor Attack • Knowledge Editing

Generative Pretraining at Scale: Transformer-Based Encoding of Transactional Behavior for Fraud Detection

no code implementations • 22 Dec 2023 • Ze Yu Zhao, Zheng Zhu, Guilin Li, Wenhan Wang, Bo Wang

In this work, we introduce an innovative autoregressive model leveraging Generative Pretrained Transformer (GPT) architectures, tailored for fraud detection in payment systems.

Anomaly Detection • Fraud Detection

LMs: Understanding Code Syntax and Semantics for Code Analysis

no code implementations • 20 May 2023 • Wei Ma, Shangqing Liu, ZhiHao Lin, Wenhan Wang, Qiang Hu, Ye Liu, Cen Zhang, Liming Nie, Li Li, Yang Liu

We break down the abilities needed for artificial intelligence (AI) models to address SE tasks related to code analysis into three categories: 1) syntax understanding, 2) static behavior understanding, and 3) dynamic behavior understanding.

Learning Program Representations with a Tree-Structured Transformer

1 code implementation • 18 Aug 2022 • Wenhan Wang, Kechi Zhang, Ge Li, Shangqing Liu, Anran Li, Zhi Jin, Yang Liu

Learning vector representations for programs is a critical step in applying deep learning techniques for program understanding tasks.

Representation Learning

Learning to Represent Programs with Heterogeneous Graphs

no code implementations • 8 Dec 2020 • Kechi Zhang, Wenhan Wang, Huangzhao Zhang, Ge Li, Zhi Jin

To capture node and edge type information, we bring the idea of heterogeneous graphs to learning on source code and present a new formulation for building heterogeneous program graphs from ASTs with additional type information for nodes and edges (a minimal sketch follows this entry).

Code Comment Generation • Comment Generation
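
The entry above builds typed program graphs from ASTs. The following is a minimal illustrative sketch of that general idea, not the paper's construction: it uses Python's standard ast module, labels each node with its AST node type, and uses two assumed edge types, "child" and "next_sibling". A heterogeneous GNN would then embed nodes and edges according to these type labels; the actual node and edge vocabulary used in the paper may differ.

    import ast

    def build_heterogeneous_graph(source: str):
        """Return (nodes, edges): nodes map ids to AST node type names,
        edges are (src, dst, edge_type) triples."""
        tree = ast.parse(source)
        nodes = {}   # node id -> AST node type name
        edges = []   # (src id, dst id, edge type)
        for parent in ast.walk(tree):
            nodes.setdefault(id(parent), type(parent).__name__)
            children = list(ast.iter_child_nodes(parent))
            for child in children:
                nodes.setdefault(id(child), type(child).__name__)
                edges.append((id(parent), id(child), "child"))
            # sibling edges add a second, order-aware edge type
            for left, right in zip(children, children[1:]):
                edges.append((id(left), id(right), "next_sibling"))
        return nodes, edges

    nodes, edges = build_heterogeneous_graph("def add(a, b):\n    return a + b\n")
    print(len(nodes), "typed nodes,", len(edges), "typed edges")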

Towards Full-line Code Completion with Neural Language Models

no code implementations • 18 Sep 2020 • Wenhan Wang, Sijie Shen, Ge Li, Zhi Jin

In this paper, we take a further step and discuss the possibility of directly completing a whole line of code instead of a single token.

Code Completion

Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree

1 code implementation • 20 Feb 2020 • Wenhan Wang, Ge Li, Bo Ma, Xin Xia, Zhi Jin

To the best of our knowledge, we are the first to apply graph neural networks to code clone detection.

Clone Detection

Learning to Anneal and Prune Proximity Graphs for Similarity Search

no code implementations • 25 Sep 2019 • Minjia Zhang, Wenhan Wang, Yuxiong He

This paper studies similarity search, which is a crucial enabler of many feature-vector-based applications.

Stochastic Optimization

Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models

no code implementations • NeurIPS 2018 • Minjia Zhang, Xiaodong Liu, Wenhan Wang, Jianfeng Gao, Yuxiong He

Neural language models (NLMs) have recently gained renewed interest by achieving state-of-the-art performance across many natural language processing (NLP) tasks.

Decoder • Language Modelling • +2

Learning Intrinsic Sparse Structures within Long Short-Term Memory

no code implementations • ICLR 2018 • Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li

This work aims to learn structurally sparse Long Short-Term Memory (LSTM) networks by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs (a minimal sketch follows this entry).

Language Modelling • Model Compression • +1
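
The entry above removes whole structures inside LSTM units rather than individual weights. The sketch below is a rough illustration only (my own simplification, not the paper's Intrinsic Sparse Structures formulation): a group-Lasso style penalty in PyTorch that groups weights by hidden unit, so that driving one group toward zero removes that unit from all four gates and from the recurrent connections at once. The regularization weight lam mentioned in the usage comment is a hypothetical hyperparameter.

    import torch
    import torch.nn as nn

    def per_unit_group_lasso(lstm: nn.LSTM) -> torch.Tensor:
        """Sum of L2 norms of per-hidden-unit weight groups (group Lasso)."""
        hidden = lstm.hidden_size
        w_ih = lstm.weight_ih_l0            # shape: (4 * hidden, input_size)
        w_hh = lstm.weight_hh_l0            # shape: (4 * hidden, hidden)
        penalty = torch.zeros((), dtype=w_ih.dtype)
        for h in range(hidden):
            rows = [h + k * hidden for k in range(4)]   # the same unit in all 4 gate blocks
            group = torch.cat([
                w_ih[rows, :].reshape(-1),              # input-to-hidden rows
                w_hh[rows, :].reshape(-1),              # hidden-to-hidden rows
                w_hh[:, h].reshape(-1),                 # outgoing recurrent column
            ])
            penalty = penalty + group.norm(p=2)
        return penalty

    # usage: add lam * per_unit_group_lasso(lstm) to the task loss before backward()
    lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=1)
    print(per_unit_group_lasso(lstm).item())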
