Search Results for author: Yanbin Wang

Found 5 papers, 2 papers with code

Ethereum Fraud Detection via Joint Transaction Language Model and Graph Representation Learning

no code implementations • 9 Sep 2024 • Yifan Jia, Yanbin Wang, Jianguo Sun, Yiwei Liu, Zhang Sheng, Ye Tian

To address these challenges, we propose TLMG4Eth, which combines a transaction language model with graph-based methods to capture the semantic, similarity, and structural features of Ethereum transaction data.
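As an illustration of the general idea the abstract describes (fusing language-model embeddings of transactions with graph-structural features of accounts), the sketch below concatenates a semantic embedding with one mean-aggregation message-passing step over a toy transaction graph. This is a generic fusion pattern, not the TLMG4Eth architecture; all shapes and the aggregation rule are assumptions for illustration.

```python
import numpy as np

def fuse_features(text_emb, adj, node_feats):
    """Concatenate language-model embeddings with one mean-aggregation
    message-passing step over the transaction graph (illustrative only)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    struct_emb = adj @ node_feats / deg               # average of neighbors
    return np.concatenate([text_emb, struct_emb], axis=1)

rng = np.random.default_rng(0)
n, d_text, d_node = 4, 8, 6
text_emb = rng.normal(size=(n, d_text))   # stand-in for transaction LM output
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=float)  # toy account graph
node_feats = rng.normal(size=(n, d_node))    # per-account structural features
fused = fuse_features(text_emb, adj, node_feats)
print(fused.shape)  # (4, 14): semantic and structural parts side by side
```

The fused vectors would then feed a downstream fraud classifier; the paper's actual combination of the two feature families may differ.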

Attribute · Fraud Detection +4

The Role of Transformer Models in Advancing Blockchain Technology: A Systematic Survey

no code implementations • 2 Sep 2024 • Tianxu Liu, Yanbin Wang, Jianguo Sun, Ye Tian, Yanyu Huang, Tao Xue, Peiyue Li, Yiwei Liu

As blockchain technology rapidly evolves, the demand for enhanced efficiency, security, and scalability grows. Transformer models, as powerful deep learning architectures, have shown unprecedented potential in addressing various blockchain challenges.

Anomaly Detection

Edge Computing for IoT: Novel Insights from a Comparative Analysis of Access Control Models

no code implementations • 13 May 2024 • Tao Xue, Ying Zhang, Yanbin Wang, Wenbo Wang, Shuailou Li, Haibin Zhang

IoT edge computing positions computing resources closer to the data sources to reduce latency, relieve bandwidth pressure on the cloud, and enhance data security.

Autonomous Vehicles · Cloud Computing +1

Struggle with Adversarial Defense? Try Diffusion

1 code implementation • 12 Apr 2024 • Yujie Li, Yanbin Wang, Haitao Xu, Bin Liu, Jianguo Sun, Zhenhao Guo, Wenrui Ma

Unlike data-driven classifiers, TMDC, guided by Bayesian principles, uses the conditional likelihood from diffusion models to determine the class probabilities of input images, thereby insulating it against data shift and the limitations of adversarial training.
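The Bayes-rule step the abstract describes can be sketched generically: per-class conditional log-likelihoods log p(x|y) are combined with priors to yield class probabilities. In TMDC those likelihoods would come from a diffusion model; here they are placeholder values, and the function name and uniform prior are assumptions for illustration.

```python
import numpy as np

def bayes_class_probs(log_likelihoods, log_priors):
    """Turn per-class log p(x|y) plus log p(y) into p(y|x) via Bayes' rule,
    normalized with the log-sum-exp trick for numerical stability."""
    logits = np.asarray(log_likelihoods) + np.asarray(log_priors)
    logits -= logits.max()          # stabilize before exponentiating
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy example: three classes, uniform prior, placeholder likelihoods
# standing in for a diffusion model's conditional likelihood estimates.
log_lik = [-10.2, -9.1, -12.5]
log_prior = np.log(np.full(3, 1.0 / 3.0))
probs = bayes_class_probs(log_lik, log_prior)
print(probs)  # highest probability on class 1, the most likely class
```

Because classification reduces to comparing likelihoods rather than a learned decision boundary, the prediction depends only on the generative model's density estimates, which is the robustness argument the abstract makes.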

Adversarial Defense · Adversarial Robustness

URLBERT: A Contrastive and Adversarial Pre-trained Model for URL Classification

1 code implementation • 18 Feb 2024 • Yujie Li, Yanbin Wang, Haitao Xu, Zhenhao Guo, Zheng Cao, Lun Zhang

To address this gap, this paper introduces URLBERT, the first pre-trained representation learning model applied to a variety of URL classification or detection tasks.

Contrastive Learning · Multi-Task Learning +1
