Search Results for author: Xuyun Zhang

Found 11 papers, 7 papers with code

FedLPS: Heterogeneous Federated Learning for Multiple Tasks with Local Parameter Sharing

1 code implementation • 13 Feb 2024 Yongzhe Jia, Xuyun Zhang, Amin Beheshti, Wanchun Dou

FedLPS leverages principles from transfer learning to facilitate the deployment of multiple tasks on a single device by dividing the local model into a shareable encoder and task-specific encoders.

Edge-computing Federated Learning +1
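The core idea of splitting each local model into a shareable encoder and task-specific parts can be sketched as follows. This is a minimal toy illustration, not FedLPS itself: the matrix shapes, the `federated_average_encoders` helper, and the plain-averaging aggregation are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(in_dim, hid_dim, out_dims):
    """Local model split into a shareable encoder and one head per task."""
    return {
        "encoder": rng.normal(size=(in_dim, hid_dim)),
        "heads": {t: rng.normal(size=(hid_dim, d)) for t, d in out_dims.items()},
    }

def federated_average_encoders(models):
    """The server aggregates only the shareable encoder; the
    task-specific parameters never leave the device."""
    avg = np.mean([m["encoder"] for m in models], axis=0)
    for m in models:
        m["encoder"] = avg.copy()
    return models

# Two devices, each running two tasks on top of one shared encoder.
clients = [init_model(8, 4, {"taskA": 3, "taskB": 2}) for _ in range(2)]
clients = federated_average_encoders(clients)
```

After aggregation the encoders agree across clients while the per-task heads remain heterogeneous, which is what lets multiple tasks share one device-side backbone.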

Adaptive Hypergraph Network for Trust Prediction

1 code implementation • 7 Feb 2024 Rongwei Xu, Guanfeng Liu, Yan Wang, Xuyun Zhang, Kai Zheng, Xiaofang Zhou

In this paper, we propose an Adaptive Hypergraph Network for Trust Prediction (AHNTP), a novel approach that improves trust prediction accuracy by using higher-order correlations.

Contrastive Learning Decision Making
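A hypergraph captures higher-order correlations because one hyperedge can connect more than two users at once. A standard (not AHNTP-specific) propagation step over a toy incidence matrix, with made-up numbers, looks like this:

```python
import numpy as np

# Incidence matrix H: rows = users (nodes), columns = hyperedges.
# A single hyperedge can link three or more users, unlike a graph edge.
H = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
], dtype=float)

X = np.eye(4)  # toy node features

# One round of node -> hyperedge -> node averaging, degree-normalised.
De = np.diag(1.0 / H.sum(axis=0))   # hyperedge degrees
Dv = np.diag(1.0 / H.sum(axis=1))   # node degrees
X_prop = Dv @ H @ De @ H.T @ X      # features mixed across hyperedge co-members
```

Each row of `X_prop` is a convex combination of the features of users sharing a hyperedge, so repeated rounds spread trust signals along higher-order structures.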

CETN: Contrast-enhanced Through Network for CTR Prediction

1 code implementation • 15 Dec 2023 Honghao Li, Lei Sang, Yi Zhang, Xuyun Zhang, Yiwen Zhang

Click-through rate (CTR) Prediction is a crucial task in personalized information retrievals, such as industrial recommender systems, online advertising, and web search.

Click-Through Rate Prediction Contrastive Learning +1

OptIForest: Optimal Isolation Forest for Anomaly Detection

1 code implementation • 22 Jun 2023 Haolong Xiang, Xuyun Zhang, Hongsheng Hu, Lianyong Qi, Wanchun Dou, Mark Dras, Amin Beheshti, Xiaolong Xu

Extensive experiments on a series of benchmark datasets, covering comparative and ablation studies, demonstrate that our approach efficiently and robustly achieves better detection performance in general than state-of-the-art methods, including deep-learning-based ones.

Anomaly Detection Benchmarking +1
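The isolation-forest principle underlying this line of work: anomalies sit in sparse regions and are separated by random axis-aligned splits after only a few cuts. The following is a bare-bones sketch of that baseline idea, not the optimal forest construction the paper proposes; function names and parameters are illustrative.

```python
import random

def path_length(x, data, depth=0, max_depth=8):
    """Number of random splits needed before x is isolated from the data
    (or the depth cap is hit). Anomalies tend to be isolated early."""
    if depth >= max_depth or len(data) <= 1:
        return depth
    dim = random.randrange(len(x))
    lo = min(p[dim] for p in data)
    hi = max(p[dim] for p in data)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    # Keep only the points on the same side of the split as x.
    side = [p for p in data if (p[dim] < split) == (x[dim] < split)]
    return path_length(x, side, depth + 1, max_depth)

def anomaly_score(x, data, n_trees=100):
    """Average path length over a forest; shorter means more anomalous."""
    return sum(path_length(x, data) for _ in range(n_trees)) / n_trees

random.seed(0)
cluster = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
```

A far-away point such as `(10.0, 10.0)` ends up with a visibly shorter average path than a point drawn from inside the cluster.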

RePreM: Representation Pre-training with Masked Model for Reinforcement Learning

no code implementations • 3 Mar 2023 Yuanying Cai, Chuheng Zhang, Wei Shen, Xuyun Zhang, Wenjie Ruan, Longbo Huang

Inspired by the recent success of sequence modeling in RL and of masked language models for pre-training, we propose RePreM (Representation Pre-training with Masked Model), a masked model for pre-training in RL that trains the encoder, combined with transformer blocks, to predict the masked states or actions in a trajectory.

Data Augmentation Language Modelling +3
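The masking step itself is easy to picture. Below is a toy sketch of BERT-style trajectory masking, assuming a trajectory of continuous state vectors; the neighbour-mean "predictor" is a stand-in for the transformer encoder the paper actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trajectory of T states; 4-dimensional states here, purely illustrative.
T, d = 10, 4
states = rng.normal(size=(T, d))

def mask_trajectory(states, mask_ratio=0.3, rng=rng):
    """Zero out a fixed fraction of states; a pre-trained encoder is
    asked to reconstruct them from the surrounding context."""
    idx = rng.choice(len(states), size=int(mask_ratio * len(states)),
                     replace=False)
    mask = np.zeros(len(states), dtype=bool)
    mask[idx] = True
    corrupted = states.copy()
    corrupted[mask] = 0.0
    return corrupted, mask

corrupted, mask = mask_trajectory(states)

# Toy predictor: fill each masked slot with the mean of unmasked states.
pred = np.array([states[~mask].mean(axis=0) if m else s
                 for s, m in zip(corrupted, mask)])
recon_loss = float(((pred[mask] - states[mask]) ** 2).mean())
```

Training drives `recon_loss` down, which forces the encoder's representation to capture trajectory dynamics rather than single frames.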

TD3 with Reverse KL Regularizer for Offline Reinforcement Learning from Mixed Datasets

1 code implementation • 5 Dec 2022 Yuanying Cai, Chuheng Zhang, Li Zhao, Wei Shen, Xuyun Zhang, Lei Song, Jiang Bian, Tao Qin, Tie-Yan Liu

There are two challenges in this setting: 1) the optimal trade-off between optimizing the RL signal and the behavior cloning (BC) signal varies across states, because the action coverage induced by different behavior policies varies.

D4RL Offline RL +2
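The state-dependent trade-off can be sketched numerically. This toy actor loss uses a squared-distance closeness term as a stand-in for the paper's reverse-KL regularizer, and the `1/std²` weighting rule is an assumption made for illustration:

```python
import numpy as np

def actor_loss(q_value, pi_action, behavior_action, action_std):
    """Per-state trade-off between the RL signal (maximise Q) and staying
    close to the behavior policy. Wide dataset action coverage (large
    action_std) relaxes the regularizer; narrow coverage tightens it."""
    alpha = 1.0 / (action_std ** 2 + 1e-6)          # tighter where coverage is narrow
    bc_term = float(np.sum((pi_action - behavior_action) ** 2))
    return -q_value + alpha * bc_term

# The same deviation from the behavior action is penalised more heavily
# at a state where the mixed dataset's action coverage is narrow.
wide = actor_loss(1.0, np.array([0.5]), np.array([0.0]), action_std=1.0)
narrow = actor_loss(1.0, np.array([0.5]), np.array([0.0]), action_std=0.1)
```

Here `narrow > wide`, i.e. the policy is kept closer to the data exactly where deviating is riskiest.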

A Transformer-Based User Satisfaction Prediction for Proactive Interaction Mechanism in DuerOS

no code implementations • 5 Dec 2022 Wei Shen, Xiaonan He, Chuheng Zhang, Xuyun Zhang, Jian Xie

Moreover, they are trained and evaluated on benchmark datasets with adequate labels, which are expensive to obtain in a commercial dialogue system.

Spoken Dialogue Systems

Source Inference Attacks in Federated Learning

1 code implementation • 13 Sep 2021 Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Xuyun Zhang

However, existing MIAs ignore the source of a training member, i.e., the information of which client owns the training member, even though it is essential to explore source privacy in FL beyond the membership privacy of examples from all clients.

Federated Learning Inference Attack
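A loss-based source inference can be sketched in a few lines. This toy version assumes the attacker can evaluate each client's local model on the target record; among clients, the owner tends to incur the smallest loss, because local training fits local data best. The function name and the example losses are illustrative.

```python
import numpy as np

def source_inference(per_client_losses):
    """Attribute a known training record to the client whose local model
    has the smallest prediction loss on it."""
    return int(np.argmin(per_client_losses))

# Toy per-client losses on one target record: client 2 fits it best,
# so it is inferred to be the record's source.
losses = [1.32, 0.97, 0.05, 1.10]
guess = source_inference(losses)
```

This goes beyond membership inference: it leaks not just *that* a record was used in training, but *which* client contributed it.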

Diversity-aware Web APIs Recommendation with Compatibility Guarantee

no code implementations • 10 Aug 2021 Wenwen Gong, Yulan Zhang, Xuyun Zhang, Yucong Duan, Yawei Wang, Yifei Chen, Lianyong Qi

Afterwards, with the diverse correlation subgraphs, we model the compatible web API recommendation problem as a minimum group Steiner tree search problem.

Membership Inference Attacks on Machine Learning: A Survey

2 code implementations • 14 Mar 2021 Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang

In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.

BIG-bench Machine Learning Fairness +4
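One of the simplest attacks this survey covers is confidence thresholding, which exploits the fact that models are typically more confident on training members than on unseen examples. A minimal sketch with made-up softmax outputs:

```python
import numpy as np

def membership_attack(softmax_probs, threshold=0.9):
    """Predict 'member' whenever the model's top softmax probability on an
    example exceeds the threshold, on the premise that overfit models are
    over-confident on their own training data."""
    return softmax_probs.max(axis=1) > threshold

# Toy outputs: the first example looks memorised, the second does not.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25]])
verdict = membership_attack(probs)
```

Stronger attacks in the literature replace the fixed threshold with shadow models or per-example calibrated tests, but the signal they exploit is the same.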
