no code implementations • 22 Oct 2023 • Zuoli Tang, ZhaoXin Huan, Zihao Li, Xiaolu Zhang, Jun Hu, Chilin Fu, Jun Zhou, Chenliang Li
We expect that by mixing users' behaviors across different domains, we can exploit the common knowledge encoded in the pre-trained language model to alleviate the data sparsity and cold-start problems.
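A minimal sketch of this idea, assuming a HuggingFace-style text encoder (the model name, separator, and pooling below are illustrative assumptions, not the paper's setup): behaviors from several domains are flattened into one text sequence and encoded by a pre-trained language model to obtain a user representation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical cross-domain behavior histories for one user.
behaviors = {
    "books":  ["The Hobbit", "Dune"],
    "movies": ["Inception", "Arrival"],
}
# Flatten all domains into a single text sequence.
text = " [SEP] ".join(f"{d}: {', '.join(v)}" for d, v in behaviors.items())

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    out = encoder(**tokenizer(text, return_tensors="pt"))
user_emb = out.last_hidden_state.mean(dim=1)   # mean-pooled user representation
```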
no code implementations • 9 Oct 2023 • Chan Wu, Hanxiao Zhang, Lin Ju, Jinjing Huang, Youshao Xiao, ZhaoXin Huan, Siyuan Li, Fanzhuang Meng, Lei Liang, Xiaolu Zhang, Jun Zhou
In this paper, we rethink the impact of memory consumption and communication costs on the training speed of large language models, and propose a memory-communication balanced strategy set, the Partial Redundancy Optimizer (PaRO).
no code implementations • 31 Aug 2023 • ZhaoXin Huan, Ke Ding, Ang Li, Xiaolu Zhang, Xu Min, Yong He, Liang Zhang, Jun Zhou, Linjian Mo, Jinjie Gu, Zhongyi Liu, Wenliang Zhong, Guannan Zhang
3) AntM$^{2}$C provides 1 billion CTR samples with 200 features, including 200 million users and 6 million items.
1 code implementation • 27 Mar 2023 • Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou
This will bring two big challenges to existing dynamic GNN methods: (i) how to dynamically propagate appropriate information in an open temporal graph, where new-class nodes are often linked to old-class nodes.
no code implementations • 28 May 2022 • Shih-Han Chan, Yinpeng Dong, Jun Zhu, Xiaolu Zhang, Jun Zhou
We propose four kinds of backdoor attacks for the object detection task: 1) Object Generation Attack: a trigger can falsely generate an object of the target class; 2) Regional Misclassification Attack: a trigger can change the prediction of a surrounding object to the target class; 3) Global Misclassification Attack: a single trigger can change the predictions of all objects in an image to the target class; and 4) Object Disappearance Attack: a trigger can make the detector fail to detect the object of the target class.
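For illustration only, a hypothetical helper showing the mechanism shared by all four attack types: a small trigger patch is stamped into the image, and the poisoned annotation then determines the attack variant. This is a sketch, not the paper's actual triggers or poisoning procedure.

```python
import numpy as np

def paste_trigger(image, trigger, x0, y0):
    """Stamp a small trigger patch into an image (hypothetical helper).
    The poisoned label/annotation then determines the attack type, e.g. a
    fake box of the target class (generation) or a relabeled nearby object
    (regional misclassification)."""
    h, w = trigger.shape[:2]
    poisoned = image.copy()
    poisoned[y0:y0 + h, x0:x0 + w] = trigger
    return poisoned
```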
no code implementations • 29 Sep 2021 • Yang Li, Yichuan Mo, Liangliang Shi, Junchi Yan, Xiaolu Zhang, Jun Zhou
Although many efforts have been made in terms of backbone architecture design, loss functions, and training techniques, few results have been obtained on how sampling in the latent space affects the final performance, and existing work on the latent space mainly focuses on controllability.
no code implementations • CVPR 2021 • Zihao Xiao, Xianfeng Gao, Chilin Fu, Yinpeng Dong, Wei Gao, Xiaolu Zhang, Jun Zhou, Jun Zhu
However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns about the real-world applications of these models.
1 code implementation • 10 Jun 2021 • Jiawei Zhang, Linyi Li, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li
In this paper, we show that such efficiency highly depends on the scale at which the attack is applied, and attacking at the optimal scale significantly improves the efficiency.
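A rough sketch of what "attacking at a lower scale" can look like in practice (not the paper's exact algorithm; the helper name and parameters are assumptions): perturbation directions are sampled at a coarse resolution and upsampled to the image size before querying the model.

```python
import torch
import torch.nn.functional as F

def coarse_direction(img_size, scale=0.25, channels=3):
    """Sample a random perturbation direction at a coarse scale and upsample
    it to the image resolution, restricting the attack to a lower scale."""
    h, w = img_size
    coarse = torch.randn(1, channels, max(int(h * scale), 1), max(int(w * scale), 1))
    d = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
    return d / d.flatten(1).norm(dim=1).view(-1, 1, 1, 1)   # unit-norm direction
```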
1 code implementation • 25 Feb 2021 • Huichen Li, Linyi Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li
We aim to bridge the gap between the two by investigating how to efficiently estimate the gradient based on a projected low-dimensional space.
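A simplified sketch of subspace gradient estimation, assuming a fixed linear projection matrix (the function and argument names are hypothetical, and the paper also studies nonlinear projections): random directions are drawn in the low-dimensional space, lifted to the input space, and used for Monte-Carlo finite-difference estimates.

```python
import numpy as np

def subspace_grad_estimate(loss_fn, x, proj, n_samples=50, sigma=1e-3):
    """Finite-difference gradient estimate restricted to a low-dimensional
    subspace. proj has shape (d_low, d_high) and maps low-dimensional
    directions into the input space."""
    base = loss_fn(x)
    grad_low = np.zeros(proj.shape[0])
    for _ in range(n_samples):
        v = np.random.randn(proj.shape[0])
        v /= np.linalg.norm(v)
        delta = sigma * (v @ proj)              # lift the direction to input space
        grad_low += (loss_fn(x + delta) - base) / sigma * v
    return (grad_low / n_samples) @ proj        # estimate expressed in input space
```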
no code implementations • CVPR 2020 • Huichen Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li
Such adversarial attacks can be achieved by adding a small magnitude of perturbation to the input to mislead model prediction.
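The sentence describes the generic perturbation mechanism; as a concrete white-box illustration (the classic fast gradient sign step, not the black-box attack studied in this paper):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Single-step attack: perturb the input by eps in the direction of the
    sign of the loss gradient, then clip back to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```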
no code implementations • 3 Mar 2020 • ZhaoXin Huan, Yulong Wang, Xiaolu Zhang, Lin Shang, Chilin Fu, Jun Zhou
Adversarial examples often exhibit black-box attack transferability, meaning that adversarial examples crafted for one model can also fool another model.
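A small sketch of how transferability is typically measured (the function and argument names are hypothetical): adversarial examples are crafted against a surrogate model with any white-box attack, then evaluated on a different target model.

```python
import torch

def transfer_rate(craft_fn, surrogate, target, x, y):
    """Craft adversarial examples against a surrogate model, then measure how
    often they also fool a different target model (black-box transferability)."""
    x_adv = craft_fn(surrogate, x, y)            # any white-box attack, e.g. FGSM
    with torch.no_grad():
        return (target(x_adv).argmax(dim=1) != y).float().mean().item()
```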
no code implementations • 5 Oct 2019 • Bingzhe Wu, Chaochao Chen, Shiwan Zhao, Cen Chen, Yuan YAO, Guangyu Sun, Li Wang, Xiaolu Zhang, Jun Zhou
Based on this framework, we demonstrate that SGLD can prevent the information leakage of the training dataset to a certain extent.
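For reference, a minimal sketch of a single SGLD update in its standard form (not tied to the paper's analysis): a gradient step plus Gaussian noise whose variance is tied to the step size, which is the source of the randomness that limits memorization of individual training points.

```python
import torch

def sgld_step(params, loss, lr=1e-5):
    """One stochastic gradient Langevin dynamics (SGLD) update: a gradient
    step of size lr plus N(0, 2*lr) Gaussian noise on every parameter."""
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            noise = torch.randn_like(p) * (2.0 * lr) ** 0.5
            p.add_(-lr * g + noise)
```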
1 code implementation • 27 Sep 2019 • Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu
Network pruning is an important research field that aims to reduce the computational cost of neural networks.
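As a concrete example of the general idea, a plain magnitude-pruning baseline (not necessarily the criterion used in this paper): the smallest-magnitude weights are zeroed out until a target sparsity is reached.

```python
import torch

def magnitude_prune(weight, sparsity=0.5):
    """Zero out the smallest-magnitude weights to reach the target sparsity;
    returns the pruned weights and the binary mask."""
    k = max(int(weight.numel() * sparsity), 1)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).float()
    return weight * mask, mask
```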
no code implementations • NeurIPS 2019 • Bingzhe Wu, Shiwan Zhao, Chaochao Chen, Haoyang Xu, Li Wang, Xiaolu Zhang, Guangyu Sun, Jun Zhou
In this paper, we aim to understand the generalization properties of generative adversarial networks (GANs) from a new perspective of privacy protection.
no code implementations • CVPR 2019 • Bingzhe Wu, Shiwan Zhao, Guangyu Sun, Xiaolu Zhang, Zhong Su, Caihong Zeng, Zhihong Liu
(2) privacy leakage: the model trained using a conventional method may involuntarily reveal the private information of the patients in the training dataset.
no code implementations • 7 Sep 2018 • Xiaolu Zhang, Shiwan Zhao, Lingxi Xie
This paper considers gastric ulcer detection in wireless capsule endoscopy (WCE) images, in which the major challenge is to detect lesions that appear only in a local region.
no code implementations • 30 Jun 2018 • Bingzhe Wu, Xiaolu Zhang, Shiwan Zhao, Lingxi Xie, Caihong Zeng, Zhihong Liu, Guangyu Sun
Given an input image from a specified stain, several generators are first applied to estimate its appearances in other staining methods, and a classifier follows to combine visual cues from different stains for prediction (whether it is pathological, or which type of pathology it has).
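Illustrative wiring only (module names, feature fusion, and sizes below are assumptions, not the paper's exact architecture): per-stain generators synthesize the other stain views, a shared encoder extracts features from each view, and a linear head fuses all views for the final prediction.

```python
import torch
import torch.nn as nn

class MultiStainNet(nn.Module):
    """Sketch of a multi-stain pipeline: generators map the input image to
    other stains, then a classifier combines features from all stain views."""
    def __init__(self, generators, encoder, feat_dim, n_classes):
        super().__init__()
        self.generators = nn.ModuleList(generators)    # one generator per target stain
        self.encoder = encoder                          # shared feature extractor
        self.head = nn.Linear(feat_dim * (len(generators) + 1), n_classes)

    def forward(self, x):
        views = [x] + [g(x) for g in self.generators]   # original + synthesized stains
        feats = [self.encoder(v) for v in views]
        return self.head(torch.cat(feats, dim=1))
```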