no code implementations • 18 Oct 2022 • Han Xu, Xiaorui Liu, Yuxuan Wan, Jiliang Tang
We demonstrate that fairly trained classifiers can be highly vulnerable to such poisoning attacks, suffering a much worse accuracy-fairness trade-off, even when we apply some of the most effective defenses (originally proposed for traditional classification tasks).
1 code implementation • 18 Oct 2022 • Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang
Unlearnable strategies have been introduced to prevent third parties from training on data without permission.
no code implementations • 17 Oct 2022 • Han Xu, Menghai Pan, Zhimeng Jiang, Huiyuan Chen, Xiaoting Li, Mahashweta Das, Hao Yang
The existence of adversarial attacks (or adversarial examples) raises serious concerns about the safety of machine learning (ML) models.
no code implementations • 17 Oct 2022 • Pengfei He, Han Xu, Jie Ren, Yuxuan Wan, Zitao Liu, Jiliang Tang
To tackle this problem, we propose Probabilistic Categorical Adversarial Attack (PCAA), which transfers the discrete optimization problem to a continuous problem that can be solved efficiently by Projected Gradient Descent.
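The core idea stated above, relaxing a discrete search into a continuous problem solved by Projected Gradient Descent, can be sketched on a toy objective. This is a minimal, hypothetical illustration of PGD over a probability-simplex relaxation of a categorical choice; the objective and all names are placeholders, not the paper's actual PCAA implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def pgd_simplex(grad_fn, x0, lr=0.1, steps=100):
    """Projected gradient descent: relax a discrete choice over K categories
    into a point on the simplex, then descend and re-project each step."""
    x = project_simplex(np.asarray(x0, dtype=float))
    for _ in range(steps):
        x = project_simplex(x - lr * grad_fn(x))
    return x

# Toy convex objective: minimize ||x - t||^2 over the simplex,
# where t lies outside the simplex.
t = np.array([0.9, 0.4, -0.1])
x_star = pgd_simplex(lambda x: 2 * (x - t), x0=np.ones(3) / 3)
```

For this convex toy problem, PGD converges to the projection of `t` onto the simplex; the actual attack would instead descend the loss of the target classifier over relaxed categorical features.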
no code implementations • 21 Sep 2022 • Han Xu, Zheming Zuo, Jie Li, Victor Chang
Situated at the core of Artificial Intelligence (AI), Machine Learning (ML) and, more specifically, Deep Learning (DL) have achieved great success in the past two decades.
no code implementations • 21 Sep 2022 • Wenqi Fan, Xiangyu Zhao, Xiao Chen, Jingran Su, Jingtong Gao, Lin Wang, Qidong Liu, Yiqi Wang, Han Xu, Lei Chen, Qing Li
As one of the most successful AI-powered applications, recommender systems aim to help people make appropriate decisions effectively and efficiently by providing personalized suggestions in many aspects of our lives, especially in human-oriented online services such as e-commerce platforms and social media sites.
no code implementations • 26 Jun 2022 • Han Xu, Hao Qi, Kunyao Wang, Pei Wang, Guowei Zhang, Congcong Liu, Junsheng Jin, Xiwei Zhao, Changping Peng, Zhangang Lin, Jingping Shao
To our knowledge, we are the first to propose an end-to-end solution for online training and deployment of complex CTR models from the system framework side.
no code implementations • 1 Jun 2022 • Yuxuan Wan, Han Xu, Xiaorui Liu, Jie Ren, Wenqi Fan, Jiliang Tang
However, federated learning still risks privacy leakage, because attackers can deliberately mount gradient leakage attacks to reconstruct the client data.
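As a minimal illustration of why shared gradients can leak client data, consider a linear model with a bias term under squared loss: the weight gradient is the private input scaled by the bias gradient, so the input is recovered by a single division. This toy sketch is a simplification under stated assumptions; real gradient leakage attacks (e.g. DLG-style) instead optimize dummy data to match the gradients of deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client side: one private example and a shared linear model f(x) = w.x + b
# trained with squared loss (y - f(x))^2 is hypothetical setup for illustration.
w, b = rng.normal(size=4), 0.3
x_true, y_true = rng.normal(size=4), 1.5

err = w @ x_true + b - y_true
grad_w = 2 * err * x_true   # gradient w.r.t. weights, uploaded to the server
grad_b = 2 * err            # gradient w.r.t. bias, uploaded to the server

# Attacker side: the weight gradient is the private input scaled by the
# bias gradient, so the input leaks from the shared update directly.
x_rec = grad_w / grad_b
```

The same ratio trick applies to the first fully connected layer of a deeper network, which is one reason gradient sharing alone does not guarantee privacy.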
no code implementations • 2 May 2022 • Yaxin Li, Xiaorui Liu, Han Xu, Wentao Wang, Jiliang Tang
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.
1 code implementation • 21 Apr 2022 • Han Xu, Abhijit Sarkar, A. Lynn Abbott
A primary motivation of the work has been to achieve results that are consistent across the full range of skin tones, even while using a training dataset that is significantly biased toward lighter skin tones.
1 code implementation • 3 Apr 2022 • Lingyun Lu, Bang Wang, Zizhuo Zhang, Shenghao Liu, Han Xu
Recent studies regard items as entities of a knowledge graph and leverage graph neural networks to assist item encoding, yet consider each relation type individually.
no code implementations • CVPR 2022 • Han Xu, Jiayi Ma, Jiteng Yuan, Zhuliang Le, Wei Liu
Specifically, for image registration, we solve the bottlenecks of defining registration metrics applicable for multi-modal images and facilitating the network convergence.
1 code implementation • NeurIPS 2021 • Xiaorui Liu, Jiayuan Ding, Wei Jin, Han Xu, Yao Ma, Zitao Liu, Jiliang Tang
Graph neural networks (GNNs) have shown great power in graph representation learning for numerous tasks.
no code implementations • 7 Aug 2021 • Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, JianPing Wang, Charu Aggarwal
Despite the great success, recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
no code implementations • 28 Jul 2021 • Wentao Wang, Han Xu, Xiaorui Liu, Yaxin Li, Bhavani Thuraisingham, Jiliang Tang
Adversarial training has been empirically proven to be one of the most effective and reliable defense methods against adversarial attacks.
no code implementations • 9 Jun 2021 • Han Xu, Xiaorui Liu, Wentao Wang, Wenbiao Ding, Zhongqin Wu, Zitao Liu, Anil Jain, Jiliang Tang
In this work, we study the effect of memorization in adversarially trained DNNs and disclose two important findings: (a) memorizing atypical samples is only effective in improving a DNN's accuracy on clean atypical samples, but hardly improves its adversarial robustness, and (b) memorizing certain atypical samples can even hurt the DNN's performance on typical samples.
1 code implementation • 10 Jan 2021 • Zheming Zuo, Jie Li, Han Xu, Noura Al Moubayed
Disruptive technologies provide unparalleled opportunities to contribute to the identification of many aspects of pervasive healthcare, from the adoption of the Internet of Things through to Machine Learning (ML) techniques.
no code implementations • 24 Dec 2020 • Han Xu, Lingna Wang, Haidong Yuan, Xin Wang
Here we study the generalizability of optimal control, namely, optimal controls that can be systematically updated across a range of parameters with minimal cost.
2 code implementations • 13 Oct 2020 • Han Xu, Xiaorui Liu, Yaxin Li, Anil K. Jain, Jiliang Tang
However, we find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data.
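The disparity noted above can be quantified by comparing per-group (e.g. per-class) robust accuracy. The helper below is a hypothetical sketch of such a metric on toy correctness flags, not the paper's exact measurement protocol.

```python
from collections import defaultdict

def group_accuracy(labels, correct):
    """Per-group accuracy from per-example 0/1 correctness flags."""
    hits, total = defaultdict(int), defaultdict(int)
    for y, c in zip(labels, correct):
        total[y] += 1
        hits[y] += int(c)
    return {y: hits[y] / total[y] for y in total}

# Toy robust-accuracy flags under attack for two classes:
# class 0 stays far more robust than class 1.
labels  = [0, 0, 0, 0, 1, 1, 1, 1]
correct = [1, 1, 1, 0, 1, 0, 0, 0]

acc = group_accuracy(labels, correct)
disparity = max(acc.values()) - min(acc.values())
```

Reporting the worst-group accuracy alongside the average makes the kind of disparity described above visible, since the average alone can hide a badly underperforming group.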
no code implementations • 2 Sep 2020 • Han Xu, Yaxin Li, Xiaorui Liu, Hui Liu, Jiliang Tang
Thus, in this paper, we present an initial study of adversarial attacks on meta-learning in the few-shot classification setting.
3 code implementations • 13 May 2020 • Yaxin Li, Wei Jin, Han Xu, Jiliang Tang
DeepRobust is a PyTorch adversarial learning library which aims to build a comprehensive and easy-to-use platform to foster this research field.
3 code implementations • 2 Mar 2020 • Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang
As the extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
no code implementations • 29 Nov 2019 • Wenpeng Li, Yongli Sun, Jinjun Wang, Han Xu, Xiangru Yang, Long Cui
Jointly utilizing global and local features to improve model accuracy is becoming a popular approach for person re-identification (ReID), because previous works using global features alone have very limited capacity to extract discriminative local patterns in the resulting feature representation.
4 code implementations • 17 Sep 2019 • Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
In this survey, we review state-of-the-art algorithms for generating adversarial examples and the countermeasures against them, for three popular data types: images, graphs, and text.
no code implementations • 3 May 2019 • Xiong Deng, Chao Chen, Deyang Chen, Xiangbin Cai, Xiaozhe Yin, Chao Xu, Fei Sun, Caiwen Li, Yan Li, Han Xu, Mao Ye, Guo Tian, Zhen Fan, Zhipeng Hou, Minghui Qin, Yu Chen, Zhenlin Luo, Xubing Lu, Guofu Zhou, Lang Chen, Ning Wang, Ye Zhu, Xingsen Gao, Jun-Ming Liu
The limitations of commercially available single-crystal substrates and the lack of continuous strain tunability preclude taking full advantage of strain engineering for exploring novel properties and exhaustively studying fundamental physics in complex oxides.
1 code implementation • 25 Apr 2019 • Han Xu, Junning Li, Liqiang Liu, Yu Wang, Haidong Yuan, Xin Wang
Measurement and estimation of parameters are essential for science and engineering, where one of the main quests is to find systematic schemes that can achieve high precision.
Quantum Physics • Mesoscale and Nanoscale Physics
no code implementations • 17 May 2018 • Kevin He, Jian Kang, Hyokyoung Grace Hong, Ji Zhu, Yanming Li, Huazhen Lin, Han Xu, Yi Li
Modern bio-technologies have produced a vast amount of high-throughput data with the number of predictors far greater than the sample size.