no code implementations • EMNLP 2021 • Hengtong Zhang, Tianhang Zheng, Yaliang Li, Jing Gao, Lu Su, Bo Li
To address this problem, we propose a training framework with certified robustness to eliminate the causes that trigger the generation of profanity.
no code implementations • 26 Feb 2024 • Tianhang Zheng, Baochun Li
Federated learning has recently emerged as a decentralized approach to learning a high-performance model without access to user data.
1 code implementation • 3 Feb 2024 • Mengnan Zhao, Lihe Zhang, Tianhang Zheng, Yuqiu Kong, BaoCai Yin
Large-scale diffusion models, known for their impressive image generation capabilities, have raised concerns among researchers regarding social impacts, such as the imitation of copyrighted artistic styles.
no code implementations • 29 Nov 2023 • Jia Li, Lijie Hu, Jingfeng Zhang, Tianhang Zheng, Hua Zhang, Di Wang
In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions.
1 code implementation • 20 Oct 2023 • Xinyu Zhang, Qingyu Liu, Zhongjie Ba, Yuan Hong, Tianhang Zheng, Feng Lin, Li Lu, Kui Ren
In this paper, we first conduct a comprehensive study on prior FL attacks and detection methods.
no code implementations • 29 Sep 2021 • Tianhang Zheng, Baochun Li
In this paper, we show that PGD-2 AT with random initialization (PGD-2-RS AT) and attack step size $\alpha=1.25\epsilon/2$ needs only about half the computational cost of FGSM + GradAlign AT and can actually avoid catastrophic overfitting for large $\ell_\infty$ perturbations.
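A minimal numpy sketch of the PGD-2-RS crafting step described above, applied to a linear softmax classifier; the model, shapes, and function names are illustrative and not the paper's code:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pgd2_rs(W, b, x, y_onehot, eps):
    """Craft an adversarial example with random initialization and 2 PGD steps
    of size alpha = 1.25 * eps / 2, for a linear softmax classifier (illustrative)."""
    alpha = 1.25 * eps / 2
    # Random start: uniform perturbation inside the eps-ball.
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(2):
        p = softmax(x_adv @ W + b)
        grad = (p - y_onehot) @ W.T            # d(cross-entropy)/dx for a linear model
        x_adv = x_adv + alpha * np.sign(grad)  # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv
```

With only two gradient computations per example, this is the cost profile the comparison above refers to.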
no code implementations • 15 May 2020 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms and the lower bounds (criteria).
no code implementations • 14 May 2020 • Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, Kui Ren
We first formulate generation of adversarial skeleton actions as a constrained optimization problem by representing or approximating the physiological and physical constraints with mathematical formulations.
no code implementations • Network and Distributed Systems Security (NDSS) Symposium 2020 • Zhongjie Ba, Tianhang Zheng, Xinyu Zhang, Zhan Qin, Baochun Li, Xue Liu, Kui Ren
The second limitation comes from the common belief that these sensors can only pick up a narrow band (85-100Hz) of speech signals due to a sampling ceiling of 200Hz.
no code implementations • 25 Sep 2019 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
We answer the above two questions by first demonstrating that the Gaussian and Exponential mechanisms are the (near) optimal options to certify $\ell_2$- and $\ell_\infty$-normed robustness.
no code implementations • 26 Apr 2019 • Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, Kui Ren
Knowledge graph embedding (KGE) is a technique for learning continuous embeddings for entities and relations in a knowledge graph. Because it benefits a variety of downstream tasks, such as knowledge graph completion, question answering, and recommendation, KGE has recently gained significant attention.
3 code implementations • ICCV 2019 • Tianhang Zheng, Changyou Chen, Junsong Yuan, Bo Li, Kui Ren
We construct the saliency map based on point dropping, which is a non-differentiable operation.
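Because point dropping is non-differentiable, a saliency score cannot be read directly off a gradient; the brute-force alternative is leave-one-out dropping. A minimal numpy sketch of that baseline follows (the `loss_fn` and names are illustrative; the paper builds a differentiable approximation rather than this exhaustive loop):

```python
import numpy as np

def drop_saliency(loss_fn, points):
    """Leave-one-out saliency: change in loss when each point is dropped.
    points: (N, d) array; loss_fn maps an (M, d) array to a scalar loss."""
    base = loss_fn(points)
    scores = np.empty(len(points))
    for i in range(len(points)):
        reduced = np.delete(points, i, axis=0)  # drop point i (non-differentiable)
        scores[i] = loss_fn(reduced) - base     # negative score = dropping it helps
    return scores
```

The exhaustive loop costs one forward pass per point, which is what motivates a differentiable approximation in practice.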
no code implementations • 10 Oct 2018 • Tianhang Zheng, Changyou Chen, Kui Ren
In this paper, we give a negative answer by proposing a training paradigm that is comparable to PGD adversarial training on several standard datasets, while only using noisy-natural samples.
4 code implementations • 16 Aug 2018 • Tianhang Zheng, Changyou Chen, Kui Ren
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks.
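A minimal numpy sketch of the k-step $\ell_\infty$ PGD adversary referenced above, with random start, signed gradient ascent, and projection; the gradient function, names, and hyperparameters are illustrative:

```python
import numpy as np

def pgd_linf(grad_fn, x, eps, alpha, steps, rng=np.random):
    """k-step l-inf PGD: random start, then repeat signed ascent + projection.
    grad_fn(x_adv) returns the gradient of the loss w.r.t. the input."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start in the eps-ball
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))   # maximize the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project into the eps-ball
    return x_adv
```

Adversarial training in the PGD style then minimizes the classifier's loss on `pgd_linf` outputs instead of clean inputs.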