Search Results for author: Tianhang Zheng

Found 14 papers, 4 papers with code

Profanity-Avoiding Training Framework for Seq2seq Models with Certified Robustness

no code implementations • EMNLP 2021 • Hengtong Zhang, Tianhang Zheng, Yaliang Li, Jing Gao, Lu Su, Bo Li

To address this problem, we propose a training framework with certified robustness to eliminate the causes that trigger the generation of profanity.

Dialogue Generation • Style Transfer

FedReview: A Review Mechanism for Rejecting Poisoned Updates in Federated Learning

no code implementations • 26 Feb 2024 • Tianhang Zheng, Baochun Li

Federated learning has recently emerged as a decentralized approach to learning a high-performance model without access to user data.

Federated Learning
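
The title describes the mechanism: candidate updates are reviewed before aggregation, and suspected poisoned ones are rejected. Below is a minimal sketch of one plausible realization, assuming a FedAvg-style server that scores each update on a small held-out validation set; the function names, the validation-set assumption, and the rejection rule are all illustrative, not the paper's actual protocol.

```python
import copy
import torch

def review_and_aggregate(global_model, client_updates, val_loader, device, reject_k=2):
    """Score each proposed client update on a held-out validation set,
    reject the reject_k worst-scoring ones, and average the rest (FedAvg).

    client_updates: list of candidate state_dicts proposed by clients.
    """
    scores = []
    for update in client_updates:
        candidate = copy.deepcopy(global_model)
        candidate.load_state_dict(update)
        candidate.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in val_loader:
                x, y = x.to(device), y.to(device)
                correct += (candidate(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        scores.append(correct / total)

    # Keep all but the reject_k lowest-scoring updates, then average.
    keep = sorted(range(len(scores)), key=scores.__getitem__,
                  reverse=True)[: len(scores) - reject_k]
    avg = {k: torch.stack([client_updates[i][k].float() for i in keep]).mean(dim=0)
           for k in client_updates[0]}
    global_model.load_state_dict(avg)
    return global_model
```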

Separable Multi-Concept Erasure from Diffusion Models

1 code implementation • 3 Feb 2024 • Mengnan Zhao, Lihe Zhang, Tianhang Zheng, Yuqiu Kong, BaoCai Yin

Large-scale diffusion models, known for their impressive image generation capabilities, have raised concerns among researchers regarding social impacts, such as the imitation of copyrighted artistic styles.

Image Generation • Machine Unlearning

Fair Text-to-Image Diffusion via Fair Mapping

no code implementations • 29 Nov 2023 • Jia Li, Lijie Hu, Jingfeng Zhang, Tianhang Zheng, Hua Zhang, Di Wang

In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions.

Fairness • Text-to-Image Generation

PGD-2 can be better than FGSM + GradAlign

no code implementations • 29 Sep 2021 • Tianhang Zheng, Baochun Li

In this paper, we show that PGD-2 AT with random initialization (PGD-2-RS AT) and attack step size $\alpha = 1.25\epsilon/2$ requires only about half the computational cost of FGSM + GradAlign AT and can actually avoid catastrophic overfitting under large $\ell_\infty$ perturbations.
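
For concreteness, here is a minimal sketch of the adversarial-example step this describes: a uniform random start in the $\ell_\infty$ ball followed by two signed-gradient steps of size $\alpha = 1.25\epsilon/2$. Tensor and function names are illustrative; the paper's training loop wraps this in standard adversarial training.

```python
import torch
import torch.nn.functional as F

def pgd2_rs_example(model, x, y, epsilon):
    """Craft an adversarial example with PGD-2 and random start (PGD-2-RS)
    under an l_inf budget epsilon, using step size alpha = 1.25 * epsilon / 2."""
    alpha = 1.25 * epsilon / 2
    # Random start: uniform in the l_inf ball of radius epsilon.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(2):  # two attack steps, hence "PGD-2"
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp_(-epsilon, epsilon)
    return (x + delta).clamp(0, 1)  # keep pixels in a valid range
```

In adversarial training, the model then takes an ordinary gradient step on the cross-entropy loss of these perturbed inputs.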

Towards Assessment of Randomized Smoothing Mechanisms for Certifying Adversarial Robustness

no code implementations • 15 May 2020 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms and the lower bounds (criteria).

Adversarial Robustness
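
As background for what the "magnitude of additive noise" buys: in randomized smoothing, the Gaussian mechanism's noise level $\sigma$ translates into a certified $\ell_2$ radius via the well-known Cohen et al. (2019) bound, which the sketch below computes. This is standard background, not the paper's assessment framework itself.

```python
from scipy.stats import norm

def gaussian_certified_radius(sigma, p_a):
    """Certified l2 radius of a Gaussian-smoothed classifier whose top
    class gets probability mass p_a > 0.5 under noise N(0, sigma^2 I):
    R = sigma * Phi^{-1}(p_a), the Cohen et al. (2019) bound."""
    if p_a <= 0.5:
        return 0.0  # no majority class, no certificate
    return sigma * norm.ppf(p_a)

# Doubling the noise doubles the radius at the same vote confidence.
print(gaussian_certified_radius(sigma=0.25, p_a=0.99))  # ~0.58
print(gaussian_certified_radius(sigma=0.50, p_a=0.99))  # ~1.16
```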

Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition

no code implementations • 14 May 2020 • Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, Kui Ren

We first formulate the generation of adversarial skeleton actions as a constrained optimization problem by representing or approximating the physiological and physical constraints with mathematical formulations.

Action Recognition • Skeleton Based Action Recognition
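
One way to picture the constrained formulation: after each unconstrained attack step on the 3D joint coordinates, project the skeleton back onto the constraint set, for instance by restoring the original bone lengths. The sketch below is a hypothetical projection of that kind, not the paper's exact constraint set.

```python
import torch

def restore_bone_lengths(joints_adv, joints_orig, bones):
    """Project perturbed joints so every bone keeps its original length,
    approximating a physiological constraint on the skeleton.

    joints_adv, joints_orig: (num_joints, 3) coordinate tensors.
    bones: [(parent, child), ...] listed in root-to-leaf order.
    """
    out = joints_adv.clone()
    for parent, child in bones:
        vec = out[child] - out[parent]
        target_len = (joints_orig[child] - joints_orig[parent]).norm()
        out[child] = out[parent] + vec / (vec.norm() + 1e-8) * target_len
    return out
```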

A Unified Framework for Randomized Smoothing Based Certified Defenses

no code implementations • 25 Sep 2019 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

We answer the above two questions by first demonstrating that the Gaussian and Exponential mechanisms are the (near-)optimal options for certifying $\ell_2$- and $\ell_\infty$-normed robustness, respectively.
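
For context, the Gaussian mechanism here means classifying under additive Gaussian noise and taking a majority vote; a minimal sketch of that smoothed prediction follows. The $\ell_\infty$ case with the Exponential mechanism follows the same pattern with a different noise distribution; names and defaults are illustrative.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma, n_samples=100):
    """Majority-vote prediction of a Gaussian-smoothed classifier:
    tally the base model's votes over n_samples noise draws."""
    num_classes = model(x).shape[-1]
    votes = torch.zeros(num_classes)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        votes[model(noisy).argmax(dim=-1)] += 1
    return votes.argmax().item()
```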

Data Poisoning Attack against Knowledge Graph Embedding

no code implementations • 26 Apr 2019 • Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, Kui Ren

Knowledge graph embedding (KGE) is a technique for learning continuous embeddings for the entities and relations in a knowledge graph. Owing to its benefits for a variety of downstream tasks such as knowledge graph completion, question answering, and recommendation, KGE has gained significant attention recently.

Data Poisoning • Knowledge Graph Completion • +2
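
To make "continuous embeddings for entities and relations" concrete, here is a minimal TransE-style scorer, one standard KGE model such a poisoning attack could target. TransE is an illustrative choice, not necessarily the model attacked in the paper.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Minimal TransE: a triple (h, r, t) is plausible when
    embedding(h) + embedding(r) lies close to embedding(t)."""

    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def score(self, h, r, t):
        # Lower distance means a more plausible triple.
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)
```

A poisoning attack then adds or perturbs training triples so that, after retraining, targeted triples receive distorted scores.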

PointCloud Saliency Maps

3 code implementations • ICCV 2019 • Tianhang Zheng, Changyou Chen, Junsong Yuan, Bo Li, Kui Ren

We construct the saliency map via point dropping, which is a non-differentiable operation.
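
The non-differentiability is clearest in the brute-force version of the idea: a point's saliency is the loss change when that point is discretely removed, as in the naive baseline sketched below (the paper's contribution is a differentiable approximation of this; names are illustrative).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def point_drop_saliency(model, points, label):
    """Brute-force saliency: loss increase when each point is dropped.

    points: (N, 3) point cloud; label: (1,) class-index tensor.
    Costs one forward pass per point, and dropping is non-differentiable.
    """
    base = F.cross_entropy(model(points.unsqueeze(0)), label)
    scores = torch.zeros(points.shape[0])
    for i in range(points.shape[0]):
        keep = torch.cat([points[:i], points[i + 1:]]).unsqueeze(0)
        scores[i] = F.cross_entropy(model(keep), label) - base
    return scores  # higher = dropping this point hurts the prediction more
```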

Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only

no code implementations • 10 Oct 2018 • Tianhang Zheng, Changyou Chen, Kui Ren

In this paper, we give a negative answer by proposing a training paradigm that is comparable to PGD adversarial training on several standard datasets while using only noisy-natural samples.

Adversarial Attack • Quantization
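
The "noisy-natural samples" half of the recipe is cheap to sketch: replace the multi-step PGD inner maximization with random additive noise. The soft-quantization network itself is omitted here, and $\sigma$ is an illustrative hyperparameter.

```python
import torch
import torch.nn.functional as F

def noisy_natural_step(model, optimizer, x, y, sigma=0.1):
    """One training step on noisy-natural samples: random noise stands in
    for the expensive PGD inner loop of standard adversarial training."""
    x_noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_noisy), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```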

Distributionally Adversarial Attack

4 code implementations • 16 Aug 2018 • Tianhang Zheng, Changyou Chen, Kui Ren

Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks.

Adversarial Attack
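
For reference, the PGD adversary this abstract refers to is the standard multi-step projected gradient attack; DAA generalizes it from optimizing a single perturbation to optimizing over an adversarial data distribution. A minimal $K$-step $\ell_\infty$ PGD sketch follows (the PGD-2-RS sketch earlier is the two-step special case; defaults are illustrative).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha, steps=40):
    """Standard K-step l_inf PGD with random start: the universal
    first-order adversary the abstract describes."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp_(-epsilon, epsilon)
            delta = (x + delta).clamp(0, 1) - x  # keep pixels valid
    return x + delta
```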
