Search Results for author: Jia-Li Yin

Found 6 papers, 4 papers with code

MetaFBP: Learning to Learn High-Order Predictor for Personalized Facial Beauty Prediction

1 code implementation • 23 Nov 2023 • Luojun Lin, Zhifeng Shen, Jia-Li Yin, Qipeng Liu, Yuanlong Yu, WeiJie Chen

To this end, we propose a novel MetaFBP framework, in which we devise a universal feature extractor to capture aesthetic commonality and then adapt to each user's aesthetic individuality by shifting the decision boundary of the predictor via a meta-learning mechanism.

Facial Beauty Prediction • Meta-Learning
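
As a rough illustration of the adaptation idea in the excerpt above, the sketch below runs a MAML-style inner loop that tunes a per-user predictor head on top of a shared feature extractor. All names, dimensions, and the MSE objective are illustrative assumptions, not the authors' MetaFBP code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_dim = 128
# Stand-in for the shared backbone that captures aesthetic commonality.
extractor = nn.Sequential(nn.Linear(64, feature_dim), nn.ReLU())

def adapt_predictor(head, support_x, support_y, inner_lr=0.01, steps=5):
    """Shift the predictor's decision boundary toward one user's ratings."""
    fast_weights = [p.clone() for p in head.parameters()]  # (weight, bias)
    for _ in range(steps):
        feats = extractor(support_x)
        preds = F.linear(feats, fast_weights[0], fast_weights[1]).squeeze(-1)
        loss = F.mse_loss(preds, support_y)
        grads = torch.autograd.grad(loss, fast_weights, create_graph=True)
        fast_weights = [w - inner_lr * g for w, g in zip(fast_weights, grads)]
    return fast_weights  # user-specific predictor weights

head = nn.Linear(feature_dim, 1)     # predictor head shared across users
support_x = torch.randn(10, 64)      # one user's rated images (stand-in features)
support_y = torch.rand(10) * 5.0     # that user's beauty scores
user_weights = adapt_predictor(head, support_x, support_y)
```

In an outer loop, the post-adaptation loss on held-out ratings would update the extractor and the initial head, which is the usual meta-learning recipe this sketch assumes.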

An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability

1 code implementation • ICCV 2023 • Bin Chen, Jia-Li Yin, Shukai Chen, Bo-Hao Chen, Ximeng Liu

Alternatively, model ensemble adversarial attacks fuse the outputs of surrogate models with diverse architectures into an ensemble loss, making the generated adversarial example more likely to transfer to other models because it must fool multiple models concurrently.

Adversarial Attack
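
The fused ensemble loss can be sketched roughly as below: average the cross-entropy of several surrogates and take a single signed-gradient step. Weights are uniform for brevity; the paper's adaptive per-model weighting is not reproduced, and the toy surrogates are stand-ins.

```python
import torch
import torch.nn as nn

def ensemble_fgsm(models, x, y, eps=8 / 255):
    """Craft an adversarial example that tries to fool all surrogates at once."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Ensemble loss: uniform average of each surrogate's cross-entropy.
    loss = sum(nn.functional.cross_entropy(m(x_adv), y) for m in models) / len(models)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()  # one signed-gradient step
        x_adv = x_adv.clamp(0, 1)                # stay in valid image range
    return x_adv.detach()

# Toy surrogates standing in for diverse architectures.
models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = ensemble_fgsm(models, x, y)
```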

SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation

1 code implementation • 12 Dec 2022 • Wanqing Zhu, Jia-Li Yin, Bo-Hao Chen, Ximeng Liu

In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.

Adversarial Robustness • Unsupervised Domain Adaptation
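
A minimal sketch of the self-training idea, assuming a source-trained teacher that pseudo-labels unlabeled target data and a student that is adversarially trained on those pseudo-labels. SRoUDA's meta-learned update schedule and architectural details are omitted; every name here is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=5):
    """Multi-step attack used to craft adversarial target examples."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = torch.min(torch.max(x_adv + alpha * grad.sign(), x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # source-trained
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(student.parameters(), lr=0.1)

target_x = torch.rand(8, 3, 32, 32)             # unlabeled target-domain batch
with torch.no_grad():
    pseudo_y = teacher(target_x).argmax(dim=1)  # teacher pseudo-labels
x_adv = pgd(student, target_x, pseudo_y)
loss = F.cross_entropy(student(x_adv), pseudo_y)
opt.zero_grad(); loss.backward(); opt.step()    # robust self-training step
```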

Global Learnable Attention for Single Image Super-Resolution

1 code implementation • 2 Dec 2022 • Jian-Nan Su, Min Gan, Guang-Yong Chen, Jia-Li Yin, C. L. Philip Chen

Utilizing this finding, we propose a Global Learnable Attention (GLA) that adaptively modifies the similarity scores of non-local textures during training, instead of relying only on a fixed similarity scoring function such as the dot product.

Image Super-Resolution
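
A hedged sketch of what a learnable similarity score could look like in a non-local attention layer: a small MLP adds a trainable correction to the raw dot-product scores. This illustrates the general idea under assumed shapes and names, not the GLA layer from the paper.

```python
import torch
import torch.nn as nn

class LearnableSimilarityAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        # Learnable correction applied to each raw dot-product score.
        self.score_adjust = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # (b, hw, c)
        k = self.k(x).flatten(2)                    # (b, c, hw)
        v = self.v(x).flatten(2).transpose(1, 2)    # (b, hw, c)
        scores = torch.bmm(q, k) / c ** 0.5         # fixed dot-product similarity
        scores = scores + self.score_adjust(scores.unsqueeze(-1)).squeeze(-1)
        out = torch.bmm(scores.softmax(dim=-1), v)  # aggregate non-local textures
        return out.transpose(1, 2).reshape(b, c, h, w) + x

layer = LearnableSimilarityAttention(channels=8)
y = layer(torch.randn(1, 8, 16, 16))
```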

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness

no code implementations • 1 Dec 2021 • Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen

However, most existing adversarial training methods focus on improving robust accuracy by strengthening the adversarial examples while neglecting the increasing shift between natural data and adversarial examples, which leads to a dramatic decrease in natural accuracy.

Adversarial Robustness
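
One way to make the natural-versus-adversarial feature shift concrete is a class-conditional alignment penalty, sketched below under the assumption that per-class feature centroids of the two domains should match. The helper name and the MSE form are illustrative, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def class_conditional_alignment(feat_nat, feat_adv, labels, num_classes):
    """Mean squared distance between per-class centroids of the two domains."""
    loss = feat_nat.new_zeros(())
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            loss = loss + F.mse_loss(feat_nat[mask].mean(0), feat_adv[mask].mean(0))
    return loss / num_classes

# Stand-in penultimate-layer features for one batch.
feat_nat = torch.randn(16, 64)
feat_adv = feat_nat + 0.1 * torch.randn(16, 64)
labels = torch.randint(0, 10, (16,))
align = class_conditional_alignment(feat_nat, feat_adv, labels, num_classes=10)
# total = adv_ce_loss + lambda_align * align   # combined training objective
```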

Robust Single-step Adversarial Training with Regularizer

no code implementations • 5 Feb 2021 • Lehui Xie, Yaopeng Wang, Jia-Li Yin, Ximeng Liu

Previous methods try to reduce the computational burden of adversarial training by using single-step adversarial example generation schemes. These can effectively improve efficiency, but they also introduce the problem of catastrophic overfitting, where the robust accuracy against the Fast Gradient Sign Method (FGSM) reaches nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% within a single epoch.
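
A minimal sketch of single-step adversarial training with a regularizer, here a consistency term between natural and adversarial logits, which is one common way to resist catastrophic overfitting. This is an assumption-laden stand-in, not the paper's specific regularizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps, lam = 8 / 255, 1.0

x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

# Single-step FGSM example: one gradient computation, hence the low cost.
x_req = x.clone().requires_grad_(True)
grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

logits_nat, logits_adv = model(x), model(x_adv)
reg = F.mse_loss(logits_adv, logits_nat)           # keep the two predictions close
loss = F.cross_entropy(logits_adv, y) + lam * reg
opt.zero_grad(); loss.backward(); opt.step()
```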
