Search Results for author: Yunpeng Gong

Found 9 papers, 4 papers with code

Adversarial Learning for Neural PDE Solvers with Sparse Data

no code implementations 4 Sep 2024 Yunpeng Gong, Yongjie Hou, Zhenzhong Wang, Zexin Lin, Min Jiang

Neural network solvers for partial differential equations (PDEs) have made significant progress, yet they continue to face challenges related to data scarcity and model robustness.

Data Augmentation

Beyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking

no code implementations 18 Jul 2024 Yunpeng Gong, Chuangliang Zhang, Yongjie Hou, Lifei Chen, Min Jiang

In the contemporary era of deep learning, where models often struggle to achieve robustness against adversarial attacks and strong generalization simultaneously, this study introduces a Local Feature Masking (LFM) strategy aimed at strengthening the performance of Convolutional Neural Networks (CNNs) on both fronts.

Adversarial Attack Adversarial Robustness +1
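The idea of masking local regions during training can be sketched as follows; this is a minimal illustration of the general technique, not the paper's exact LFM design, and the patch size and placement policy are assumptions for demonstration.

```python
import numpy as np

def local_feature_mask(feature_map, mask_h=4, mask_w=4, rng=None):
    """Zero out one random local patch of a CNN feature map (C, H, W).

    Illustrative sketch of local feature masking as a regularizer;
    patch dimensions and uniform placement are assumed hyperparameters.
    """
    rng = rng or np.random.default_rng()
    c, h, w = feature_map.shape
    top = rng.integers(0, h - mask_h + 1)   # random top-left corner
    left = rng.integers(0, w - mask_w + 1)
    masked = feature_map.copy()             # leave the input untouched
    masked[:, top:top + mask_h, left:left + mask_w] = 0.0
    return masked
```

Applied stochastically at training time, such masking forces the network to rely on features outside any single local region.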

Beyond Augmentation: Empowering Model Robustness under Extreme Capture Environments

no code implementations 18 Jul 2024 Yunpeng Gong, Yongjie Hou, Chuangliang Zhang, Min Jiang

This method improves the model's generalization under extreme capture conditions and enables it to learn more diverse features, better addressing the challenges of person re-identification (re-ID).

Data Augmentation Person Re-Identification

Cross-Task Attack: A Self-Supervision Generative Framework Based on Attention Shift

no code implementations 18 Jul 2024 Qingyuan Zeng, Yunpeng Gong, Min Jiang

Studying adversarial attacks on artificial intelligence (AI) systems helps discover model shortcomings, enabling the construction of a more robust system.

Adversarial Attack

Exploring Color Invariance through Image-Level Ensemble Learning

1 code implementation 19 Jan 2024 Yunpeng Gong, Jiaquan Li, Lifei Chen, Min Jiang

This issue is particularly pronounced in complex wide-area surveillance scenarios, such as person re-identification and industrial dust segmentation, where environmental variations cause models to overfit to color information during training and consequently lose performance.

Data Augmentation Ensemble Learning +2
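A common way to reduce a model's reliance on color cues is to randomly strip color during training; the sketch below is an illustrative augmentation in that spirit, not the paper's method. The probability `p` is an assumed hyperparameter, and the luminance weights are the standard ITU-R BT.601 coefficients.

```python
import numpy as np

def random_grayscale(image, p=0.5, rng=None):
    """With probability p, replace an RGB image (H, W, 3) with its
    grayscale version broadcast back to three channels.

    Minimal sketch of a color-invariance augmentation.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        gray = image @ np.array([0.299, 0.587, 0.114])  # BT.601 luminance
        return np.repeat(gray[..., None], 3, axis=2)
    return image
```

Training on a mix of color and grayscale views discourages the model from treating color as a shortcut feature.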

Person Re-identification Method Based on Color Attack and Joint Defence

2 code implementations 18 Nov 2021 Yunpeng Gong, Liqing Huang, Lifei Chen

Finally, a series of experimental results shows that the proposed joint adversarial defense method is more competitive than state-of-the-art methods.

Adversarial Defense Metric Learning +1

A Person Re-identification Data Augmentation Method with Adversarial Defense Effect

2 code implementations 21 Jan 2021 Yunpeng Gong, Zhiyong Zeng, Liwen Chen, Yifan Luo, Bin Weng, Feng Ye

This method not only improves the accuracy of the model but also helps it defend against adversarial examples; 2) Multi-Modal Defense, which integrates three homogeneous modalities of an image (visible, grayscale, and sketch) to further strengthen the model's defense.

Adversarial Defense Data Augmentation +3
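Deriving homogeneous modalities from a single RGB image can be sketched as below. The grayscale conversion uses standard luminance weights; the "sketch" here is a simple gradient-magnitude proxy, an assumption standing in for whatever sketch transform the paper actually uses.

```python
import numpy as np

def to_homogeneous_modalities(image):
    """From an RGB image (H, W, 3), derive grayscale and a crude
    edge 'sketch', returning all three modalities at the same shape.

    Illustrative sketch only; the edge map is a gradient-magnitude
    proxy, not the paper's sketch extraction.
    """
    gray = image @ np.array([0.299, 0.587, 0.114])  # luminance
    gy, gx = np.gradient(gray)                      # per-axis gradients
    sketch = np.hypot(gx, gy)                       # edge strength
    gray3 = np.repeat(gray[..., None], 3, axis=2)
    sketch3 = np.repeat(sketch[..., None], 3, axis=2)
    return image, gray3, sketch3
```

Because all three modalities share one shape, they can be fed through the same backbone and mixed freely during training.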

Eliminate Deviation with Deviation for Data Augmentation and a General Multi-modal Data Learning Method

1 code implementation 21 Jan 2021 Yunpeng Gong, Liqing Huang, Lifei Chen

Experiments on several ReID baselines and three common large-scale datasets (Market1501, DukeMTMC, and MSMT17) have verified the effectiveness of this method.

Ranked #2 on Person Re-Identification on Market-1501 (using extra training data)

Adversarial Defense Data Augmentation +4
