no code implementations • 16 Apr 2024 • Ruifeng Li, Dongzhan Zhou, Ancheng Shen, Ao Zhang, Mao Su, Mingqian Li, Hongyang Chen, Gang Chen, Yin Zhang, Shufei Zhang, Yuqiang Li, Wanli Ouyang
Overall, our work illustrates the benefits and potential of using PEMAL in AIDD and other scenarios with data scarcity and noise.
no code implementations • 10 Feb 2024 • Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Dongzhan Zhou, Shufei Zhang, Mao Su, Hansen Zhong, Yuqiang Li, Wanli Ouyang
ChemLLM beats GPT-3.5 on all three principal tasks in chemistry, i.e., name conversion, molecular caption, and reaction prediction, and surpasses GPT-4 on two of them.
no code implementations • 18 Feb 2022 • Chenru Jiang, Kaizhu Huang, Shufei Zhang, Jimin Xiao, Zhenxing Niu, Amir Hussain
In this paper, we focus on tackling the precise keypoint coordinate regression task.
no code implementations • 29 Sep 2021 • Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Bin Gu, Huan Xiong, Xinping Yi
This is likely because conventional adversarial training methods generate adversarial perturbations in a supervised way, so the adversarial samples are highly biased towards the decision boundary, resulting in an inhomogeneous data distribution.
1 code implementation • 8 Jul 2021 • Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Rui Zhang, Xinping Yi
The proposed adversarial training with latent distribution (ATLD) method defends against adversarial attacks by crafting LMAEs with the latent manifold in an unsupervised manner.
1 code implementation • ICCV 2021 • Zhiqiang Gao, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Chaoliang Zhong
In particular, we show that the distribution discrepancy can be reduced by constraining feature gradients of two domains to have similar distributions.
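As a rough illustration of the idea of constraining two domains' feature gradients to have similar distributions, the sketch below computes per-sample feature gradients for a source and a target domain and measures the gap between their gradient distributions. Note this is not the authors' exact method: the toy objective `f(x) = 0.5*(w·x)^2` and the moment-matching penalty are illustrative assumptions.

```python
import numpy as np

def feature_gradients(X, w):
    """Per-sample gradient of the toy objective f(x) = 0.5 * (w.x)^2
    with respect to the input features x (purely illustrative)."""
    return (X @ w)[:, None] * w[None, :]

def gradient_alignment_penalty(G_src, G_tgt):
    """Distance between two gradient distributions, measured here as the
    gap between their first two moments (a simple proxy, not the paper's
    actual distribution constraint)."""
    mean_gap = np.linalg.norm(G_src.mean(axis=0) - G_tgt.mean(axis=0))
    var_gap = np.linalg.norm(G_src.var(axis=0) - G_tgt.var(axis=0))
    return mean_gap + var_gap

rng = np.random.default_rng(0)
w = rng.normal(size=3)
X_src = rng.normal(size=(100, 3))                 # source domain
X_tgt = rng.normal(loc=1.0, size=(100, 3))        # shifted target domain
penalty = gradient_alignment_penalty(feature_gradients(X_src, w),
                                     feature_gradients(X_tgt, w))
```

Minimizing such a penalty during training would push the two domains' feature gradients toward a common distribution, which is the intuition the abstract states for reducing the distribution discrepancy.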
no code implementations • ICLR 2020 • Shufei Zhang, Zhuang Qian, Kai-Zhu Huang, Jimin Xiao, Yuan He
Generative adversarial networks (GANs) are powerful generative models, but they usually suffer from instability and generalization problems, which may lead to poor generations.
no code implementations • 15 Nov 2019 • Shufei Zhang, Kai-Zhu Huang, Zenglin Xu
We propose to exploit an energy function to describe the stability and prove that reducing such energy guarantees the robustness against adversarial examples.
no code implementations • ICLR 2019 • Shufei Zhang, Kai-Zhu Huang, Rui Zhang, Amir Hussain
In this paper, we propose a generalized framework that addresses the learning problem of adversarial examples with Riemannian geometry.
no code implementations • 16 Jul 2018 • Shufei Zhang, Kai-Zhu Huang, Jianke Zhu, Yang Liu
Existing adversarial training methods consider only how the worst perturbed examples (i.e., adversarial examples) affect the model output.
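The worst-case training scheme this abstract refers to can be sketched with a standard FGSM-style adversarial training loop on a toy logistic-regression model; the model, data, and hyperparameters below are hypothetical, and the sketch stands in for the family of methods being critiqued, not the paper's own proposal.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Worst-case (loss-maximizing) perturbation within an
    L-infinity ball of radius eps around x."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w              # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.1, epochs=200, seed=0):
    """Train only on the worst perturbed examples, as in
    conventional adversarial training."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = fgsm_perturb(xi, yi, w, b, eps)
            p = sigmoid(x_adv @ w + b)
            w -= lr * (p - yi) * x_adv   # gradient step on adversarial input
            b -= lr * (p - yi)
    return w, b

# Toy linearly separable data
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-1.0, -2.5]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adversarial_train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
```

Because every update is driven solely by the single worst perturbation, such schemes ignore how the rest of the perturbation ball affects the model, which is the limitation the abstract points at.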