no code implementations • 26 Dec 2024 • Haonan He, Yuchen Ren, Yining Tang, Ziyang Xu, Junxian Li, Minghao Yang, Di Zhang, Dong Yuan, Tao Chen, Shufei Zhang, Yuqiang Li, Nanqing Dong, Wanli Ouyang, Dongzhan Zhou, Peng Ye
Large language models have already demonstrated their formidable capabilities in general domains, ushering in a revolutionary transformation.
1 code implementation • 18 Nov 2024 • Lechao Cheng, KaiFeng Chen, Jiyang Li, Shengeng Tang, Shufei Zhang, Meng Wang
Learning from noisy data has become essential for adapting deep learning models to real-world applications.
no code implementations • 4 Oct 2024 • Jianpeng Chen, Yawen Ling, Yazhou Ren, Zichen Wen, Tianyi Wu, Shufei Zhang, Lifang He
With the increasing prevalence of graph-structured data, multi-view graph clustering has been widely used in various downstream applications.
1 code implementation • 3 Oct 2024 • Di Zhang, Jianbo Wu, Jingdi Lei, Tong Che, Jiatong Li, Tong Xie, Xiaoshui Huang, Shufei Zhang, Marco Pavone, Yuqiang Li, Wanli Ouyang, Dongzhan Zhou
This paper presents an advanced mathematical problem-solving framework, LLaMA-Berry, for enhancing the mathematical reasoning ability of Large Language Models (LLMs).
1 code implementation • 14 Aug 2024 • Junxian Li, Di Zhang, Xunzhi Wang, Zeying Hao, Jingdi Lei, Qian Tan, Cai Zhou, Wei Liu, Yaotian Yang, Xinrui Xiong, Weiyun Wang, Zhe Chen, Wenhai Wang, Wei Li, Shufei Zhang, Mao Su, Wanli Ouyang, Yuqiang Li, Dongzhan Zhou
We benchmark ChemVLM against a range of open-source and proprietary multimodal large language models on various tasks.
no code implementations • 16 Apr 2024 • Ruifeng Li, Dongzhan Zhou, Ancheng Shen, Ao Zhang, Mao Su, Mingqian Li, Hongyang Chen, Gang Chen, Yin Zhang, Shufei Zhang, Yuqiang Li, Wanli Ouyang
Overall, our work illustrates the benefits and potential of using PEMAL in AIDD and other scenarios with data scarcity and noise.
1 code implementation • 10 Feb 2024 • Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, Dongzhan Zhou, Shufei Zhang, Mao Su, Han-sen Zhong, Yuqiang Li
However, the community lacks an LLM specifically designed for chemistry.
no code implementations • 18 Feb 2022 • Chenru Jiang, Kaizhu Huang, Shufei Zhang, Jimin Xiao, Zhenxing Niu, Amir Hussain
In this paper, we focus on the task of precise keypoint coordinate regression.
no code implementations • 29 Sep 2021 • Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Bin Gu, Huan Xiong, Xinping Yi
This is possibly because conventional adversarial training methods usually generate adversarial perturbations in a supervised way, so the adversarial samples are highly biased towards the decision boundary, resulting in an inhomogeneous data distribution.
1 code implementation • 8 Jul 2021 • Zhuang Qian, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Rui Zhang, Xinping Yi
The proposed Adversarial Training with Latent Distribution (ATLD) method defends against adversarial attacks by crafting Latent Manifold Adversarial Examples (LMAEs) on the latent manifold in an unsupervised manner.
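As a rough illustration of the idea (not the paper's exact ATLD procedure), the sketch below crafts adversarial examples without using labels by pushing an input's latent representation as far as possible from its clean counterpart; the encoder, the feature distance, and all hyperparameters are assumed placeholders.

```python
# Hedged sketch: label-free adversarial example crafting in a latent space.
# Encoder, distance measure, and hyperparameters are illustrative assumptions,
# not the ATLD formulation from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
head = nn.Linear(64, 10)
eps, alpha, steps = 0.3, 0.05, 5

def craft_latent_adv(x):
    """Perturb x to maximally displace its latent representation (no labels used)."""
    with torch.no_grad():
        z_clean = encoder(x)
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        dist = (encoder(x + delta) - z_clean).pow(2).sum()
        grad = torch.autograd.grad(dist, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

x = torch.rand(16, 1, 28, 28)
x_adv = craft_latent_adv(x)                       # adversarial examples, crafted without labels
logits_clean, logits_adv = head(encoder(x)), head(encoder(x_adv))
consistency = F.kl_div(                           # one possible unsupervised training signal
    logits_adv.log_softmax(dim=1), logits_clean.softmax(dim=1), reduction="batchmean"
)
```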
1 code implementation • ICCV 2021 • Zhiqiang Gao, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Chaoliang Zhong
In particular, we show that the distribution discrepancy can be reduced by constraining the feature gradients of the two domains to have similar distributions.
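One hedged way to picture this constraint: compute the gradients of the task loss with respect to the extracted features for a source batch and a target batch, then penalize a discrepancy between the two gradient distributions. The networks, the simple moment-matching discrepancy, and the loss weight below are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch: aligning the distributions of feature gradients across two domains.
# The moment-matching discrepancy and target pseudo-labels are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, 10)
opt = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-3
)

def feature_gradients(x, y):
    """Return features, the gradient of the task loss w.r.t. them, and the loss."""
    feats = feature_extractor(x)
    loss = F.cross_entropy(classifier(feats), y)
    grads = torch.autograd.grad(loss, feats, create_graph=True)[0]
    return feats, grads, loss

def moment_discrepancy(a, b):
    """Crude distribution discrepancy: match the mean and variance of two gradient batches."""
    return (a.mean(0) - b.mean(0)).pow(2).sum() + (a.var(0) - b.var(0)).pow(2).sum()

x_src, y_src = torch.randn(8, 32), torch.randint(0, 10, (8,))
x_tgt, y_tgt = torch.randn(8, 32), torch.randint(0, 10, (8,))  # y_tgt stands in for pseudo-labels

_, g_src, task_loss = feature_gradients(x_src, y_src)
_, g_tgt, _ = feature_gradients(x_tgt, y_tgt)
total = task_loss + 0.1 * moment_discrepancy(g_src, g_tgt)     # task loss + gradient alignment
opt.zero_grad()
total.backward()
opt.step()
```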
no code implementations • ICLR 2020 • Shufei Zhang, Zhuang Qian, Kai-Zhu Huang, Jimin Xiao, Yuan He
Generative adversarial networks (GANs) are powerful generative models, but they usually suffer from instability and poor generalization, which may lead to low-quality generations.
no code implementations • 15 Nov 2019 • Shufei Zhang, Kai-Zhu Huang, Zenglin Xu
We propose an energy function to describe model stability and prove that reducing this energy guarantees robustness against adversarial examples.
no code implementations • ICLR 2019 • Shufei Zhang, Kai-Zhu Huang, Rui Zhang, Amir Hussain
In this paper, we propose a generalized framework that addresses the learning problem of adversarial examples with Riemannian geometry.
no code implementations • 16 Jul 2018 • Shufei Zhang, Kai-Zhu Huang, Jianke Zhu, Yang Liu
Existing adversarial training methods consider only how the worst-case perturbed examples (i.e., adversarial examples) affect the model output.
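For context, here is a minimal sketch of the standard adversarial training loop this excerpt critiques, in which only the (approximately) worst-case perturbed examples enter the training objective; the model, epsilon, and step sizes are illustrative placeholders.

```python
# Minimal sketch of standard (PGD-style) adversarial training: only the worst-case
# perturbed examples drive the parameter update. Model and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
eps, alpha, steps = 0.3, 0.05, 7

def pgd_attack(x, y):
    """Find an (approximately) worst-case perturbation within an L-infinity ball."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))
delta = pgd_attack(x, y)
loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)  # train only on adversarial examples
opt.zero_grad()
loss.backward()
opt.step()
```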