no code implementations • ICML 2020 • Yanxi Li, Minjing Dong, Yunhe Wang, Chang Xu
This paper searches for the optimal neural architecture by minimizing a proxy of validation loss.
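A minimal sketch of the general idea, assuming a differentiable search space where continuous architecture parameters are updated to minimize a proxy of the validation loss; the `model(x, alpha)` interface, optimizers, and loaders are hypothetical placeholders, not the paper's implementation.

```python
import torch

def search_step(model, alpha, w_opt, a_opt, train_batch, val_batch, loss_fn):
    # Hypothetical setup: model(x, alpha) mixes candidate operations weighted by alpha.
    # 1) Update network weights on the training loss.
    x_tr, y_tr = train_batch
    w_opt.zero_grad()
    loss_fn(model(x_tr, alpha), y_tr).backward()
    w_opt.step()

    # 2) Update architecture parameters on a proxy of the validation loss.
    x_va, y_va = val_batch
    a_opt.zero_grad()
    loss_fn(model(x_va, alpha), y_va).backward()
    a_opt.step()
```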
1 code implementation • 16 Aug 2024 • Hefei Mei, Minjing Dong, Chang Xu
To alleviate this issue, we redesign the diffusion framework from generating high-quality images to predicting distinguishable image labels.
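One hedged way to read this: instead of sampling images, a class-conditional denoiser can score each candidate label by how well it predicts the noise added to an input, and the label with the smallest denoising error wins. The `denoiser` interface below is a hypothetical placeholder, not the authors' architecture.

```python
import torch

@torch.no_grad()
def classify_with_denoiser(denoiser, x, num_classes, timestep, alpha_bar):
    # alpha_bar: cumulative product of (1 - beta) at `timestep`, a scalar tensor.
    noise = torch.randn_like(x)
    x_t = alpha_bar.sqrt() * x + (1 - alpha_bar).sqrt() * noise
    errors = []
    for c in range(num_classes):
        label = torch.full((x.shape[0],), c, device=x.device, dtype=torch.long)
        pred_noise = denoiser(x_t, timestep, label)  # hypothetical signature
        errors.append((pred_noise - noise).pow(2).flatten(1).mean(dim=1))
    # Pick the label under which the conditional denoising error is smallest.
    return torch.stack(errors, dim=1).argmin(dim=1)
```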
1 code implementation • 26 Jul 2024 • Yuheng Shi, Minjing Dong, Mingjia Li, Chang Xu
Recently, State Space Duality (SSD), an improved variant of SSMs, was introduced in Mamba2 to enhance model performance and efficiency.
1 code implementation • 23 May 2024 • Yuheng Shi, Minjing Dong, Chang Xu
To improve the performance of SSMs in vision tasks, a multi-scan strategy is widely adopted, but it introduces significant redundancy into the SSMs.
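For context, the multi-scan strategy typically flattens the 2-D feature map into 1-D sequences along several directions so that a causal SSM sees each pixel's context from multiple sides; the redundancy comes from running the SSM once per direction. A minimal sketch of a four-direction scan:

```python
import torch

def four_direction_scans(x):
    """x: (B, C, H, W) feature map -> four (B, L, C) sequences, one per scan direction."""
    B, C, H, W = x.shape
    fwd = x.flatten(2).transpose(1, 2)                    # row-major scan
    bwd = fwd.flip(dims=[1])                              # reversed row-major
    col = x.transpose(2, 3).flatten(2).transpose(1, 2)    # column-major scan
    col_bwd = col.flip(dims=[1])                          # reversed column-major
    return [fwd, bwd, col, col_bwd]
```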
no code implementations • CVPR 2024 • Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu
Recognizing the challenge posed by the structural disparities between ViTs and CNNs, we introduce a novel module, input-independent random entangled self-attention (II-ReSA).
1 code implementation • 11 Oct 2023 • Yunke Wang, Minjing Dong, Yukun Zhao, Bo Du, Chang Xu
In the first step, we apply a forward diffusion process to smooth potential noises in imperfect demonstrations by introducing additional noise.
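The forward diffusion step itself is the standard DDPM noising process: a demonstration is blended with Gaussian noise according to the cumulative noise schedule, which washes out small corruptions before a learned reverse process denoises it. A minimal sketch, with the schedule taken as an assumption:

```python
import torch

def forward_diffuse(x0, t, betas):
    """Standard DDPM forward process q(x_t | x_0).

    x0: clean demonstration tensor; t: integer timestep; betas: 1-D noise schedule.
    """
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    noise = torch.randn_like(x0)
    xt = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * noise
    return xt, noise
```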
no code implementations • 28 Sep 2023 • Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu
Adversarial training serves as one of the most popular and effective methods to defend against adversarial perturbations.
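As background, the canonical form of adversarial training generates a perturbation within an L∞ ball (e.g., via PGD) and trains on the perturbed input; a minimal sketch of that baseline, not the specific defense proposed in the paper:

```python
import torch

def pgd_adversarial_step(model, loss_fn, opt, x, y, eps=8/255, alpha=2/255, steps=10):
    # Inner maximization: projected gradient descent inside the eps-ball.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer minimization: train on the adversarial example.
    opt.zero_grad()
    loss_fn(model((x + delta).clamp(0, 1)), y).backward()
    opt.step()
```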
no code implementations • 18 Sep 2023 • Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu
Moreover, to mitigate the sub-optimal results that arise from using a single fixed style, we propose to discover the optimal style for a given target through style optimization in a continuous relaxation manner.
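A hedged reading of "continuous relaxation" here: the discrete choice among candidate styles is replaced by a softmax-weighted mixture whose logits can be optimized by gradient descent. The `stylize` function and the objective below are hypothetical placeholders, not the paper's formulation.

```python
import torch

def optimize_style(stylize, objective, x, num_styles, steps=50, lr=0.1):
    # Relax the discrete style choice into softmax weights over candidates.
    logits = torch.zeros(num_styles, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        weights = logits.softmax(dim=0)
        x_styled = sum(w * stylize(x, s) for s, w in enumerate(weights))
        loss = objective(x_styled)          # placeholder target-dependent objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return logits.softmax(dim=0).argmax()   # hardened style choice
```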
no code implementations • 23 Aug 2023 • Linwei Tao, Younan Zhu, Haolan Guo, Minjing Dong, Chang Xu
To the best of our knowledge, our research represents the first large-scale investigation into calibration properties and the first study of calibration issues within NAS.
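Calibration in this context is usually measured with the expected calibration error (ECE), which bins predictions by confidence and compares average confidence with accuracy in each bin; a minimal sketch:

```python
import torch

def expected_calibration_error(logits, labels, n_bins=15):
    probs = logits.softmax(dim=1)
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    ece = torch.zeros(1)
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Weight each bin's |confidence - accuracy| gap by its share of samples.
            ece += mask.float().mean() * (conf[mask].mean() - correct[mask].mean()).abs()
    return ece.item()
```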
1 code implementation • 16 Jul 2023 • Xiaohuan Pei, Yanxi Li, Minjing Dong, Chang Xu
With the increasing number of new neural architecture designs and the substantial body of existing architectures, it becomes difficult for researchers to situate their contributions relative to existing neural architectures or to establish connections between their designs and other relevant ones.
1 code implementation • 23 May 2023 • Linwei Tao, Minjing Dong, Chang Xu
While different variants of focal loss have been explored, it is difficult to find a balance between over-confidence and under-confidence.
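For reference, the focal loss down-weights well-classified examples by a factor (1 - p_t)^gamma, and the choice of gamma drives the over-/under-confidence trade-off discussed here; a minimal sketch of the standard form:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Standard multi-class focal loss: (1 - p_t)^gamma * cross-entropy.
    log_probs = F.log_softmax(logits, dim=1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1.0 - pt).pow(gamma) * log_pt).mean()
```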
1 code implementation • 13 Feb 2023 • Linwei Tao, Minjing Dong, Daochang Liu, Changming Sun, Chang Xu
However, early stopping, as a well-known technique to mitigate overfitting, fails to calibrate networks.
1 code implementation • CVPR 2023 • Minjing Dong, Chang Xu
Deep Neural Networks show superior performance in various tasks but are vulnerable to adversarial attacks.
1 code implementation • 26 Oct 2022 • Haoyu Xie, Changqi Wang, Mingkai Zheng, Minjing Dong, Shan You, Chong Fu, Chang Xu
In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space.
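A minimal sketch of the deterministic pixel-wise contrastive setup the sentence refers to, where each pixel embedding is pulled toward a positive and pushed away from negatives with an InfoNCE-style loss; how positives and negatives are sampled is left as a placeholder.

```python
import torch
import torch.nn.functional as F

def pixel_infonce(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (N, D) pixel embeddings; negatives: (N, K, D)."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negatives = F.normalize(negatives, dim=2)
    pos_logit = (anchor * positive).sum(dim=1, keepdim=True) / temperature      # (N, 1)
    neg_logits = torch.einsum('nd,nkd->nk', anchor, negatives) / temperature    # (N, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```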
1 code implementation • International Conference on Machine Learning 2022 • Yanxi Li, Xinghao Chen, Minjing Dong, Yehui Tang, Yunhe Wang, Chang Xu
Recently, neural architectures built entirely from Multi-Layer Perceptrons (MLPs) have attracted great research interest from the computer vision community.
Ranked #500 on Image Classification on ImageNet
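For orientation, an all-MLP vision block typically alternates a token-mixing MLP (across patches) with a channel-mixing MLP (per patch), as in MLP-Mixer; a minimal sketch of such a block, not the specific architecture studied in the paper:

```python
import torch
from torch import nn

class MLPMixerBlock(nn.Module):
    """Token-mixing + channel-mixing MLPs over inputs of shape (batch, tokens, channels)."""
    def __init__(self, tokens, channels, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(nn.Linear(tokens, hidden), nn.GELU(), nn.Linear(hidden, tokens))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(nn.Linear(channels, hidden), nn.GELU(), nn.Linear(hidden, channels))

    def forward(self, x):                        # x: (B, tokens, channels)
        y = self.norm1(x).transpose(1, 2)        # mix information across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix information across channels
        return x
```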
no code implementations • NeurIPS 2021 • Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang
Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations, which are more energy efficient than traditional convolutional neural networks built with multiplications.
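The core operation of an adder network replaces the cross-correlation of a convolution with a negative L1 distance between the filter and each input patch, so the filter response needs only additions and subtractions; a minimal (unoptimized) sketch using unfold:

```python
import torch
import torch.nn.functional as F

def adder2d(x, weight, stride=1, padding=0):
    """x: (B, C, H, W); weight: (O, C, K, K). Response = -sum |patch - filter|."""
    B, C, H, W = x.shape
    O, _, K, _ = weight.shape
    patches = F.unfold(x, K, stride=stride, padding=padding)              # (B, C*K*K, L)
    w = weight.view(O, -1)                                                # (O, C*K*K)
    # Negative L1 distance between every patch and every filter.
    out = -(patches.unsqueeze(1) - w.view(1, O, -1, 1)).abs().sum(dim=2)  # (B, O, L)
    H_out = (H + 2 * padding - K) // stride + 1
    W_out = (W + 2 * padding - K) // stride + 1
    return out.view(B, O, H_out, W_out)
```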
no code implementations • NeurIPS 2021 • Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
The adder neural network (AdderNet) replaces the massive multiplications in the original convolutions with cheap additions while achieving comparable performance, thus yielding a series of energy-efficient neural networks.
no code implementations • NeurIPS 2021 • Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu
Adder neural networks (ANNs) are designed for low energy cost: they replace the expensive multiplications in convolutional neural networks (CNNs) with cheaper additions to yield energy-efficient neural networks and hardware accelerations.
no code implementations • 2 Sep 2020 • Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu
We explore the relationship among adversarial robustness, Lipschitz constant, and architecture parameters and show that an appropriate constraint on architecture parameters could reduce the Lipschitz constant to further improve the robustness.
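As a concrete handle on the quantity involved, the Lipschitz constant of a linear layer (with respect to the L2 norm) is its largest singular value, which can be estimated with a few power iterations; a minimal sketch for a weight matrix:

```python
import torch

def spectral_norm_power_iteration(weight, iters=50):
    """Estimate the largest singular value (L2 Lipschitz constant) of a linear layer."""
    W = weight.detach()
    u = torch.randn(W.shape[0], device=W.device)
    for _ in range(iters):
        v = W.t() @ u
        v = v / v.norm()
        u = W @ v
        u = u / u.norm()
    return (u @ W @ v).item()
```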