Search Results for author: Minjing Dong

Found 15 papers, 6 papers with code

Imitation Learning from Purified Demonstration

no code implementations11 Oct 2023 Yunke Wang, Minjing Dong, Bo Du, Chang Xu

To tackle these problems, we propose to purify imperfect demonstrations by removing potential perturbations and then perform imitation learning on the purified demonstrations.

Imitation Learning
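
As a rough illustration of the two-stage idea in the entry above, the sketch below performs plain behavioral cloning on purified (state, action) pairs in PyTorch. The purifier here is a hypothetical placeholder network; the paper's actual purification model and imitation-learning setup are not reproduced.

    import torch
    import torch.nn as nn

    # Hypothetical placeholder: any model that maps perturbed demonstration
    # states to purified states could be plugged in here.
    purifier = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 8))

    # Simple behavioral-cloning policy trained on purified states.
    policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # Toy imperfect demonstrations: (state, action) pairs with perturbed states.
    states = torch.randn(256, 8)
    actions = torch.randn(256, 2)

    for _ in range(100):
        with torch.no_grad():
            purified_states = purifier(states)   # stage 1: purify demonstrations
        pred_actions = policy(purified_states)   # stage 2: imitate purified data
        loss = nn.functional.mse_loss(pred_actions, actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()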

Parameter-Saving Adversarial Training: Reinforcing Multi-Perturbation Robustness via Hypernetworks

no code implementations28 Sep 2023 Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu

Adversarial training serves as one of the most popular and effective methods to defend against adversarial perturbations.
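
For context on the entry above, here is a minimal sketch of standard PGD-based adversarial training (the general technique, not the paper's hypernetwork-based parameter-saving scheme); the model, data, and hyperparameters are toy placeholders.

    import torch
    import torch.nn as nn

    def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        """Generate L-infinity PGD adversarial examples for a batch (x, y)."""
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = nn.functional.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    # Toy model and data; adversarial training minimizes loss on perturbed inputs.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))

    x_adv = pgd_attack(model, x, y)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()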

Stealthy Physical Masked Face Recognition Attack via Adversarial Style Optimization

no code implementations18 Sep 2023 Huihui Gong, Minjing Dong, Siqi Ma, Seyit Camtepe, Surya Nepal, Chang Xu

Moreover, to mitigate the sub-optimality of relying on a single fixed style, we propose to discover the optimal style for a given target through style optimization in a continuous relaxation manner.

Face Recognition
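
The "continuous relaxation" mentioned above can be illustrated, very loosely, by relaxing a discrete choice among candidate styles into a softmax-weighted mixture whose weights are optimized by gradient descent. The style bank and attack objective below are purely hypothetical and are not the paper's construction.

    import torch

    # Hypothetical bank of K candidate style vectors (e.g., encodings of mask styles).
    K, D = 5, 16
    style_bank = torch.randn(K, D)

    # Continuous relaxation: learn logits over the K discrete styles and use the
    # softmax-weighted mixture instead of a hard, fixed choice.
    logits = torch.zeros(K, requires_grad=True)
    optimizer = torch.optim.Adam([logits], lr=0.05)

    def attack_objective(style):
        # Placeholder for the real objective (e.g., similarity to a target identity).
        return ((style - 1.0) ** 2).mean()

    for _ in range(200):
        weights = torch.softmax(logits, dim=0)
        mixed_style = weights @ style_bank      # relaxed "optimal" style
        loss = attack_objective(mixed_style)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    best_style = style_bank[torch.argmax(logits)]  # discretize after optimization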

A Benchmark Study on Calibration

no code implementations23 Aug 2023 Linwei Tao, Younan Zhu, Haolan Guo, Minjing Dong, Chang Xu

To the best of our knowledge, our research represents the first large-scale investigation into calibration properties and the first study of calibration issues within neural architecture search (NAS).

Neural Architecture Search
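
Calibration in this line of work is commonly measured with the expected calibration error (ECE); below is a minimal, self-contained sketch of the standard binned ECE estimator (not code from the benchmark itself).

    import torch

    def expected_calibration_error(probs, labels, n_bins=15):
        """Standard binned ECE: weighted average of |accuracy - confidence| per bin."""
        confidences, predictions = probs.max(dim=1)
        accuracies = predictions.eq(labels).float()
        ece = torch.zeros(1)
        bin_edges = torch.linspace(0, 1, n_bins + 1)
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            prop = in_bin.float().mean()
            if prop > 0:
                ece += prop * (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
        return ece.item()

    # Toy usage with random logits.
    logits = torch.randn(1000, 10)
    labels = torch.randint(0, 10, (1000,))
    print(expected_calibration_error(torch.softmax(logits, dim=1), labels))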

Neural Architecture Retrieval

1 code implementation16 Jul 2023 Xiaohuan Pei, Yanxi Li, Minjing Dong, Chang Xu

With the growing number of new neural architecture designs on top of a substantial body of existing architectures, it becomes difficult for researchers to situate their contributions relative to prior architectures or to establish connections between their designs and other relevant ones.

Contrastive Learning · Graph Representation Learning · +1

Dual Focal Loss for Calibration

1 code implementation23 May 2023 Linwei Tao, Minjing Dong, Chang Xu

While different variants of focal loss have been explored, it is difficult to find a balance between over-confidence and under-confidence.
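
For reference, the standard focal loss that these variants build on is sketched below; the paper's dual focal loss modifies this formulation and is not reproduced here.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0):
        """Standard focal loss: cross-entropy down-weighted by (1 - p_t) ** gamma."""
        log_probs = F.log_softmax(logits, dim=1)
        log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-(1 - pt) ** gamma * log_pt).mean()

    # Toy usage.
    logits = torch.randn(8, 10, requires_grad=True)
    targets = torch.randint(0, 10, (8,))
    focal_loss(logits, targets).backward()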

Calibrating a Deep Neural Network with Its Predecessors

1 code implementation13 Feb 2023 Linwei Tao, Minjing Dong, Daochang Liu, Changming Sun, Chang Xu

However, early stopping, as a well-known technique to mitigate overfitting, fails to calibrate networks.

Adversarial Robustness via Random Projection Filters

1 code implementation CVPR 2023 Minjing Dong, Chang Xu

Deep Neural Networks show superior performance in various tasks but are vulnerable to adversarial attacks.

Adversarial Robustness · Attribute · +1
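
As a hedged sketch of the general idea suggested by the title above, the layer below mixes learnable convolution filters with frozen Gaussian filters, so part of the output is a random projection of local input patches. This is an assumption-laden illustration, not the paper's exact construction.

    import torch
    import torch.nn as nn

    class RandomProjectionConv(nn.Module):
        """Sketch: a conv layer in which a fraction of the output channels come from
        random Gaussian filters that are never trained (a random projection of the
        input patches), while the remaining channels stay learnable."""

        def __init__(self, in_ch, out_ch, kernel_size=3, rp_ratio=0.5):
            super().__init__()
            self.padding = kernel_size // 2
            rp_ch = int(out_ch * rp_ratio)
            self.learned = nn.Conv2d(in_ch, out_ch - rp_ch, kernel_size, padding=self.padding)
            rp_weight = torch.randn(rp_ch, in_ch, kernel_size, kernel_size)
            rp_weight /= (in_ch * kernel_size * kernel_size) ** 0.5
            self.register_buffer("rp_weight", rp_weight)  # frozen random filters

        def forward(self, x):
            learned_out = self.learned(x)
            rp_out = nn.functional.conv2d(x, self.rp_weight, padding=self.padding)
            return torch.cat([learned_out, rp_out], dim=1)

    # Toy usage.
    layer = RandomProjectionConv(3, 16)
    print(layer(torch.rand(2, 3, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])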

Boosting Semi-Supervised Semantic Segmentation with Probabilistic Representations

1 code implementation26 Oct 2022 Haoyu Xie, Changqi Wang, Mingkai Zheng, Minjing Dong, Shan You, Chong Fu, Chang Xu

In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space.

Contrastive Learning · Semi-Supervised Semantic Segmentation
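
The deterministic pixel-wise contrastive setup described above can be sketched as an InfoNCE loss over per-pixel embeddings; the paper's probabilistic representations replace these point embeddings with distributions, which is not shown here.

    import torch
    import torch.nn.functional as F

    def pixel_infonce(anchor, positive, negatives, temperature=0.1):
        """InfoNCE over per-pixel embeddings.
        anchor, positive: (N, D) pixel embeddings; negatives: (N, K, D)."""
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)
        pos_logit = (anchor * positive).sum(-1, keepdim=True)          # (N, 1)
        neg_logits = torch.einsum("nd,nkd->nk", anchor, negatives)     # (N, K)
        logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
        labels = torch.zeros(anchor.size(0), dtype=torch.long)         # positive is index 0
        return F.cross_entropy(logits, labels)

    # Toy usage: 128 pixels, 64-dim embeddings, 16 negatives per pixel.
    loss = pixel_infonce(torch.randn(128, 64), torch.randn(128, 64), torch.randn(128, 16, 64))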

An Empirical Study of Adder Neural Networks for Object Detection

no code implementations NeurIPS 2021 Xinghao Chen, Chang Xu, Minjing Dong, Chunjing Xu, Yunhe Wang

Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations, which are more energy efficient than traditional convolutional neural networks built with multiplications.

Autonomous Driving · Face Detection · +3
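
The addition-only operation behind AdderNets can be sketched as replacing the cross-correlation of a convolution with a negative L1 distance between each input patch and each filter. The unfold-based version below is a readability-first sketch, not an efficient adder kernel.

    import torch
    import torch.nn as nn

    def adder2d(x, weight, stride=1, padding=1):
        """Adder operation: output = -sum |patch - filter| (additions only in concept;
        this unfold-based version is just a readable sketch)."""
        n, c, h, w = x.shape
        out_ch, _, k, _ = weight.shape
        patches = nn.functional.unfold(x, k, stride=stride, padding=padding)  # (N, C*k*k, L)
        w_flat = weight.view(out_ch, -1)                                      # (O, C*k*k)
        # L1 distance between every patch and every filter, negated.
        dists = (patches.unsqueeze(1) - w_flat.unsqueeze(0).unsqueeze(-1)).abs().sum(2)
        out_h = (h + 2 * padding - k) // stride + 1
        out_w = (w + 2 * padding - k) // stride + 1
        return -dists.view(n, out_ch, out_h, out_w)

    # Toy usage.
    x = torch.rand(2, 3, 8, 8)
    weight = torch.rand(4, 3, 3, 3)
    print(adder2d(x, weight).shape)  # torch.Size([2, 4, 8, 8])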

Towards Stable and Robust AdderNets

no code implementations NeurIPS 2021 Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu

Adder neural networks (AdderNets) replace the massive multiplications in original convolutions with cheap additions while achieving comparable performance, thereby yielding a series of energy-efficient neural networks.

Adversarial Robustness

Handling Long-tailed Feature Distribution in AdderNets

no code implementations NeurIPS 2021 Minjing Dong, Yunhe Wang, Xinghao Chen, Chang Xu

Adder neural networks (ANNs) are designed for low energy cost; they replace the expensive multiplications in convolutional neural networks (CNNs) with cheaper additions to yield energy-efficient neural networks and hardware acceleration.

Knowledge Distillation

Adversarially Robust Neural Architectures

no code implementations2 Sep 2020 Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu

We explore the relationship among adversarial robustness, the Lipschitz constant, and architecture parameters, and show that an appropriate constraint on the architecture parameters can reduce the Lipschitz constant and further improve robustness.

Adversarial Attack · Adversarial Robustness
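
The link between architecture choices and the Lipschitz constant mentioned above rests on the standard fact that a composition of linear layers and 1-Lipschitz activations has a Lipschitz constant bounded by the product of the layers' spectral norms; a minimal sketch of that bound is below (not the paper's architecture-parameter constraint).

    import torch
    import torch.nn as nn

    def lipschitz_upper_bound(model):
        """Upper-bound the Lipschitz constant of a feed-forward stack of Linear
        layers and 1-Lipschitz activations (e.g., ReLU) by the product of the
        layers' spectral norms."""
        bound = 1.0
        for module in model.modules():
            if isinstance(module, nn.Linear):
                bound *= torch.linalg.matrix_norm(module.weight.detach(), ord=2).item()
        return bound

    # Toy usage: a small fully connected network.
    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    print(lipschitz_upper_bound(net))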
