Search Results for author: Ruijie Yang

Found 5 papers, 2 papers with code

Common Knowledge Learning for Generating Transferable Adversarial Examples

no code implementations • 1 Jul 2023 • Ruijie Yang, Yuanfang Guo, Junfu Wang, Jiantao Zhou, Yunhong Wang

Specifically, to reduce the model-specific features and obtain better output distributions, we construct a multi-teacher framework, where the knowledge is distilled from different teacher architectures into one student network.
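The multi-teacher distillation idea described above can be sketched as follows. This is a minimal, generic sketch assuming PyTorch; the averaging of teacher distributions, the temperature value, and the function name are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, teacher_logits_list, T=4.0):
    """KL divergence between the student and the averaged softened
    outputs of several teachers (generic sketch, not the paper's loss)."""
    # Soften each teacher's output distribution with temperature T, then average
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    # Standard temperature-scaled distillation objective
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * T * T

# Example: three hypothetical teacher architectures, one student
student = torch.randn(8, 10, requires_grad=True)
teachers = [torch.randn(8, 10) for _ in range(3)]
loss = multi_teacher_distill_loss(student, teachers)
loss.backward()
```

Averaging the teachers' softened distributions is one simple way to pool knowledge from different architectures into a single soft target for the student.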

Global Contrast Masked Autoencoders Are Powerful Pathological Representation Learners

1 code implementation • 18 May 2022 • Hao Quan, Xingyu Li, Weixing Chen, Qun Bai, Mingchen Zou, Ruijie Yang, Tingting Zheng, Ruiqun Qi, Xinghua Gao, Xiaoyu Cui

Based on digital pathology slice scanning technology, artificial intelligence algorithms represented by deep learning have achieved remarkable results in the field of computational pathology.

Computed Tomography (CT) • Self-Supervised Learning • +1

iDARTS: Improving DARTS by Node Normalization and Decorrelation Discretization

no code implementations • 25 Aug 2021 • Huiqun Wang, Ruijie Yang, Di Huang, Yunhong Wang

Differentiable ARchiTecture Search (DARTS) uses a continuous relaxation of the network representation and dramatically accelerates Neural Architecture Search (NAS), by nearly a thousandfold in terms of GPU-days.
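The continuous relaxation at the core of DARTS replaces a discrete choice of operation on each edge with a softmax-weighted mixture. The sketch below assumes PyTorch; the candidate operation set and class name are illustrative, and this shows plain DARTS-style mixing rather than iDARTS's normalization and discretization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style continuous relaxation: each edge computes a
    softmax-weighted sum of candidate operations (illustrative set)."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.AvgPool2d(3, stride=1, padding=1),         # average pooling
        ])
        # One architecture parameter per candidate op, learned jointly
        # with the network weights via gradient descent
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

x = torch.randn(2, 8, 16, 16)
out = MixedOp(8)(x)
```

Because the mixture weights are differentiable, the architecture parameters can be optimized with the same backpropagation machinery as the network weights, which is what makes the search so much cheaper than discrete NAS.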

Neural Architecture Search

Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy

1 code implementation • 16 Aug 2021 • Ruikui Wang, Yuanfang Guo, Ruijie Yang, Yunhong Wang

In this paper, we explore effective mechanisms to boost both the transferability and the robustness of adversarial perturbations from the perspective of network hierarchy, where a typical network can be hierarchically divided into an output stage, an intermediate stage, and an input stage.
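One common way to exploit the intermediate stage of a network when crafting perturbations is to attack its feature maps directly rather than the final logits. The sketch below is a generic feature-level iterative attack assuming PyTorch, given as a hedged illustration of hierarchy-aware attacks; the function name, layer choice, and loss are assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

def feature_level_attack(model, feature_layer, x, eps=8 / 255, steps=10):
    """Iterative sign-gradient attack that pushes an intermediate-stage
    feature map away from its clean value (generic sketch)."""
    feats = {}
    hook = feature_layer.register_forward_hook(
        lambda m, inp, out: feats.update(out=out))
    model(x)
    clean_feat = feats["out"].detach()  # reference features of the clean input
    x_adv = x.clone()
    alpha = eps / steps
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        model(x_adv)
        # Maximize the distance between adversarial and clean features
        loss = (feats["out"] - clean_feat).pow(2).mean()
        loss.backward()
        step = x_adv + alpha * x_adv.grad.sign()
        # Project back into the eps-ball around the clean input
        x_adv = torch.max(torch.min(step, x + eps), x - eps)
    hook.remove()
    return x_adv.detach()

# Usage with a tiny illustrative model
model = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(4, 2, 3, padding=1))
x = torch.rand(1, 3, 8, 8)
x_adv = feature_level_attack(model, model[0], x)
```

Intermediate-feature objectives are a standard route to more transferable perturbations, since mid-level features tend to be shared across architectures more than final-layer decision boundaries.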

A Perceptual Distortion Reduction Framework: Towards Generating Adversarial Examples with High Perceptual Quality and Attack Success Rate

no code implementations • 1 May 2021 • Ruijie Yang, Yunhong Wang, Ruikui Wang, Yuanfang Guo

This portion of the distortion, which is induced by unnecessary modifications and the lack of a proper perceptual distortion constraint, is the target of the proposed framework.

Adversarial Attack
