Search Results for author: Xingrui Yu

Found 8 papers, 4 papers with code

Learning Efficient Planning-based Rewards for Imitation Learning

no code implementations • 1 Jan 2021 • Xingrui Yu, Yueming Lyu, Ivor Tsang

Our method learns useful planning computations with a meaningful reward function that focuses on the region an agent reaches after executing an action.

Atari Games • Continuous Control • +2
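
The abstract above describes a reward that depends on the region an agent lands in after executing an action. Below is a minimal sketch of that idea, assuming a PyTorch setup: a small reward network scores successor states rather than state-action pairs. The class name, architecture, and dimensions are illustrative assumptions, not the paper's implementation.

    # Illustrative sketch only: a reward model that scores the successor state
    # reached after an action, rather than the (state, action) pair itself.
    # Names and architecture are assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn

    class SuccessorStateReward(nn.Module):
        def __init__(self, state_dim, hidden_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, next_state):
            # The reward depends only on where the action landed the agent.
            return self.net(next_state).squeeze(-1)

    # Usage: score a placeholder batch of successor states.
    reward_fn = SuccessorStateReward(state_dim=8)
    next_states = torch.randn(32, 8)
    rewards = reward_fn(next_states)   # shape: (32,)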

A Simple Sparse Denoising Layer for Robust Deep Learning

no code implementations • 1 Jan 2021 • Yueming Lyu, Xingrui Yu, Ivor Tsang

In this work, we take an initial step toward designing a simple robust layer as a lightweight plug-in for vanilla deep models.

Denoising • Dictionary Learning • +1
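
The entry above describes a simple robust layer used as a lightweight plug-in for vanilla deep models. Here is a minimal sketch of one common way such a layer could look, assuming soft-thresholding (sparse shrinkage) of activations in PyTorch; the learnable per-feature threshold and its placement are assumptions, not the paper's exact design.

    # Illustrative sketch only: a drop-in layer that sparsifies activations by
    # soft-thresholding, one simple way to realize a "sparse denoising" plug-in.
    # The learnable threshold is an assumption, not the paper's exact design.
    import torch
    import torch.nn as nn

    class SoftThresholdDenoise(nn.Module):
        def __init__(self, num_features, init_threshold=0.1):
            super().__init__()
            # One non-negative threshold per feature, learned with the rest of the model.
            self.log_threshold = nn.Parameter(
                torch.full((num_features,), float(init_threshold)).log()
            )

        def forward(self, x):
            t = self.log_threshold.exp()
            # Soft-thresholding: shrink small (likely noisy) activations to zero.
            return torch.sign(x) * torch.relu(x.abs() - t)

    # Plugging the layer into a vanilla model:
    model = nn.Sequential(
        nn.Linear(784, 256),
        SoftThresholdDenoise(256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )
    out = model(torch.randn(4, 784))   # shape: (4, 10)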

Intrinsic Reward Driven Imitation Learning via Generative Model

1 code implementation • ICML 2020 • Xingrui Yu, Yueming Lyu, Ivor W. Tsang

Thus, our module provides the imitation agent with both the intrinsic intention of the demonstrator and a better exploration ability, which is critical for the agent to outperform the demonstrator.

Atari Games • Imitation Learning • +1
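
The module above derives an intrinsic reward from a generative model of the demonstrator. Below is a minimal sketch of that flavor of reward, assuming a prediction-error formulation: a generative forward model predicts the next state from the current state and action, and its prediction error is used as the intrinsic reward. The architecture and the error-as-reward rule are assumptions in the spirit of the abstract, not the paper's exact module.

    # Illustrative sketch only: a generative forward model whose prediction error
    # on the next state serves as an intrinsic reward. Architecture and the
    # error-as-reward rule are assumptions, not the exact module from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ForwardGenerator(nn.Module):
        """Predicts (generates) the next state from the current state and action."""
        def __init__(self, state_dim, action_dim, hidden_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, state_dim),
            )

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    def intrinsic_reward(model, state, action, next_state):
        # Per-transition prediction error of the generative model, used as reward.
        with torch.no_grad():
            pred = model(state, action)
            return F.mse_loss(pred, next_state, reduction="none").mean(dim=-1)

    # Usage on a placeholder batch of transitions:
    gen = ForwardGenerator(state_dim=8, action_dim=2)
    s, a, s_next = torch.randn(16, 8), torch.randn(16, 2), torch.randn(16, 8)
    r_int = intrinsic_reward(gen, s, a, s_next)   # shape: (16,)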

Deep Adversarial Learning in Intrusion Detection: A Data Augmentation Enhanced Framework

no code implementations • 23 Jan 2019 • He Zhang, Xingrui Yu, Peng Ren, Chunbo Luo, Geyong Min

The novelty of the proposed framework lies in incorporating deep adversarial learning with statistical learning and exploiting learning-based data augmentation.

Data Augmentation • Network Intrusion Detection
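
The framework above combines adversarial learning with data augmentation for intrusion detection. Below is a minimal sketch of the augmentation step, assuming a GAN-style generator (trained separately) is used to synthesize extra flow feature vectors that are appended to the training set; the generator layout, the 41-feature input size, and the augmentation rule are illustrative assumptions, not the paper's framework.

    # Illustrative sketch only: augmenting a network-flow training set with
    # samples drawn from a (pre-trained) GAN generator. The generator layout and
    # augmentation step are assumptions, not the framework from the paper.
    import torch
    import torch.nn as nn

    class FlowGenerator(nn.Module):
        """Maps random noise to synthetic flow feature vectors."""
        def __init__(self, noise_dim=32, feature_dim=41):  # 41 features is an assumption
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(noise_dim, 64),
                nn.ReLU(),
                nn.Linear(64, feature_dim),
            )

        def forward(self, z):
            return self.net(z)

    def augment(real_features, real_labels, generator, n_synth, synth_label, noise_dim=32):
        # Generate n_synth synthetic samples (e.g., for a rare attack class)
        # and append them to the real training data.
        with torch.no_grad():
            synth = generator(torch.randn(n_synth, noise_dim))
        labels = torch.full((n_synth,), synth_label, dtype=real_labels.dtype)
        return torch.cat([real_features, synth]), torch.cat([real_labels, labels])

    # Usage with placeholder data (generator assumed already trained adversarially):
    gen = FlowGenerator()
    X, y = torch.randn(100, 41), torch.zeros(100, dtype=torch.long)
    X_aug, y_aug = augment(X, y, gen, n_synth=50, synth_label=1)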

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels

no code implementations • 27 Sep 2018 • Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama

To handle these issues, by exploiting the memorization effect of deep neural networks, we may train deep neural networks on the whole dataset only during the first few iterations.

Memorization
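
The abstract above appeals to the memorization effect: deep networks can be trained on the whole (noisy) dataset only during the first few iterations. Below is a minimal sketch of one generic way to act on that observation, assuming a warm-up on all samples followed by small-loss selection; this illustrates the memorization effect in general, not the Pumpout procedure itself.

    # Illustrative sketch only: exploiting the memorization effect by training on
    # every sample for the first few epochs, then keeping only the small-loss
    # fraction of each mini-batch. This is a generic pattern, not Pumpout itself.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def train_epoch(model, optimizer, loader, epoch, warmup_epochs=2, keep_ratio=0.7):
        model.train()
        for x, y in loader:
            logits = model(x)
            losses = F.cross_entropy(logits, y, reduction="none")
            if epoch < warmup_epochs:
                loss = losses.mean()                     # whole mini-batch, early on
            else:
                k = max(1, int(keep_ratio * len(losses)))
                small_loss, _ = torch.topk(losses, k, largest=False)
                loss = small_loss.mean()                 # likely-clean samples only
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # Usage with placeholder data:
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    data = torch.utils.data.TensorDataset(torch.randn(256, 20), torch.randint(0, 3, (256,)))
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)
    for epoch in range(5):
        train_epoch(model, opt, loader, epoch)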

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

5 code implementations • NeurIPS 2018 • Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that, sooner or later during training, they can completely memorize these noisy labels.

Learning with noisy labels • Memorization
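
Co-teaching trains two networks simultaneously: in each mini-batch, each network selects its small-loss samples, and its peer network is updated on them. Below is a minimal sketch of that cross-update step in PyTorch, assuming a fixed keep ratio; the paper's schedule R(t) for the ratio and other training details are simplified away.

    # Sketch of the co-teaching cross-update: each network picks its small-loss
    # samples in the mini-batch and the peer network is updated on them. The keep
    # ratio is fixed here for simplicity; see the paper for the exact R(t) schedule.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def small_loss_indices(model, x, y, keep_ratio):
        with torch.no_grad():
            losses = F.cross_entropy(model(x), y, reduction="none")
        k = max(1, int(keep_ratio * len(losses)))
        return torch.topk(losses, k, largest=False).indices

    def coteach_step(model_f, model_g, opt_f, opt_g, x, y, keep_ratio):
        idx_f = small_loss_indices(model_f, x, y, keep_ratio)  # what f trusts
        idx_g = small_loss_indices(model_g, x, y, keep_ratio)  # what g trusts

        # Each network learns from the samples its peer selected.
        loss_f = F.cross_entropy(model_f(x[idx_g]), y[idx_g])
        opt_f.zero_grad()
        loss_f.backward()
        opt_f.step()

        loss_g = F.cross_entropy(model_g(x[idx_f]), y[idx_f])
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    # Usage on one placeholder mini-batch:
    f = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    g = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    opt_f = torch.optim.SGD(f.parameters(), lr=0.1)
    opt_g = torch.optim.SGD(g.parameters(), lr=0.1)
    x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
    coteach_step(f, g, opt_f, opt_g, x, y, keep_ratio=0.7)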
