Search Results for author: Jingtong Su

Found 3 papers, 2 papers with code

Can we achieve robustness from data alone?

1 code implementation • 24 Jul 2022 • Nikolaos Tsilivis, Jingtong Su, Julia Kempe

Adversarial training and its variants have come to be the prevailing methods to achieve adversarially robust classification using neural networks.

Meta-Learning • Robust classification

Domain-wise Adversarial Training for Out-of-Distribution Generalization

no code implementations • 29 Sep 2021 • Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove the domain-varying features and improve OOD generalization on both correlation shift and diversity shift tasks.

Out-of-Distribution Generalization

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

1 code implementation • NeurIPS 2020 • Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Li-Wei Wang, Jason D. Lee

In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) A set of methods which aims to find good subnetworks of the randomly-initialized network (which we call "initial tickets") hardly exploits any information from the training data; (2) For the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights unchanged per layer, does not affect the final performance (see the sketch after this entry).

Network Pruning
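
The per-layer check described in the abstract above can be read as reshuffling each layer's pruning mask at random while keeping that layer's number of preserved weights fixed. The following is a minimal sketch of that idea, not the authors' code: PyTorch is assumed, and the function name shuffle_masks_per_layer and the masks dictionary are hypothetical illustrations.

    import torch

    def shuffle_masks_per_layer(masks, generator=None):
        # Randomly rearrange each layer's binary pruning mask while keeping
        # the number of preserved (non-zero) entries in that layer unchanged.
        # `masks` is assumed to map layer names to 0/1 tensors shaped like
        # the corresponding weight tensors.
        shuffled = {}
        for name, mask in masks.items():
            flat = mask.flatten()
            perm = torch.randperm(flat.numel(), generator=generator)
            shuffled[name] = flat[perm].reshape(mask.shape)
        return shuffled

    # Toy usage: two layers at roughly 50% per-layer sparsity.
    masks = {
        "fc1.weight": (torch.rand(128, 784) > 0.5).float(),
        "fc2.weight": (torch.rand(10, 128) > 0.5).float(),
    }
    new_masks = shuffle_masks_per_layer(masks)
    # The per-layer count of preserved weights is unchanged by the shuffle.
    assert all(new_masks[k].sum() == masks[k].sum() for k in masks)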
