Search Results for author: Jingtong Su

Found 6 papers, 3 papers with code

On the Robustness of Neural Collapse and the Neural Collapse of Robustness

1 code implementation · 13 Nov 2023 · Jingtong Su, Ya Shi Zhang, Nikolaos Tsilivis, Julia Kempe

We further analyze the geometry of networks that are optimized to be robust against adversarial perturbations of the input, and find that Neural Collapse is a pervasive phenomenon in these cases as well, with clean and perturbed representations forming aligned simplices and giving rise to a simple, robust nearest-neighbor classifier.
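
The "simple nearest-neighbor classifier" referred to above can be read as a nearest class-mean rule on penultimate-layer features. Below is a minimal, hedged sketch of such a classifier, assuming a feature extractor taken from an (adversarially) trained network; it is illustrative only, not code from the paper.

```python
# Minimal sketch of a nearest class-mean (NCM) classifier on penultimate-layer
# features, in the spirit of Neural Collapse: each class is represented by its
# feature mean, and a sample is assigned to the closest mean.
# `feature_extractor` is an assumed placeholder for the penultimate layer of a
# trained network.
import torch


@torch.no_grad()
def class_means(feature_extractor, loader, num_classes, device="cpu"):
    """Average the features of each class over a data loader."""
    sums = None
    counts = torch.zeros(num_classes, device=device)
    for x, y in loader:
        feats = feature_extractor(x.to(device))          # (B, d)
        if sums is None:
            sums = torch.zeros(num_classes, feats.shape[1], device=device)
        sums.index_add_(0, y.to(device), feats)
        counts += torch.bincount(y.to(device), minlength=num_classes)
    return sums / counts.unsqueeze(1)                     # (C, d)


@torch.no_grad()
def ncm_predict(feature_extractor, x, means):
    """Assign each input to the class whose feature mean is closest."""
    feats = feature_extractor(x)                           # (B, d)
    dists = torch.cdist(feats, means)                      # (B, C)
    return dists.argmin(dim=1)
```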

Wavelets Beat Monkeys at Adversarial Robustness

no code implementations · 19 Apr 2023 · Jingtong Su, Julia Kempe

Replacing the front-end VOneBlock with an off-the-shelf, parameter-free Scatternet followed by simple uniform Gaussian noise achieves substantially more adversarial robustness without adversarial training.

Adversarial Attack · Adversarial Robustness
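
As a rough illustration of the recipe described above (a fixed, parameter-free scattering front end, followed by Gaussian noise and a trainable classifier head), here is a hedged sketch assuming the kymatio package's Scattering2D as the off-the-shelf transform; the noise scale, scattering depth, and head architecture are placeholder choices, not the authors' configuration.

```python
# Illustrative sketch: a parameter-free scattering front end plus Gaussian
# noise in front of a trainable classifier head. This is NOT the paper's code;
# kymatio's Scattering2D is used as one off-the-shelf scattering transform, and
# sigma / J are placeholder hyperparameters.
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D  # assumed dependency: pip install kymatio


class ScatteringFrontEndClassifier(nn.Module):
    def __init__(self, num_classes=10, J=2, image_shape=(32, 32), sigma=0.1):
        super().__init__()
        self.scattering = Scattering2D(J=J, shape=image_shape)  # fixed, no learnable params
        self.sigma = sigma
        # For J=2 and the default L=8, a 3-channel 32x32 image yields 3*81
        # scattering channels at 8x8 spatial resolution.
        in_channels = 3 * 81
        self.head = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        s = self.scattering(x)                             # (B, 3, 81, 8, 8)
        s = s.view(s.size(0), -1, s.size(-2), s.size(-1))  # merge color and scattering channels
        s = s + self.sigma * torch.randn_like(s)           # uniform-variance Gaussian noise
        return self.head(s)
```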

On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization

no code implementations · 18 Dec 2022 · Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.

Diversity · Out-of-Distribution Generalization
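
The abstract above does not spell out DAT itself; as a loose illustration of what "domain-wise" adversarial training could look like, the sketch below maintains one shared adversarial perturbation per training domain and trains the model against it. Treat the shared-perturbation reading, the update rule, and all hyperparameters as assumptions for illustration, not the authors' algorithm.

```python
# Loose, illustrative sketch of "domain-wise" adversarial training: instead of a
# per-example perturbation, each training domain gets one shared perturbation
# that is updated to maximize the loss, which the model then learns to resist.
# This is an assumed reading for illustration only, not the authors' DAT;
# step sizes, epsilon, and the update rule are placeholders.
import torch
import torch.nn.functional as F


def domain_wise_adv_step(model, optimizer, batches_by_domain, deltas,
                         eps=8 / 255, alpha=2 / 255):
    """One training step over a dict {domain_id: (x, y)} with shared per-domain deltas."""
    model.train()
    total_loss = 0.0
    for d, (x, y) in batches_by_domain.items():
        delta = deltas[d].detach().requires_grad_(True)    # one delta per domain
        loss_adv = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss_adv, delta)
        # Ascent step on the shared domain perturbation, then project to the eps-ball.
        deltas[d] = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        total_loss = total_loss + F.cross_entropy(model(x + deltas[d]), y)

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```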

Can we achieve robustness from data alone?

1 code implementation · 24 Jul 2022 · Nikolaos Tsilivis, Jingtong Su, Julia Kempe

In parallel, we revisit prior work that also focused on the problem of data optimization for robust classification (Ilyas et al., 2019), and show that being robust to adversarial attacks after standard (gradient descent) training on a suitable dataset is more challenging than previously thought.

Meta-Learning · regression · +1
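
The question posed above (can a suitably optimized training set make standard gradient-descent training yield robust models?) has a natural bilevel structure: an outer loop updates the data so that a model obtained by ordinary inner-loop training has low adversarial loss. The toy sketch below illustrates that structure with a linear model and a single unrolled inner step; it is a simplified sketch under those assumptions, not the paper's method.

```python
# Toy sketch of the bilevel structure behind "robustness from data alone":
# learnable training data is updated so that a model trained on it (here, one
# unrolled SGD step) has low loss on adversarially perturbed real data.
# Illustrative only; shapes and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

d, num_classes, n_syn = 20, 3, 30
torch.manual_seed(0)

# Learnable ("optimized") training set with fixed labels.
x_syn = torch.randn(n_syn, d, requires_grad=True)
y_syn = torch.arange(n_syn) % num_classes
# Real data used to evaluate robustness of the inner model.
x_real, y_real = torch.randn(64, d), torch.randint(0, num_classes, (64,))

data_opt = torch.optim.Adam([x_syn], lr=1e-2)
w = torch.zeros(d, num_classes)  # linear model, kept tiny for clarity

for step in range(100):
    # Inner step: one SGD update of the model on the current synthetic data,
    # kept differentiable w.r.t. x_syn via create_graph=True.
    w0 = w.detach().requires_grad_(True)
    inner_loss = F.cross_entropy(x_syn @ w0, y_syn)
    g_w, = torch.autograd.grad(inner_loss, w0, create_graph=True)
    w1 = w0 - 0.5 * g_w

    # Outer objective: FGSM-perturbed loss of the trained model on real data,
    # differentiated back into the synthetic data.
    x_adv = x_real.clone().requires_grad_(True)
    adv_loss = F.cross_entropy(x_adv @ w1.detach(), y_real)
    g_x, = torch.autograd.grad(adv_loss, x_adv)
    x_pert = x_real + 0.1 * g_x.sign()

    outer_loss = F.cross_entropy(x_pert @ w1, y_real)
    data_opt.zero_grad()
    outer_loss.backward()
    data_opt.step()
    w = w1.detach()
```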

Domain-wise Adversarial Training for Out-of-Distribution Generalization

no code implementations · 29 Sep 2021 · Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove the domain-varying features and improve OOD generalization on both correlation shift and diversity shift tasks.

Diversity · Out-of-Distribution Generalization

Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot

1 code implementation · NeurIPS 2020 · Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Li-Wei Wang, Jason D. Lee

In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) a set of methods that aim to find good subnetworks of the randomly initialized network (which we call "initial tickets") hardly exploit any information from the training data; (2) for the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights per layer unchanged, does not affect the final performance.

Network Pruning
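
Finding (2) above suggests a simple layerwise sanity check: take the pruning mask a method produced, randomly shuffle which weights are kept within each layer so that only the per-layer sparsity ratio survives, and retrain. The sketch below illustrates that check; it is an assumed, minimal implementation, not the paper's released code.

```python
# Sanity-check sketch: given per-layer binary pruning masks, produce "random
# tickets" by shuffling each layer's mask so only the per-layer sparsity ratio
# is kept. Illustrative; not the paper's released code.
import torch


def shuffle_masks_layerwise(masks, generator=None):
    """Randomly permute each layer's mask, preserving its number of kept weights."""
    shuffled = {}
    for name, mask in masks.items():
        flat = mask.flatten()
        perm = torch.randperm(flat.numel(), generator=generator)
        shuffled[name] = flat[perm].view_as(mask)
    return shuffled


def apply_masks(model, masks):
    """Zero out pruned weights in-place according to the (possibly shuffled) masks."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name].to(param.dtype))
```

If the reshuffled ("random ticket") network retrains to comparable accuracy, the original mask carried little data-dependent information beyond its per-layer sparsity, which is the point of the sanity check.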
