Search Results for author: Chester Holtz

Found 9 papers, 0 papers with code

On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations

no code implementations • 29 Feb 2024 • Chester Holtz, Yucheng Wang, Chung-Kuan Cheng, Bill Lin

Namely, we show that when a small number of cells (e.g., 1%-5% of cells) have their positions shifted such that a measure of global congestion is guaranteed to remain unaffected, the predictions of ML-based congestion predictors can change dramatically (e.g., 1% of the design adversarially shifted by 0.001% of the layout space results in a predicted decrease in congestion of up to 90%, while no change in congestion is implied by the perturbation).

valid
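
The setup is easier to see in miniature. The sketch below is an illustrative random-search version of such an attack, not the paper's method: it nudges roughly 1% of cells within a tiny coordinate budget, accepts only shifts that leave a coarse bin-occupancy proxy for global congestion unchanged, and keeps whichever valid shift moves a stand-in predictor's output the most. The predictor, the congestion proxy, and all constants are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins (assumptions, not the paper's models):
# - predict(): a random linear readout over raw cell coordinates.
# - bin_counts(): coarse-grid occupancy, a crude global-congestion proxy.
n_cells, grid, eps = 1000, 16, 1e-5
pos = rng.random((n_cells, 2))            # cell positions in the unit layout
w = rng.normal(size=2 * n_cells)          # hypothetical predictor weights

def predict(p):
    return float(w @ p.ravel())

def bin_counts(p):
    idx = np.minimum((p * grid).astype(int), grid - 1)
    return np.bincount(idx[:, 0] * grid + idx[:, 1], minlength=grid * grid)

base_pred, base_bins = predict(pos), bin_counts(pos)
victims = rng.choice(n_cells, size=n_cells // 100, replace=False)  # ~1% of cells

best, best_delta = pos, 0.0
for _ in range(500):                      # random search within the eps box
    cand = pos.copy()
    cand[victims] += rng.uniform(-eps, eps, size=(len(victims), 2))
    np.clip(cand, 0.0, 1.0 - 1e-9, out=cand)
    if not np.array_equal(bin_counts(cand), base_bins):
        continue                          # reject: proxy congestion changed
    delta = abs(predict(cand) - base_pred)
    if delta > best_delta:
        best, best_delta = cand, delta

print(f"prediction shift {best_delta:.3e} with proxy congestion unchanged")
```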

Semi-Supervised Laplacian Learning on Stiefel Manifolds

no code implementations • 31 Jul 2023 • Chester Holtz, PengWen Chen, Alexander Cloninger, Chung-Kuan Cheng, Gal Mishne

Motivated by the need to address the degeneracy of canonical Laplace learning algorithms at low label rates, we propose to reformulate graph-based semi-supervised learning as a nonconvex generalization of a Trust-Region Subproblem (TRS).
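
For orientation, the two textbook objects the abstract composes are written out below; the paper's actual generalization (and the Stiefel-manifold constraint in the title) goes beyond these standard forms, which are reproduced here only as reference points.

```latex
% Classical trust-region subproblem: a (possibly nonconvex) quadratic
% minimized over a Euclidean ball of radius \delta.
\min_{x \in \mathbb{R}^n} \tfrac{1}{2} x^{\top} A x + b^{\top} x
\quad \text{s.t.} \quad \lVert x \rVert_2 \le \delta

% Canonical Laplace learning: minimize the graph Dirichlet energy
% (L the graph Laplacian) subject to agreement with the labels y_i
% on the labeled set S; at low label rates the minimizer degenerates
% toward a nearly constant function.
\min_{u} \; u^{\top} L u
\quad \text{s.t.} \quad u_i = y_i \ \ \forall i \in S
```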

Learning Sample Reweighting for Accuracy and Adversarial Robustness

no code implementations • 20 Oct 2022 • Chester Holtz, Tsui-Wei Weng, Gal Mishne

There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy.

Adversarial Robustness · Bilevel Optimization
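
As a concrete reference point for the tags above, here is a minimal PyTorch sketch of per-example reweighted adversarial training: a generic PGD attack plus a weighted mix of clean and robust losses. The weighting scheme, the attack, and `lam` are placeholder assumptions, not the paper's method; in a bilevel formulation the per-sample weights would themselves be optimized in an outer loop (e.g., against held-out validation loss), which is omitted here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generic L-inf PGD attack (an assumption, not necessarily the paper's)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def weighted_adv_step(model, opt, x, y, sample_logits, idx, lam=1.0):
    """One inner training step on a weighted mix of clean and robust loss.
    sample_logits holds learnable per-example weight logits (hypothetical);
    idx indexes the current minibatch into that table."""
    w = torch.softmax(sample_logits[idx], dim=0) * len(idx)  # mean weight = 1
    x_adv = pgd_attack(model, x, y)
    clean = F.cross_entropy(model(x), y, reduction="none")
    robust = F.cross_entropy(model(x_adv), y, reduction="none")
    loss = (w.detach() * (clean + lam * robust)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```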

Evaluating Disentanglement in Generative Models Without Knowledge of Latent Factors

no code implementations • 4 Oct 2022 • Chester Holtz, Gal Mishne, Alexander Cloninger

Probabilistic generative models provide a flexible and systematic framework for learning the underlying geometry of data.

Disentanglement · Fairness · +2

Learning Sample Reweighting for Adversarial Robustness

no code implementations • 29 Sep 2021 • Chester Holtz, Tsui-Wei Weng, Gal Mishne

There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy.

Adversarial Robustness · Bilevel Optimization

Online Adversarial Purification based on Self-Supervision

no code implementations • 23 Jan 2021 • Changhao Shi, Chester Holtz, Gal Mishne

To the best of our knowledge, our paper is the first to generalize the idea of using self-supervised signals to perform online test-time purification.

Representation Learning
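
The idea admits a compact sketch: treat the (possibly attacked) test input as a variable and descend an auxiliary self-supervised loss before classifying. In the sketch below, `aux_loss` is a hypothetical stand-in for such an objective (e.g., a rotation-prediction or reconstruction head's loss); the step count and learning rate are arbitrary, and the paper's actual objective and update rule may differ.

```python
import torch

def purify(x, aux_loss, steps=20, lr=0.1):
    """Online test-time purification sketch: optimize a small additive
    correction to the input so that an auxiliary self-supervised loss
    decreases, then return the corrected input for classification."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        aux_loss((x + delta).clamp(0, 1)).backward()
        opt.step()
    return (x + delta).detach().clamp(0, 1)

# Usage (hypothetical names): logits = classifier(purify(x_test, aux_loss))
```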

Provable Robustness by Geometric Regularization of ReLU Networks

no code implementations • 1 Jan 2021 • Chester Holtz, Changhao Shi, Gal Mishne

Recent work has demonstrated that neural networks are vulnerable to small, adversarial perturbations of their input.

Online Adversarial Purification based on Self-supervised Learning

no code implementations • ICLR 2021 • Changhao Shi, Chester Holtz, Gal Mishne

Deep neural networks are known to be vulnerable to adversarial examples, where a perturbation in the input space leads to an amplified shift in the latent network representation.

Representation Learning · Self-Supervised Learning
