1 code implementation • 28 May 2024 • Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Alexander Branch, Gregory Pottie
Train-time data poisoning attacks threaten machine learning models by introducing adversarial examples during training, leading to misclassification.
1 code implementation • 28 May 2024 • Omead Pooladzandi, Jeffrey Jiang, Sunay Bhat, Gregory Pottie
Data poisoning attacks pose a significant threat to the integrity of machine learning models: by injecting adversarial examples during training, they cause misclassification of target-distribution data.
no code implementations • 6 Mar 2023 • Omead Pooladzandi, Jeffrey Jiang, Sunay Bhat, Gregory Pottie
We propose a composable framework for latent space image augmentation that allows for easy combination of multiple augmentations.
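A minimal sketch of what such a composable latent-space augmentation could look like, assuming a pretrained VAE encoder/decoder and treating each augmentation as a function on the latent code (the class names, operators, and usage below are illustrative assumptions, not the paper's actual API):

```python
import torch
import torch.nn as nn

class LatentAugmentation(nn.Module):
    """Base class: an augmentation is a map z -> z' in latent space."""
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError

class DirectionShift(LatentAugmentation):
    """Shift the latent code along a fixed direction (e.g., a learned attribute axis)."""
    def __init__(self, direction: torch.Tensor, scale: float = 1.0):
        super().__init__()
        self.register_buffer("direction", direction)
        self.scale = scale
    def forward(self, z):
        return z + self.scale * self.direction

class GaussianJitter(LatentAugmentation):
    """Add small isotropic noise in latent space."""
    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma
    def forward(self, z):
        return z + self.sigma * torch.randn_like(z)

class Compose(LatentAugmentation):
    """Apply a sequence of latent augmentations in order -- the 'composable' part."""
    def __init__(self, augs):
        super().__init__()
        self.augs = nn.ModuleList(augs)
    def forward(self, z):
        for aug in self.augs:
            z = aug(z)
        return z

# Usage sketch (encoder/decoder are assumed pretrained VAE components):
# z = encoder(x)                                            # encode image to latent
# z_aug = Compose([DirectionShift(d, 0.5), GaussianJitter(0.05)])(z)
# x_aug = decoder(z_aug)                                    # decode augmented latent
```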
1 code implementation • 20 Oct 2022 • Jeffrey Jiang, Omead Pooladzandi, Sunay Bhat, Gregory Pottie
We show that the variational version of the architecture, Causal Structural Variational Hypothesis Testing, can improve performance in low-SNR regimes.
no code implementations • 4 Jul 2022 • Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Gregory Pottie
Our proposed method combines a causal latent space VAE model with specific modifications to emphasize causal fidelity, enabling finer control over the causal layer and the ability to learn a robust intervention framework.
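A hedged sketch of how a causal latent layer with a do-style intervention could be wired, assuming a linear structural model over latent variables under a fixed ordering (the adjacency parameterization, propagation scheme, and clamping interface are illustrative assumptions, not the authors' exact design):

```python
import torch
import torch.nn as nn

class CausalLayer(nn.Module):
    """Linear structural layer: each latent variable is computed from its parents in a DAG."""
    def __init__(self, n_vars: int):
        super().__init__()
        # Upper-triangular mask enforces acyclicity under a fixed variable ordering.
        self.weight = nn.Parameter(torch.zeros(n_vars, n_vars))
        self.register_buffer("mask", torch.triu(torch.ones(n_vars, n_vars), diagonal=1))

    def forward(self, eps: torch.Tensor, intervene=None) -> torch.Tensor:
        """eps: exogenous noise from the encoder; intervene: {index: value} clamps (do-operator)."""
        n = eps.shape[-1]
        z = torch.zeros_like(eps)
        w = self.weight * self.mask
        for i in range(n):  # propagate in topological order
            z_i = eps[..., i] + (z * w[:, i]).sum(dim=-1)
            if intervene and i in intervene:
                z_i = torch.full_like(z_i, intervene[i])  # do(z_i = value)
            z = z.clone()
            z[..., i] = z_i
        return z

# Usage sketch: eps would come from the VAE encoder's reparameterized sample.
# layer = CausalLayer(n_vars=4)
# z_obs = layer(eps)                        # observational latents
# z_int = layer(eps, intervene={2: 1.0})    # intervene on the 3rd causal variable
```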