1 code implementation • 28 May 2024 • Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Alexander Branch, Gregory Pottie
Train-time data poisoning attacks threaten machine learning models by introducing adversarial examples during training, leading to misclassification.
1 code implementation • 28 May 2024 • Omead Pooladzandi, Jeffrey Jiang, Sunay Bhat, Gregory Pottie
Data poisoning attacks pose a significant threat to the integrity of machine learning models: by injecting adversarial examples during training, they cause misclassification of target-distribution data.
1 code implementation • 7 Feb 2024 • Omead Pooladzandi, Xi-Lin Li
We present a novel approach to accelerate stochastic gradient descent (SGD) by utilizing curvature information obtained from Hessian-vector products or finite differences of parameters and gradients, similar to the BFGS algorithm.
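A minimal sketch of the two routes to curvature information mentioned here, assuming PyTorch (the framework, toy loss, and parameter vector are assumptions for illustration, not the authors' code): an exact Hessian-vector product via double backpropagation, and a finite difference of gradients along a small parameter perturbation.

```python
import torch

# Hypothetical parameter vector and differentiable toy loss.
w = torch.randn(5, dtype=torch.double, requires_grad=True)
def loss_fn(p):
    return (p ** 4).sum() + (p ** 2).sum()

v = torch.randn(5, dtype=torch.double)  # probe direction

# 1) Exact Hessian-vector product H @ v via double backpropagation.
g, = torch.autograd.grad(loss_fn(w), w, create_graph=True)
hvp, = torch.autograd.grad(g, w, grad_outputs=v)

# 2) Finite difference of gradients: (grad(w + eps*v) - grad(w)) / eps.
eps = 1e-6
w_pert = (w + eps * v).detach().requires_grad_(True)
g0, = torch.autograd.grad(loss_fn(w), w)
g1, = torch.autograd.grad(loss_fn(w_pert), w_pert)
hvp_fd = (g1 - g0) / eps

print(torch.allclose(hvp, hvp_fd, atol=1e-3))  # the two estimates agree closely
```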
no code implementations • 6 Mar 2023 • Omead Pooladzandi, Jeffrey Jiang, Sunay Bhat, Gregory Pottie
We propose a composable framework for latent space image augmentation that allows for easy combination of multiple augmentations.
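A toy illustration of the composability idea, with hypothetical latent-space augmentations and an assumed (not shown) trained encoder/decoder pair; this is only a sketch of chaining augmentations in latent space before decoding, not the paper's framework.

```python
import torch

def compose(*augs):
    """Return one latent-space augmentation that applies `augs` in order."""
    def composed(z):
        for aug in augs:
            z = aug(z)
        return z
    return composed

# Hypothetical latent augmentations.
jitter = lambda z: z + 0.1 * torch.randn_like(z)
scale = lambda z: 1.1 * z

augment = compose(jitter, scale)

# With a trained VAE (assumed): x_aug = decoder(augment(encoder(x)))
z = torch.randn(4, 16)
print(augment(z).shape)  # torch.Size([4, 16])
```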
no code implementations • 31 Jan 2023 • Omead Pooladzandi, Pasha Khosravi, Erik Nijkamp, Baharan Mirzasoleiman
Generative models can synthesize data points drawn from the data distribution; however, not all generated samples are of high quality.
1 code implementation • 17 Dec 2022 • Omead Pooladzandi, Yiming Zhou
We explore the usage of the Levenberg-Marquardt (LM) algorithm for regression (non-linear least squares) and classification (generalized Gauss-Newton methods) tasks in neural networks.
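For concreteness, a minimal NumPy sketch of one damped Gauss-Newton (Levenberg-Marquardt) step for a nonlinear least-squares fit; the exponential model and data here are hypothetical, and this is not the paper's neural-network implementation.

```python
import numpy as np

# Hypothetical data from y = 2 * exp(-1.5 * x).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)

def model(theta):
    a, b = theta
    return a * np.exp(b * x)

def jacobian(theta):
    a, b = theta
    # Columns: d f / d a and d f / d b.
    return np.stack([np.exp(b * x), a * x * np.exp(b * x)], axis=1)

def lm_step(theta, lam):
    r = y - model(theta)                        # residuals
    J = jacobian(theta)                         # model Jacobian
    A = J.T @ J + lam * np.eye(theta.size)      # damped normal equations
    return theta + np.linalg.solve(A, J.T @ r)  # LM update

theta = np.array([1.0, -1.0])
for _ in range(30):
    theta = lm_step(theta, lam=1e-2)
print(theta)  # should approach [2.0, -1.5]
```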
1 code implementation • 20 Oct 2022 • Jeffrey Jiang, Omead Pooladzandi, Sunay Bhat, Gregory Pottie
We show that the variational version of the architecture, Causal Structural Variational Hypothesis Testing, can improve performance in low-SNR regimes.
no code implementations • 28 Jul 2022 • Omead Pooladzandi, David Davini, Baharan Mirzasoleiman
We propose AdaCore, a method that leverages the geometry of the data to extract subsets of the training examples for efficient machine learning.
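A simplified sketch of geometry-aware subset selection in the spirit of coreset methods: greedily pick k examples that best cover the rest of the data under a facility-location objective on per-example feature (or gradient) vectors. The RBF similarity and random features are stand-ins; this is not the AdaCore algorithm itself, which uses adaptive second-order information.

```python
import numpy as np

def greedy_facility_location(feats, k, sigma=1.0):
    """Greedily maximize f(S) = sum_i max_{j in S} sim(i, j)."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / (2 * sigma ** 2))   # nonnegative RBF similarity
    covered = np.zeros(feats.shape[0])     # best similarity to the chosen set
    selected = []
    for _ in range(k):
        gains = np.maximum(sim - covered[:, None], 0.0).sum(axis=0)
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, sim[:, j])
    return selected

# Hypothetical per-example gradient embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))
print(greedy_facility_location(feats, k=10))
```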
no code implementations • 4 Jul 2022 • Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Gregory Pottie
Our proposed method combines a causal latent-space VAE model with specific modifications to emphasize causal fidelity, enabling finer control over the causal layer and the ability to learn a robust intervention framework.
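A toy sketch of what control over a causal latent layer can look like: latent causes follow a linear structural model on a DAG, and a do-intervention clamps one cause before its effect propagates to descendants. The two-variable graph and linear mechanisms are assumptions for illustration, not the paper's model.

```python
import torch

# Strictly upper-triangular DAG adjacency: z0 -> z1 with weight 1.
A = torch.tensor([[0.0, 1.0],
                  [0.0, 0.0]])

def propagate(eps, A, intervene=None):
    """Compute z_i = sum_j A[j, i] * z_j + eps_i in topological order,
    optionally clamping one node to a fixed value (a do-intervention)."""
    z = torch.zeros_like(eps)
    for i in range(A.shape[0]):
        z[:, i] = (z @ A)[:, i] + eps[:, i]
        if intervene is not None and intervene[0] == i:
            z[:, i] = intervene[1]  # do(z_i = value)
    return z

eps = torch.randn(8, 2)
z_obs = propagate(eps, A)                      # observational latents
z_do = propagate(eps, A, intervene=(0, 2.0))   # do(z0 = 2.0); z1 shifts too
print(z_obs.mean(0), z_do.mean(0))
```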
no code implementations • 6 May 2022 • Arash Vahabpour, Tianyi Wang, Qiujing Lu, Omead Pooladzandi, Vwani Roychowdhury
Imitation learning is the task of replicating an expert's policy from demonstrations, without access to a reward function.
no code implementations • 29 Sep 2021 • Arash Vahabpour, Qiujing Lu, Tianyi Wang, Omead Pooladzandi, Vwani Roychowdhury
To address this problem, we introduce a novel generative model for behavior cloning that separates behavioral modes.