Search Results for author: Sina Alemohammad

Found 9 papers, 5 papers with code

Self-Improving Diffusion Models with Synthetic Data

no code implementations • 29 Aug 2024 • Sina Alemohammad, Ahmed Imtiaz Humayun, Shruti Agarwal, John Collomosse, Richard Baraniuk

Unfortunately, training new generative models with synthetic data from current or past generation models creates an autophagous (self-consuming) loop that degrades the quality and/or diversity of the synthetic data in what has been termed model autophagy disorder (MAD) and model collapse.

Fairness • Image Generation
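To make the autophagous loop concrete, here is a minimal sketch (not the paper's training setup): the "generative model" is just a 1-D Gaussian, and each generation refits it on samples drawn from the previous generation's model, with no fresh real data injected. The sample size and generation count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                    # synthetic samples per generation (assumed)
data = rng.normal(0.0, 1.0, size=n)        # "real" data for generation 0
mu, sigma = data.mean(), data.std()

for gen in range(1, 1001):
    synthetic = rng.normal(mu, sigma, size=n)      # sample the current model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data only
    if gen % 200 == 0:
        print(f"generation {gen:4d}: fitted std = {sigma:.4f}")
```

Because each refit multiplies the scale by a noisy factor whose log has slightly negative mean, the fitted standard deviation performs a downward-biased random walk; with no fresh real data, the model's diversity eventually collapses.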

An Adaptive Tangent Feature Perspective of Neural Networks

1 code implementation • 29 Aug 2023 • Daniel LeJeune, Sina Alemohammad

In order to better understand feature learning in neural networks, we propose a framework for understanding linear models in tangent feature space where the features are allowed to be transformed during training.
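As a concrete illustration of a linear model in tangent feature space, here is a minimal sketch at fixed parameters (the paper's adaptive transformation of features during training is not reproduced): the tangent features are phi(x) = grad_theta f(x; theta) for a toy network, and a kernel ridge regression is solved on the resulting Gram matrix. The network sizes and regularization are arbitrary.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
X, y = torch.randn(64, 5), torch.randn(64)

def tangent_features(x):
    """Gradient of the scalar network output w.r.t. all parameters, flattened."""
    out = net(x.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(out, list(net.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

Phi = torch.stack([tangent_features(x) for x in X])    # (64, n_params)

# Ridge regression in tangent feature space via the Gram (empirical NTK) matrix.
lam = 1e-3
K = Phi @ Phi.T
alpha = torch.linalg.solve(K + lam * torch.eye(len(X)), y)
preds = K @ alpha
print("train MSE:", torch.mean((preds - y) ** 2).item())
```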

Self-Consuming Generative Models Go MAD

no code implementations • 4 Jul 2023 • Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk

Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models.

Diversity

NeuroView-RNN: It's About Time

no code implementations • 23 Feb 2022 • CJ Barberan, Sina Alemohammad, Naiming Liu, Randall Balestriero, Richard G. Baraniuk

A key interpretability issue with RNNs is that it is not clear how each hidden state per time step contributes to the decision-making process in a quantitative manner.

Decision Making • Time Series +1
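The sketch below shows one way such per-time-step attributions can be made exact, in the spirit of attaching a single linear classifier to all hidden states (the dimensions and architecture are assumptions, not the paper's exact design): with no bias term, the logits decompose additively over time steps, so each step's contribution can be read off directly.

```python
import torch

torch.manual_seed(0)
T, B, D, H, C = 10, 4, 8, 16, 3           # steps, batch, input, hidden, classes
rnn = torch.nn.RNN(D, H)
readout = torch.nn.Linear(T * H, C, bias=False)

x = torch.randn(T, B, D)
h, _ = rnn(x)                              # (T, B, H) hidden states
logits = readout(h.permute(1, 0, 2).reshape(B, T * H))

# Per-time-step contribution: the logits are a sum over time steps.
W = readout.weight.reshape(C, T, H)
contrib = torch.einsum("cth,tbh->btc", W, h)     # (B, T, C)
assert torch.allclose(contrib.sum(dim=1), logits, atol=1e-5)
print(contrib[0].detach())                 # per-step contributions, sample 0
```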

Covariate Balancing Methods for Randomized Controlled Trials Are Not Adversarially Robust

no code implementations • 25 Oct 2021 • Hossein Babaei, Sina Alemohammad, Richard Baraniuk

Covariate balancing methods increase the similarity between the distributions of the two groups' covariates.

Adversarial Attack
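For context, here is a minimal sketch of one common covariate balancing method, rerandomization (the paper's adversarial analysis is not reproduced): random assignments are redrawn and the one whose two groups have the most similar covariate means is kept. The sample sizes and draw budget are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))                # unit covariates

def imbalance(assign):
    """Euclidean distance between the two groups' covariate means."""
    return np.linalg.norm(X[assign == 1].mean(0) - X[assign == 0].mean(0))

best, best_score = None, np.inf
for _ in range(500):                       # rerandomization draws (assumed budget)
    a = rng.permutation(np.r_[np.ones(n // 2, int), np.zeros(n // 2, int)])
    s = imbalance(a)
    if s < best_score:
        best, best_score = a, s

print(f"one random assignment: imbalance = {imbalance(rng.permutation(best)):.3f}")
print(f"best of 500 redraws:   imbalance = {best_score:.3f}")
```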

NFT-K: Non-Fungible Tangent Kernels

1 code implementation • 11 Oct 2021 • Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk

To further improve interpretability with respect to classification and the individual layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, in contrast to past work that represents the entire network via a single neural tangent kernel.
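A minimal sketch of the per-layer decomposition (toy sizes and a uniform combination are assumptions, not the paper's trained model): compute an empirical tangent kernel from each parameter tensor's gradients separately, then combine the resulting Gram matrices.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
X = torch.randn(8, 4)

def layer_jacobians(x):
    """Per-parameter-tensor gradients of the scalar output."""
    out = net(x.unsqueeze(0)).squeeze()
    return [g.reshape(-1) for g in
            torch.autograd.grad(out, list(net.parameters()))]

jacs = [layer_jacobians(x) for x in X]
n_tensors = len(jacs[0])

# One Gram matrix per parameter tensor (weight/bias of each layer).
kernels = []
for p in range(n_tensors):
    J = torch.stack([jacs[i][p] for i in range(len(X))])
    kernels.append(J @ J.T)

weights = torch.ones(n_tensors) / n_tensors    # uniform combination (assumed)
K = sum(w * k for w, k in zip(weights, kernels))
print("combined kernel shape:", K.shape)       # (8, 8)
```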

Enhanced Recurrent Neural Tangent Kernels for Non-Time-Series Data

2 code implementations • 9 Dec 2020 • Sina Alemohammad, Randall Balestriero, Zichao Wang, Richard Baraniuk

Kernels derived from deep neural networks (DNNs) in the infinite-width regime provide not only high performance in a range of machine learning tasks but also new theoretical insights into DNN training dynamics and generalization.

Time Series • Time Series Analysis
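As a rough finite-width illustration of applying a recurrent kernel to non-sequential inputs (an empirical sketch, not the infinite-width RNTK derivation), the snippet below feeds each tabular row to an RNN one feature per time step and builds a tangent-kernel Gram matrix from the parameter gradients. All sizes are arbitrary.

```python
import torch

torch.manual_seed(0)
d, H = 6, 12
rnn = torch.nn.RNN(1, H)
head = torch.nn.Linear(H, 1)
params = list(rnn.parameters()) + list(head.parameters())

def output(x):                       # x: (d,) tabular row
    seq = x.reshape(d, 1, 1)         # one feature per time step
    h, _ = rnn(seq)
    return head(h[-1]).squeeze()     # scalar output from the last hidden state

def grad_vec(x):
    g = torch.autograd.grad(output(x), params)
    return torch.cat([gi.reshape(-1) for gi in g])

X = torch.randn(10, d)
J = torch.stack([grad_vec(x) for x in X])
K = J @ J.T                          # empirical recurrent tangent-kernel Gram matrix
print(K.shape)                       # (10, 10)
```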
