no code implementations • 29 Aug 2024 • Sina Alemohammad, Ahmed Imtiaz Humayun, Shruti Agarwal, John Collomosse, Richard Baraniuk
Unfortunately, training new generative models with synthetic data from current or past generation models creates an autophagous (self-consuming) loop that degrades the quality and/or diversity of the synthetic data in what has been termed model autophagy disorder (MAD) and model collapse.
Ranked #1 on Image Generation on ImageNet 64x64
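A minimal sketch of such a self-consuming loop (a toy stand-in, not the paper's experimental setup): the "generative model" is just a Gaussian fitted to its own previous samples, and the fitted spread drifts toward zero over generations, a simple analogue of the diversity loss described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 201):
    # "Train" a toy generative model: fit a Gaussian to the current data.
    mu, sigma = data.mean(), data.std()
    # Replace the training set with fresh samples from the fitted model,
    # closing the autophagous (self-consuming) loop.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 20 == 0:
        print(f"generation {generation:3d}: fitted std = {sigma:.4f}")
# With so few samples per generation, the fitted standard deviation performs a
# downward-biased random walk and typically collapses toward zero.
```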
1 code implementation • 29 Aug 2023 • Daniel LeJeune, Sina Alemohammad
In order to better understand feature learning in neural networks, we propose a framework for understanding linear models in tangent feature space where the features are allowed to be transformed during training.
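As a concrete, hypothetical instance of a linear model in tangent feature space: linearizing a small network around its initialization makes the model linear in its parameter-gradient ("tangent") features, which the sketch below fits by ridge regression. The paper additionally allows the features to transform during training, which this fixed-feature sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer ReLU network f(x) = a^T relu(W x) / sqrt(m).
d, m, n = 5, 64, 200
W0 = rng.normal(size=(m, d))
a0 = rng.normal(size=m)

def tangent_features(x):
    """Gradient of f(x; theta) w.r.t. theta = (W, a) at initialization.

    A model that is linear in these tangent features is exactly the
    first-order Taylor expansion of the network around initialization.
    """
    pre = W0 @ x                                      # pre-activations
    act = np.maximum(pre, 0.0)                        # relu
    d_a = act / np.sqrt(m)                            # df/da
    d_W = np.outer(a0 * (pre > 0), x) / np.sqrt(m)    # df/dW
    return np.concatenate([d_W.ravel(), d_a])

# Synthetic regression data.
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Fit a ridge-regularized linear model in tangent feature space.
Phi = np.stack([tangent_features(x) for x in X])
lam = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

pred = Phi @ theta
print("train MSE of the tangent-feature linear model:", np.mean((pred - y) ** 2))
```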
no code implementations • 4 Jul 2023 • Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun, Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models.
1 code implementation • 1 Nov 2022 • Lorenzo Luzi, Daniel LeJeune, Ali Siahkoohi, Sina Alemohammad, Vishwanath Saragadam, Hossein Babaei, Naiming Liu, Zichao Wang, Richard G. Baraniuk
We study the interpolation capabilities of implicit neural representations (INRs) of images.
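A minimal sketch of the interpolation idea, under toy assumptions (a 1-D "image" and random Fourier features with a least-squares readout standing in for a trained INR): because the representation is a continuous function of the coordinate, it can be queried between the observed pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": pixel intensities observed on a coarse grid of coordinates.
coords = np.linspace(0.0, 1.0, 32)
pixels = np.sin(2 * np.pi * 3 * coords) + 0.5 * np.cos(2 * np.pi * 7 * coords)

# A linear INR stand-in: random Fourier features of the coordinate,
# with a readout fitted by least squares (instead of a trained MLP).
B = rng.normal(scale=10.0, size=64)           # random frequencies

def features(t):
    t = np.atleast_1d(t)
    return np.concatenate([np.sin(np.outer(t, B)), np.cos(np.outer(t, B))], axis=1)

Phi = features(coords)
w, *_ = np.linalg.lstsq(Phi, pixels, rcond=None)

# The representation is continuous in the coordinate, so we can query it
# *between* the observed pixels, i.e. interpolate the image.
fine = np.linspace(0.0, 1.0, 257)
recon = features(fine) @ w
print("reconstruction error at observed pixels:",
      np.abs(Phi @ w - pixels).max())
```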
no code implementations • 23 Feb 2022 • CJ Barberan, Sina Alemohammad, Naiming Liu, Randall Balestriero, Richard G. Baraniuk
A key interpretability issue with RNNs is that it is unclear how each time step's hidden state quantitatively contributes to the decision-making process.
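A toy illustration (not the architecture proposed in the paper) of what a quantitative per-time-step contribution could look like: if the decision logit is a sum of per-step readouts of the hidden states, each term is an explicit contribution of that time step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions of a toy vanilla RNN.
d_in, d_h, T = 3, 8, 6
W_x = rng.normal(scale=0.3, size=(d_h, d_in))
W_h = rng.normal(scale=0.3, size=(d_h, d_h))
w_out = rng.normal(size=d_h)      # readout applied to every hidden state

x = rng.normal(size=(T, d_in))    # one input sequence

h = np.zeros(d_h)
contributions = []
for t in range(T):
    h = np.tanh(W_x @ x[t] + W_h @ h)
    # When the decision is a sum of per-step readouts, each term is an
    # explicit, quantitative contribution of that time step's hidden state.
    contributions.append(w_out @ h)

logit = sum(contributions)
for t, c in enumerate(contributions):
    print(f"t={t}: contribution {c:+.3f}")
print("decision logit:", logit)
```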
no code implementations • 25 Oct 2021 • Hossein Babaei, Sina Alemohammad, Richard Baraniuk
Covariate balancing methods increase the similarity between the distributions of the two groups' covariates.
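A minimal sketch of one common covariate balancing approach, inverse-propensity weighting, assuming scikit-learn is available; the paper's own method may differ. Reweighting by the estimated propensity score pulls the two groups' covariate distributions toward each other.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two groups whose covariate distributions differ (e.g. treated vs. control).
n = 2000
x_treat = rng.normal(loc=1.0, scale=1.0, size=(n, 2))
x_ctrl = rng.normal(loc=0.0, scale=1.5, size=(n, 2))
X = np.vstack([x_treat, x_ctrl])
t = np.concatenate([np.ones(n), np.zeros(n)])

# Estimate the propensity score P(treated | x) and form inverse-propensity weights.
prop = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
w = np.where(t == 1, 1.0 / prop, 1.0 / (1.0 - prop))

def weighted_mean(mask):
    return np.average(X[mask], axis=0, weights=w[mask])

print("unweighted means:", X[t == 1].mean(0), X[t == 0].mean(0))
print("weighted means:  ", weighted_mean(t == 1), weighted_mean(t == 0))
# After reweighting, the two groups' covariate means are far closer,
# i.e. the covariate distributions are approximately balanced.
```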
1 code implementation • 11 Oct 2021 • Sina Alemohammad, Hossein Babaei, CJ Barberan, Naiming Liu, Lorenzo Luzi, Blake Mason, Richard G. Baraniuk
To further contribute interpretability with respect to classification and the layers, we develop a new network as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, in contrast to past work that represents the entire network with a single neural tangent kernel.
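A minimal sketch of the per-layer idea under toy assumptions (a two-layer ReLU network, empirical rather than infinite-width kernels, and fixed rather than learned mixture weights): each layer's parameter gradients define their own tangent kernel, and the kernels are combined before kernel ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU net f(x) = a^T relu(W x) / sqrt(m); we build one
# empirical tangent kernel per layer and combine them.
d, m, n = 4, 32, 100
W = rng.normal(size=(m, d))
a = rng.normal(size=m)

def layer_grads(x):
    """Per-layer gradients of f(x) with respect to W and a."""
    pre = W @ x
    act = np.maximum(pre, 0.0)
    g_a = act / np.sqrt(m)
    g_W = (np.outer(a * (pre > 0), x) / np.sqrt(m)).ravel()
    return g_W, g_a

X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] * X[:, 1])

G_W = np.stack([layer_grads(x)[0] for x in X])
G_a = np.stack([layer_grads(x)[1] for x in X])

# One empirical neural tangent kernel per layer ...
K_W = G_W @ G_W.T
K_a = G_a @ G_a.T
# ... combined with (here, fixed) mixture weights; the paper instead learns
# such a combination, one kernel per layer of the network.
K = 0.5 * K_W + 0.5 * K_a

alpha = np.linalg.solve(K + 1e-3 * np.eye(n), y)   # kernel ridge regression
print("train accuracy:", np.mean(np.sign(K @ alpha) == y))
```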
2 code implementations • 9 Dec 2020 • Sina Alemohammad, Randall Balestriero, Zichao Wang, Richard Baraniuk
Kernels derived from deep neural networks (DNNs) in the infinite-width regime provide not only high performance in a range of machine learning tasks but also new theoretical insights into DNN training dynamics and generalization.
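A minimal sketch of how such a kernel is used in practice: kernel ridge regression with the closed-form NTK of an infinitely wide one-hidden-layer ReLU network, one of the simplest infinite-width kernels (the kernel studied in the paper may differ).

```python
import numpy as np

def relu_ntk(X1, X2):
    """Closed-form NTK of an infinitely wide one-hidden-layer ReLU network
    (standard parametrization, unit weight variance, no biases)."""
    n1 = np.linalg.norm(X1, axis=1, keepdims=True)
    n2 = np.linalg.norm(X2, axis=1, keepdims=True)
    dot = X1 @ X2.T
    cos = np.clip(dot / (n1 * n2.T), -1.0, 1.0)
    theta = np.arccos(cos)
    nngp = (n1 * n2.T) * (np.sin(theta) + (np.pi - theta) * cos) / (2 * np.pi)
    return nngp + dot * (np.pi - theta) / (2 * np.pi)

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])                       # a simple binary target
X_test = rng.normal(size=(100, d))
y_test = np.sign(X_test[:, 0])

# Kernel ridge regression with the infinite-width NTK: a purely kernel-based
# stand-in for training the corresponding very wide network.
K = relu_ntk(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(n), y)
pred = np.sign(relu_ntk(X_test, X) @ alpha)
print("test accuracy:", np.mean(pred == y_test))
```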
1 code implementation • 27 Oct 2020 • Sina Alemohammad, Hossein Babaei, Randall Balestriero, Matt Y. Cheung, Ahmed Imtiaz Humayun, Daniel LeJeune, Naiming Liu, Lorenzo Luzi, Jasper Tan, Zichao Wang, Richard G. Baraniuk
High dimensionality poses many challenges to the use of data, from visualization and interpretation, to prediction and storage for historical preservation.