no code implementations • 7 Mar 2023 • David Berthelot, Arnaud Autef, Jierui Lin, Dian Ang Yap, Shuangfei Zhai, Siyuan Hu, Daniel Zheng, Walter Talbot, Eric Gu
Denoising diffusion models have demonstrated strong proficiency at generative sampling.
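For context, a minimal sketch of the standard DDPM-style ancestral sampling loop that such models use (the `eps_model` signature, `T`, and the linear beta schedule are illustrative assumptions, not this paper's specific distillation setup):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(eps_model, shape):
    """Draw a sample by iteratively denoising pure Gaussian noise."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        eps = eps_model(x, torch.full((shape[0],), t))  # predicted noise
        # Posterior mean: subtract predicted noise, rescale, re-inject noise.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t]) + torch.sqrt(betas[t]) * z
    return x
```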
no code implementations • 11 Jul 2020 • Josh Payne, Mario Srouji, Dian Ang Yap, Vineet Kosaraju
Modern computational organic chemistry is becoming increasingly data-driven.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Nicholas Roberts, Dian Ang Yap, Vinay Uday Prabhu
The interplay between inter-neuronal network topology and cognition has been studied in depth by connectomics researchers and network scientists; understanding it is crucial to explaining the remarkable efficacy of biological neural networks.
no code implementations • 18 Nov 2019 • Dian Ang Yap, Nicholas Roberts, Vinay Uday Prabhu
Kernel sparsity ("dying ReLUs") and a lack of kernel diversity are commonly observed in CNNs, reducing effective model capacity.
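A rough probe for the "dying ReLU" effect described above (the single-layer setup and zero threshold are illustrative assumptions): count the kernels whose post-ReLU output is zero over an entire batch.

```python
import torch
import torch.nn as nn

def dead_kernel_fraction(conv: nn.Conv2d, x: torch.Tensor) -> float:
    """Fraction of conv kernels whose ReLU output is zero on the whole batch."""
    with torch.no_grad():
        act = torch.relu(conv(x))                 # (N, C_out, H, W)
        per_kernel_max = act.amax(dim=(0, 2, 3))  # max activation per kernel
        return (per_kernel_max <= 1e-8).float().mean().item()

# Example: random input through an untrained layer.
layer = nn.Conv2d(3, 64, kernel_size=3)
print(dead_kernel_fraction(layer, torch.randn(16, 3, 32, 32)))
```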
no code implementations • 22 Jul 2019 • Vinay Uday Prabhu, Dian Ang Yap, Joyce Xu, John Whaley
In this paper, we harness the state-of-the-art "filter normalization" technique for loss-surface visualization to qualitatively understand the consequences of using adversarial training data augmentation as the explicit regularization technique of choice.
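A minimal sketch of the filter normalization step (Li et al., 2018 style) that underpins such visualizations: each random perturbation direction is rescaled, per output filter, to match that filter's norm, so 1-D/2-D loss plots are comparable across models. Plotting and the second axis are omitted here.

```python
import torch

@torch.no_grad()
def filter_normalized_direction(model):
    """Build one filter-normalized random direction over model parameters."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:  # conv / linear weights: normalize per output filter
            for f, df in zip(p, d):
                df.mul_(f.norm() / (df.norm() + 1e-10))
        else:            # biases / BN params: leave unperturbed
            d.zero_()
        direction.append(d)
    return direction

# The 1-D loss curve is then L(alpha) = loss(theta + alpha * d).
```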
no code implementations • 22 Jul 2019 • Vinay Uday Prabhu, Dian Ang Yap, Alexander Wang, John Whaley
Attribute prior avoidance entails the subconscious or willful non-modeling of (meta)attributes that datasets are often born with, such as the 40 semantic facial attributes associated with the CelebA and CelebA-HQ datasets.
no code implementations • 28 May 2019 • Vinay Uday Prabhu, Dian Ang Yap
We recently observed that convolutional filters initialized farthest apart from each other using off-the-shelf pre-computed Grassmannian subspace packing codebooks performed surprisingly well across many datasets.
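A minimal sketch of such an initialization, assuming a pre-computed codebook whose rows are unit-norm line representatives with dimension matching the flattened kernel (the loading and scaling conventions here are assumptions; the paper uses off-the-shelf packing tables):

```python
import numpy as np
import torch
import torch.nn as nn

def init_from_packing(conv: nn.Conv2d, codebook: np.ndarray) -> None:
    """Initialize conv filters from a pre-computed Grassmannian packing.

    One codebook row per output filter; row length must equal
    in_channels * kH * kW.
    """
    out_ch, in_ch, kh, kw = conv.weight.shape
    assert codebook.shape[0] >= out_ch and codebook.shape[1] == in_ch * kh * kw
    with torch.no_grad():
        w = torch.from_numpy(codebook[:out_ch]).float()
        conv.weight.copy_(w.view(out_ch, in_ch, kh, kw))
```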
1 code implementation • 16 May 2019 • Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri, John Whaley
In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that includes a procedure for generating synthetic seed image datasets for languages with different numeral systems from freely available open font files.
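A minimal sketch of the "Seed" step under stated assumptions: render one grayscale image per numeral glyph from a font file. The `font_path`, image size, and centering heuristic are illustrative; the paper draws on large open font collections and applies augmentation before training/transfer.

```python
from PIL import Image, ImageDraw, ImageFont
import numpy as np

def render_seed_digits(font_path: str, glyphs: str, size: int = 28):
    """Render one size x size grayscale seed image per glyph in `glyphs`."""
    font = ImageFont.truetype(font_path, int(size * 0.8))
    images = []
    for g in glyphs:
        img = Image.new("L", (size, size), color=0)
        draw = ImageDraw.Draw(img)
        # Center the glyph using its bounding box.
        left, top, right, bottom = draw.textbbox((0, 0), g, font=font)
        x = (size - (right - left)) / 2 - left
        y = (size - (bottom - top)) / 2 - top
        draw.text((x, y), g, fill=255, font=font)
        images.append(np.asarray(img))
    return np.stack(images)

# e.g. Devanagari numerals:
# render_seed_digits("NotoSansDevanagari.ttf", "०१२३४५६७८९")
```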
no code implementations • ICLR Workshop DeepGenStruct 2019 • Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri
In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that includes a procedure for generating synthetic seed image datasets for languages with different numeral systems from freely available open font files.