Search Results for author: Dian Ang Yap

Found 9 papers, 1 paper with code

BERT Learns (and Teaches) Chemistry

no code implementations · 11 Jul 2020 · Josh Payne, Mario Srouji, Dian Ang Yap, Vineet Kosaraju

Modern computational organic chemistry is becoming increasingly data-driven.

Drug Discovery

Deep Connectomics Networks: Neural Network Architectures Inspired by Neuronal Networks

no code implementations · NeurIPS Workshop Neuro_AI 2019 · Nicholas Roberts, Dian Ang Yap, Vinay Uday Prabhu

The interplay between inter-neuronal network topology and cognition has been studied in depth by connectomics researchers and network scientists, and is crucial to understanding the remarkable efficacy of biological neural networks.
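As a loose illustration of the idea (not the paper's exact architecture), one can wire a layer's connectivity from a small-world graph. The Watts-Strogatz topology and the fixed binary weight mask below are illustrative assumptions:

```python
# Sketch: a layer whose connectivity comes from a small-world graph.
# Assumptions (not from the paper): Watts-Strogatz wiring, a fixed
# binary mask applied to a square linear layer's weights.
import networkx as nx
import torch
import torch.nn as nn

class GraphWiredLinear(nn.Module):
    def __init__(self, n_units: int, k: int = 4, p: float = 0.1):
        super().__init__()
        g = nx.watts_strogatz_graph(n_units, k, p)  # small-world wiring
        mask = torch.zeros(n_units, n_units)
        for i, j in g.edges():
            mask[i, j] = mask[j, i] = 1.0
        self.register_buffer("mask", mask)
        self.weight = nn.Parameter(torch.randn(n_units, n_units) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_units))

    def forward(self, x):
        # Only graph-adjacent unit pairs carry a trainable connection.
        return x @ (self.weight * self.mask).t() + self.bias

layer = GraphWiredLinear(64)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```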

Grassmannian Packings in Neural Networks: Learning with Maximal Subspace Packings for Diversity and Anti-Sparsity

no code implementations · 18 Nov 2019 · Dian Ang Yap, Nicholas Roberts, Vinay Uday Prabhu

Kernel sparsity ("dying ReLUs") and lack of diversity are commonly observed in CNN kernels, which decreases model capacity.
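As a quick way to see the "dying ReLUs" phenomenon the abstract refers to, here is a hedged diagnostic sketch (the helper name and setup are mine, not from the paper) that estimates the fraction of ReLU units that never fire on a batch:

```python
import torch
import torch.nn as nn

def dead_relu_fraction(model: nn.Module, x: torch.Tensor) -> float:
    """Fraction of ReLU output units that are zero for every input in x."""
    acts = []
    hooks = [m.register_forward_hook(lambda m, i, o: acts.append(o))
             for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    dead, total = 0, 0
    for a in acts:
        flat = a.flatten(1)  # (batch, units)
        # ReLU outputs are >= 0, so a unit whose max over the batch is 0
        # never activated on any input.
        dead += (flat.max(dim=0).values == 0).sum().item()
        total += flat.shape[1]
    return dead / max(total, 1)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
print(dead_relu_fraction(model, torch.randn(128, 32)))
```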

Understanding Adversarial Robustness Through Loss Landscape Geometries

no code implementations · 22 Jul 2019 · Vinay Uday Prabhu, Dian Ang Yap, Joyce Xu, John Whaley

In this paper, we harness the state-of-the-art "filter normalization" technique of loss-surface visualization to qualitatively understand the consequences of using adversarial training data augmentation as the explicit regularization technique of choice.

Adversarial Robustness · Data Augmentation
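For background, the filter normalization technique of Li et al. (2018) rescales a random perturbation direction filter-by-filter so that each filter matches the norm of the corresponding trained filter. A minimal sketch (variable names are mine):

```python
import torch

def filter_normalized_direction(weights):
    """Random direction with each filter rescaled to the norm of the
    corresponding filter in `weights` (Li et al., 2018 style)."""
    direction = []
    for w in weights:
        d = torch.randn_like(w)
        if w.dim() > 1:  # treat dim 0 as the filter axis
            for di, wi in zip(d, w):
                di.mul_(wi.norm() / (di.norm() + 1e-10))
        else:            # biases / BN params: match the whole-tensor norm
            d.mul_(w.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction

# Usage sketch: trace a 1-D loss slice by evaluating the loss at
# theta + alpha * d for a range of alpha values.
```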

Covering up bias in CelebA-like datasets with Markov blankets: A post-hoc cure for attribute prior avoidance

no code implementations · 22 Jul 2019 · Vinay Uday Prabhu, Dian Ang Yap, Alexander Wang, John Whaley

Attribute prior avoidance entails subconscious or willful non-modeling of (meta)attributes that datasets are often born with, such as the 40 semantic facial attributes associated with the CelebA and CelebA-HQ datasets.

Attribute

Grassmannian initialization: Neural network initialization using sub-space packing

no code implementations · 28 May 2019 · Vinay Uday Prabhu, Dian Ang Yap

We recently observed that convolutional filters initialized farthest apart from each other using off-the-shelf pre-computed Grassmannian subspace packing codebooks performed surprisingly well across many datasets.
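A hedged sketch of the initialization idea: reshape rows of a packing codebook into convolutional filters. A real pre-computed Grassmannian packing would be loaded from published tables; the random orthonormal stand-in below is an assumption, not the paper's codebook:

```python
import torch
import torch.nn as nn

def grassmannian_like_init(conv: nn.Conv2d, codebook: torch.Tensor):
    """Initialize conv filters from rows of a packing codebook.
    codebook: (out_channels, fan_in) unit vectors, fan_in = in_ch * k * k."""
    out_ch, in_ch, k, _ = conv.weight.shape
    fan_in = in_ch * k * k
    assert codebook.shape == (out_ch, fan_in)
    with torch.no_grad():
        conv.weight.copy_(codebook.view(out_ch, in_ch, k, k))

conv = nn.Conv2d(3, 16, 3)
fan_in = 3 * 3 * 3  # 27
# Stand-in codebook: random orthonormal rows. A real Grassmannian packing
# would come from pre-computed tables; this is only a placeholder.
q, _ = torch.linalg.qr(torch.randn(fan_in, fan_in))
grassmannian_like_init(conv, q[:16])
```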

Fonts-2-Handwriting: A Seed-Augment-Train framework for universal digit classification

1 code implementation · 16 May 2019 · Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri, John Whaley

In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.

General Classification · Transfer Learning
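The "Seed" step can be illustrated by rendering glyphs from an open font file into MNIST-sized grayscale images with Pillow; the font path, canvas size, and centering below are assumptions rather than the paper's exact pipeline:

```python
from PIL import Image, ImageDraw, ImageFont

def render_digit(ch: str, font_path: str, size: int = 28) -> Image.Image:
    """Render one glyph onto a size x size grayscale canvas (seed image)."""
    font = ImageFont.truetype(font_path, int(size * 0.8))
    img = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(img)
    # Center the glyph using its bounding box.
    left, top, right, bottom = draw.textbbox((0, 0), ch, font=font)
    x = (size - (right - left)) // 2 - left
    y = (size - (bottom - top)) // 2 - top
    draw.text((x, y), ch, fill=255, font=font)
    return img

# Hypothetical font path; any open font file (e.g., from Google Fonts) works.
seeds = [render_digit(str(d), "DejaVuSans.ttf") for d in range(10)]
```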

A Seed-Augment-Train Framework for Universal Digit Classification

no code implementations · ICLR Workshop DeepGenStruct 2019 · Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri

In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.

Classification · Transfer Learning
