Search Results for author: Alexander G. Ororbia II

Found 13 papers, 2 papers with code

Column2Vec: Structural Understanding via Distributed Representations of Database Schemas

no code implementations • 20 Mar 2019 • Michael J. Mior, Alexander G. Ororbia II

We present Column2Vec, a distributed representation of database columns based on column metadata.
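As a rough illustration of the general idea, representing a column by a fixed-size vector derived from its name, one can average per-token vectors. This is a generic baseline sketch, not the paper's model, and the tiny vocabulary below is made up:

```python
import numpy as np

# Hypothetical toy vocabulary of token embeddings (illustrative only;
# Column2Vec learns its representations from schema metadata).
vocab = {"customer": np.array([1.0, 0.0]),
         "id":       np.array([0.0, 1.0]),
         "name":     np.array([0.5, 0.5])}

def embed_column(column_name):
    """Embed a column name by averaging the vectors of its underscore-
    separated tokens, a common baseline for fixed-size name embeddings."""
    tokens = column_name.lower().split("_")
    return np.mean([vocab[t] for t in tokens], axis=0)
```

Averaging makes columns with shared tokens (e.g. `customer_id` and `customer_name`) land near each other, which is the kind of structural similarity such representations aim to capture.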

A Comparative Study of Rule Extraction for Recurrent Neural Networks

no code implementations • 16 Jan 2018 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Then we empirically evaluate different recurrent networks on their DFA-extraction performance across all Tomita grammars.
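The Tomita grammars are seven classic regular languages over the alphabet {0, 1} that serve as standard DFA-extraction benchmarks. As a quick illustration (not the paper's code), here are membership tests for three of them:

```python
import re

# Membership tests for three of the seven Tomita grammars.
# Tomita 1: strings of the form 1*    (no 0s at all)
# Tomita 2: strings of the form (10)* (alternating, starting with 1)
# Tomita 4: strings containing no "000" substring
def tomita1(s): return re.fullmatch(r"1*", s) is not None
def tomita2(s): return re.fullmatch(r"(10)*", s) is not None
def tomita4(s): return "000" not in s
```

Each grammar is recognized by a small DFA, which is what makes them convenient targets for checking whether a rule-extraction method recovers the right automaton from a trained RNN.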

Learning to Adapt by Minimizing Discrepancy

no code implementations • 30 Nov 2017 • Alexander G. Ororbia II, Patrick Haffner, David Reitter, C. Lee Giles

We investigate the viability of a more neurocognitively-grounded approach in the context of unsupervised generative modeling of sequences.

An Empirical Evaluation of Rule Extraction from Recurrent Neural Networks

no code implementations • 29 Sep 2017 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Rule extraction from black-box models is critical in domains that require model validation before deployment, such as credit scoring and medical diagnosis.

Medical Diagnosis

Learning a Hierarchical Latent-Variable Model of 3D Shapes

1 code implementation • 17 May 2017 • Shikun Liu, C. Lee Giles, Alexander G. Ororbia II

We propose the Variational Shape Learner (VSL), a generative model that learns the underlying structure of voxelized 3D shapes in an unsupervised fashion.

3D Object Classification 3D Object Recognition +3

Learning Simpler Language Models with the Differential State Framework

no code implementations • 26 Mar 2017 • Alexander G. Ororbia II, Tomas Mikolov, David Reitter

The Differential State Framework (DSF) is a simple and high-performing design that unifies previously introduced gated neural models.

Language Modelling
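The gated recurrent models the DSF abstract refers to share a common pattern: a data-driven candidate state is interpolated with the previous state through a gate. A minimal sketch of that pattern (illustrative only; this is not the paper's exact parameterization):

```python
import numpy as np

def gated_state_update(s_prev, x, W, U, r):
    """One step of a generic gated recurrent cell: compute a candidate
    state from the input x and previous state s_prev, then interpolate
    with s_prev through a gate r in [0, 1]. Gated architectures such as
    GRUs and LSTMs can be seen as variations on this interpolation."""
    candidate = np.tanh(W @ x + U @ s_prev)      # data-driven proposal
    return (1.0 - r) * s_prev + r * candidate    # gated interpolation
```

With r = 0 the state carries over unchanged; with r = 1 it is fully replaced by the candidate, so the gate controls how much "differential" each step contributes.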

Learning Adversary-Resistant Deep Neural Networks

no code implementations • 5 Dec 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles

Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design.

Autonomous Vehicles
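The attack class alluded to in this abstract is usually illustrated with adversarial examples, for instance the fast gradient sign method (FGSM) of Goodfellow et al. A toy sketch on a linear model, for illustration only:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, eps=0.1):
    """FGSM-style perturbation: step each input dimension by eps in the
    sign direction of the loss gradient, i.e. the direction that most
    increases the loss under an L-infinity budget."""
    return x + eps * np.sign(grad_wrt_x)

# Toy linear model with loss = -w.x for the true class,
# so the gradient of the loss w.r.t. x is simply -w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_perturb(x, -w, eps=0.1)   # -> [0.1, 0.2, -0.4]
```

The perturbation is small per dimension yet aligned with the loss gradient everywhere, which is exactly the design flaw such attacks exploit in high-dimensional models.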

Piecewise Latent Variables for Neural Variational Text Processing

2 code implementations • EMNLP (ACL) 2017 • Iulian V. Serban, Alexander G. Ororbia II, Joelle Pineau, Aaron Courville

Advances in neural variational inference have facilitated the learning of powerful directed graphical models with continuous latent variables, such as variational autoencoders.

Text Generation Variational Inference
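Two mechanics shared across the VAE family this abstract mentions are the reparameterization trick and the closed-form Gaussian KL term. A minimal sketch of both (generic VAE machinery, not the paper's piecewise-latent construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as mu + sigma * eps with eps ~ N(0, I),
    which keeps the sample differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

The KL term vanishes exactly when the posterior matches the standard-normal prior, which is what pulls the continuous latent space toward the prior during training.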

Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks

no code implementations • 6 Oct 2016 • Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong

Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles.

Autonomous Vehicles Dimensionality Reduction +2
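The title pairs non-invertible transformations with dimensionality reduction, and one standard example of a lossy, non-invertible preprocessing step is a rank-k PCA projection. A generic sketch (illustrative; not necessarily the transformation the paper uses):

```python
import numpy as np

def pca_projection(X, k):
    """Fit a rank-k PCA projection via SVD. Mapping inputs through only
    the top-k principal components is non-invertible when k is smaller
    than the input dimension: everything outside the retained subspace
    is discarded, which is the lossy property such defenses rely on."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]          # (k, d): top-k principal directions
    return X @ components.T      # (n, k): reduced representation
```

Because the projection destroys off-subspace detail, an adversary cannot exactly reconstruct the original input from the reduced representation.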

Adversary Resistant Deep Neural Networks with an Application to Malware Detection

no code implementations • 5 Oct 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, C. Lee Giles, Xue Liu

However, after a thorough analysis of the fundamental flaw in DNNs, we discover that the effectiveness of current defenses is limited and, more importantly, cannot provide theoretical guarantees as to their robustness against adversarial sample-based attacks.

Information Retrieval Malware Detection +3

Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization

no code implementations • 26 Jan 2016 • Alexander G. Ororbia II, C. Lee Giles, Daniel Kifer

Many previous proposals for adversarial training of deep neural nets have included directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximately optimizing constrained adversarial objective functions.
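One generic member of the gradient-regularization family this abstract surveys penalizes the sensitivity of the loss to its inputs. A toy sketch on a linear model (illustrative only; not the paper's exact unified objective):

```python
import numpy as np

def regularized_loss(w, x, y, lam=0.1):
    """Squared-error loss on a linear model plus a data-gradient penalty
    lam * ||d(loss)/dx||^2. Penalizing the input gradient flattens the
    loss surface around training points, one generic way to discourage
    small adversarial perturbations from changing the prediction."""
    err = w @ x - y
    base = 0.5 * err**2
    grad_x = err * w             # exact d(base)/dx for the linear model
    return base + lam * np.sum(grad_x**2)
```

When the base loss is already zero the penalty vanishes too, so the regularizer only reshapes the loss where the model is still wrong or sensitive.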

Online Semi-Supervised Learning with Deep Hybrid Boltzmann Machines and Denoising Autoencoders

no code implementations • 22 Nov 2015 • Alexander G. Ororbia II, C. Lee Giles, David Reitter

Two novel deep hybrid architectures, the Deep Hybrid Boltzmann Machine and the Deep Hybrid Denoising Auto-encoder, are proposed for handling semi-supervised learning problems.

Denoising
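The denoising-autoencoder half of these hybrids rests on a simple recipe: corrupt the input, then train the network to reconstruct the clean version. A minimal sketch of the corruption and reconstruction objective (generic DAE machinery, not the paper's hybrid architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, drop_prob=0.3):
    """Masking noise: zero out each input dimension independently with
    probability drop_prob. A denoising autoencoder is trained to map
    this corrupted input back to the clean x."""
    mask = rng.random(x.shape) >= drop_prob
    return x * mask

def reconstruction_loss(x_clean, x_recon):
    """Mean squared reconstruction error against the uncorrupted input."""
    return float(np.mean((x_clean - x_recon) ** 2))
```

Training against the clean target forces the encoder to capture structure that survives corruption, which is what makes the learned features useful for the semi-supervised setting described above.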
