no code implementations • 20 Mar 2019 • Michael J. Mior, Alexander G. Ororbia II
We present Column2Vec, a distributed representation of database columns based on column metadata.
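A rough sketch of the general idea (the tokenizer and the gensim skip-gram model below are illustrative assumptions, not the paper's exact pipeline): treat column metadata such as column names as short token sequences, embed the tokens, and average them into a column vector.

```python
# Illustrative sketch only: column names stand in for "column metadata",
# and an off-the-shelf skip-gram model supplies the token vectors.
import re
import numpy as np
from gensim.models import Word2Vec

def tokenize(column_name):
    # "customerFirstName" / "customer_first_name" -> ["customer", "first", "name"]
    spaced = re.sub(r"([a-z])([A-Z])", r"\1 \2", column_name)
    return re.split(r"[\s_]+", spaced.lower())

# Toy corpus: each "sentence" is the token list of one column name.
columns = ["customer_id", "customerFirstName", "order_total_amount", "order_date"]
corpus = [tokenize(c) for c in columns]

model = Word2Vec(corpus, vector_size=32, window=3, min_count=1, sg=1, epochs=50)

def column_vector(column_name):
    # Represent a column as the mean of its metadata-token vectors.
    toks = [t for t in tokenize(column_name) if t in model.wv]
    return np.mean([model.wv[t] for t in toks], axis=0)

print(column_vector("customer_id").shape)  # (32,)
```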
no code implementations • 16 Jan 2018 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
We then empirically evaluate different recurrent networks on DFA extraction across all Tomita grammars.
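For context, one classical route to DFA extraction is to quantize the recurrent network's hidden-state space (for example, by clustering) and read transitions off the clustered trajectories. The sketch below illustrates that mechanism on a toy untrained RNN; it is not necessarily the exact procedure evaluated in the paper.

```python
# Hidden-state-quantization sketch: run binary strings (as in the Tomita
# grammars) through an RNN, cluster the visited hidden states, then read
# off a transition table between clusters. The tiny random RNN stands in
# for a trained network.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
H = 8
Wx = rng.normal(size=(2, H))          # one row per input symbol (0 / 1)
Wh = rng.normal(size=(H, H)) * 0.5

def run(string):
    """Return the sequence of hidden states visited while reading the string."""
    h, states = np.zeros(H), [np.zeros(H)]
    for ch in string:
        h = np.tanh(Wx[int(ch)] + Wh @ h)
        states.append(h)
    return states

# Collect hidden states over sample binary strings.
strings = ["".join(rng.choice(list("01"), size=n)) for n in range(1, 8) for _ in range(20)]
all_states = np.array([s for w in strings for s in run(w)])

# Quantize the continuous state space into k abstract DFA states.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(all_states)

# Read off transitions: (abstract state, input bit) -> abstract state.
transitions = {}
for w in strings:
    states = km.predict(np.array(run(w)))
    for (src, dst), ch in zip(zip(states, states[1:]), w):
        transitions[(int(src), ch)] = int(dst)

print(transitions)  # partial DFA transition table
```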
no code implementations • 30 Nov 2017 • Alexander G. Ororbia II, Patrick Haffner, David Reitter, C. Lee Giles
We investigate the viability of a more neurocognitively grounded approach in the context of unsupervised generative modeling of sequences.
no code implementations • 29 Sep 2017 • Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Rule extraction from black-box models is critical in domains that require model validation before implementation, such as credit scoring and medical diagnosis.
1 code implementation • 17 May 2017 • Shikun Liu, C. Lee Giles, Alexander G. Ororbia II
We propose the Variational Shape Learner (VSL), a generative model that learns the underlying structure of voxelized 3D shapes in an unsupervised fashion.
Ranked #6 on 3D Object Recognition on ModelNet40
no code implementations • 26 Mar 2017 • Alexander G. Ororbia II, Tomas Mikolov, David Reitter
The Differential State Framework (DSF) is a simple and high-performing design that unifies previously introduced gated neural models.
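As a rough illustration of the state-delta view (a generic sketch, not the paper's exact parameterization): the next state is the previous state moved by a gated, data-driven update.

```python
# Toy recurrent cell illustrating the state-delta idea: the model proposes a
# candidate state and a gate that decides how far to move from the previous
# state toward it. Generic sketch only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DeltaStateCell:
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.Wx = rng.normal(scale=s, size=(n_in, n_hid))   # input -> candidate
        self.Wh = rng.normal(scale=s, size=(n_hid, n_hid))  # state -> candidate
        self.Wg = rng.normal(scale=s, size=(n_in, n_hid))   # input -> gate
        self.Ug = rng.normal(scale=s, size=(n_hid, n_hid))  # state -> gate

    def step(self, x, h_prev):
        cand = np.tanh(x @ self.Wx + h_prev @ self.Wh)      # proposed new state
        gate = sigmoid(x @ self.Wg + h_prev @ self.Ug)      # how much to move
        return (1.0 - gate) * h_prev + gate * cand          # old state + gated delta

cell = DeltaStateCell(n_in=5, n_hid=16)
h = np.zeros(16)
for x in np.random.default_rng(1).normal(size=(10, 5)):     # a length-10 sequence
    h = cell.step(x, h)
print(h.shape)  # (16,)
```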
no code implementations • 5 Dec 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, C. Lee Giles
Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design.
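One widely known instance of such an attack is the fast gradient sign method (FGSM), which nudges the input a small step in the direction that increases the loss. The toy example below uses a linear model for brevity; the papers concern deep networks, where the same input gradient is obtained by backpropagation.

```python
# FGSM sketch on a toy logistic model: perturb the input by the sign of the
# loss gradient with respect to the input. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # stand-in "trained" linear classifier
x, y = rng.normal(size=8), 1            # a clean input with true label 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y):
    # Gradient of the cross-entropy loss with respect to the *input* x.
    p = sigmoid(w @ x + b)
    return (p - y) * w

eps = 0.1
x_adv = x + eps * np.sign(loss_grad_x(x, y))   # FGSM perturbation

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```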
2 code implementations • EMNLP (ACL) 2017 • Iulian V. Serban, Alexander G. Ororbia II, Joelle Pineau, Aaron Courville
Advances in neural variational inference have facilitated the learning of powerful directed graphical models with continuous latent variables, such as variational autoencoders.
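For context, the mechanism these models rely on is the reparameterized evidence lower bound (ELBO): sampling z = mu + sigma * eps keeps the Monte Carlo estimate differentiable in the encoder parameters. A minimal numeric sketch with a toy Gaussian encoder and decoder, not any specific model from the paper:

```python
# Single-sample ELBO estimate for a toy Gaussian latent-variable model:
# ELBO = reconstruction log-likelihood - KL(q(z|x) || p(z)).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                       # one observed datapoint

# Pretend encoder outputs q(z|x) = N(mu, sigma^2); decoder maps z back to x.
mu, log_sigma = 0.3 * x, np.zeros(4)
sigma = np.exp(log_sigma)

eps = rng.normal(size=4)
z = mu + sigma * eps                         # reparameterization trick

x_hat = 0.9 * z                              # toy linear "decoder"
recon = -0.5 * np.sum((x - x_hat) ** 2)      # Gaussian log-likelihood, up to a constant

# KL( N(mu, sigma^2) || N(0, 1) ), closed form per dimension.
kl = 0.5 * np.sum(sigma ** 2 + mu ** 2 - 1.0 - 2.0 * log_sigma)

elbo = recon - kl                            # single-sample Monte Carlo estimate
print("ELBO estimate:", elbo)
```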
no code implementations • 6 Oct 2016 • Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong
Deep neural networks have proven to be quite effective in a wide variety of machine learning tasks, ranging from improved speech recognition systems to advancing the development of autonomous vehicles.
no code implementations • 5 Oct 2016 • Qinglong Wang, Wenbo Guo, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, C. Lee Giles, Xue Liu
However, a thorough analysis of the fundamental flaw in DNNs reveals that the effectiveness of current defenses is limited and, more importantly, that they cannot provide theoretical guarantees of robustness against adversarial sample-based attacks.
no code implementations • 3 Jun 2016 • Alexander G. Ororbia II, Fridolin Linder, Joshua Snoke
We evaluate the three methods on measures of risk and utility.
no code implementations • 26 Jan 2016 • Alexander G. Ororbia II, C. Lee Giles, Daniel Kifer
Many previous proposals for adversarial training of deep neural nets have included directly modifying the gradient, training on a mix of original and adversarial examples, using contractive penalties, and approximately optimizing constrained adversarial objective functions.
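As an illustration of the "mix of original and adversarial examples" strategy mentioned above, here is a sketch with a toy logistic model standing in for a deep network; alpha weights the clean and adversarial terms.

```python
# Adversarial-training sketch: each update blends the gradient on clean
# inputs with the gradient on inputs perturbed (FGSM-style) against the
# current model. Toy logistic classifier for brevity.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
y = (X[:, 0] + 0.1 * rng.normal(size=64) > 0).astype(float)
w, b = np.zeros(8), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads(X, y, w, b):
    p = sigmoid(X @ w + b)
    return X.T @ (p - y) / len(y), np.mean(p - y)

eps, alpha, lr = 0.1, 0.5, 0.5
for _ in range(200):
    # Craft adversarial versions of the batch against the current parameters.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign(np.outer(p - y, w))   # per-example input-gradient sign
    # Blend clean and adversarial gradients.
    gw_c, gb_c = grads(X, y, w, b)
    gw_a, gb_a = grads(X_adv, y, w, b)
    w -= lr * ((1 - alpha) * gw_c + alpha * gw_a)
    b -= lr * ((1 - alpha) * gb_c + alpha * gb_a)

print("train accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5)))
```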
no code implementations • 22 Nov 2015 • Alexander G. Ororbia II, C. Lee Giles, David Reitter
Two novel deep hybrid architectures, the Deep Hybrid Boltzmann Machine and the Deep Hybrid Denoising Auto-encoder, are proposed for handling semi-supervised learning problems.
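A generic sketch of the hybrid generative/discriminative idea behind such models, not the paper's exact architectures: one shared representation is trained with a denoising-reconstruction loss over all inputs plus a classification loss over the labeled subset.

```python
# Hybrid semi-supervised loss sketch: reconstruction term on all data,
# cross-entropy term on the labeled subset, both through a shared hidden
# layer. Loss computation only; illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_cls = 20, 10, 3
W = rng.normal(scale=0.1, size=(n_in, n_hid))    # encoder (tied decoder: W.T)
V = rng.normal(scale=0.1, size=(n_hid, n_cls))   # classifier head

def hybrid_loss(X, y_labeled, labeled_mask, noise=0.2, lam=1.0):
    X_noisy = X + noise * rng.normal(size=X.shape)     # corrupt inputs (denoising)
    H = np.tanh(X_noisy @ W)                           # shared hidden code
    X_rec = H @ W.T                                    # reconstruction
    recon = np.mean((X - X_rec) ** 2)                  # unsupervised term (all data)

    logits = H[labeled_mask] @ V                       # supervised term (labeled data)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    sup = -np.mean(logp[np.arange(labeled_mask.sum()), y_labeled])
    return recon + lam * sup

X = rng.normal(size=(100, n_in))
labeled_mask = np.zeros(100, dtype=bool)
labeled_mask[:20] = True
y_labeled = rng.integers(0, n_cls, size=20)
print(hybrid_loss(X, y_labeled, labeled_mask))
```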