Search Results for author: Jong-Hoon Oh

Found 15 papers, 1 paper with code

BERTAC: Enhancing Transformer-based Language Models with Adversarially Pretrained Convolutional Neural Networks

1 code implementation ACL 2021 Jong-Hoon Oh, Ryu Iida, Julien Kloetzer, Kentaro Torisawa

We show that on the GLUE tasks, the combination of our pretrained CNN with ALBERT outperforms the original ALBERT and achieves performance similar to that of the SOTA.

Learning Generative Models with the Up Propagation Algorithm

no code implementations NeurIPS 1997 Jong-Hoon Oh, H. Sebastian Seung

Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.
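The abstract describes the mechanism concretely enough to sketch: hidden variables generate a pattern top-down, a bottom-up error signal iteratively corrects the hidden variables in a negative feedback loop, and the same error drives weight learning. Below is a minimal NumPy illustration of that idea for the linear special case, using random low-rank patterns as a stand-in for the paper's handwritten digits; all dimensions, learning rates, and step counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (stand-in for digit images): 64-dimensional patterns
# generated from 4 latent causes, so a low-dimensional code exists.
n_vis, n_hid, n_samples = 64, 8, 100
true_W = rng.standard_normal((4, n_vis)) / np.sqrt(n_vis)
X = rng.standard_normal((n_samples, 4)) @ true_W

# Top-down generative weights of the model being learned.
W = 0.01 * rng.standard_normal((n_hid, n_vis))

def invert(x, W, steps=100, lr=0.05):
    """Invert the linear generative model x_hat = s @ W for one pattern.

    The hidden code s is refined by a negative feedback loop: the
    reconstruction error e = x - s @ W is propagated bottom-up through
    W.T, i.e. gradient descent on the squared reconstruction error."""
    s = np.zeros(W.shape[0])
    for _ in range(steps):
        e = x - s @ W          # bottom-up error signal
        s += lr * (e @ W.T)    # negative-feedback update of hidden variables
    return s

def mean_error(X, W):
    """Mean squared reconstruction error after inversion."""
    return np.mean([np.sum((x - invert(x, W) @ W) ** 2) for x in X])

err_before = mean_error(X, W)

# Learning: the same error signal drives a gradient step on the
# generative weights, one pattern at a time.
for epoch in range(10):
    for x in X:
        s = invert(x, W)
        W += 0.02 * np.outer(s, x - s @ W)

err_after = mean_error(X, W)
print(err_before, err_after)   # reconstruction error drops after learning
```

Because the data here are a linear mixture of a few latent causes, this linear sketch ends up recovering a PCA-like subspace, which is exactly the baseline the paper compares against; the paper's model is more general (nonlinear top-down generation), but the inversion-by-feedback structure is the same.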
