no code implementations • NeurIPS 1997 • Jong-Hoon Oh, H. Sebastian Seung
Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.
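The abstract describes the mechanism concretely enough for a sketch. Below is a minimal, hypothetical one-layer NumPy illustration of the inversion-by-negative-feedback idea: hidden variables are inferred by gradient descent driven by the reconstruction error propagated bottom-up through the transpose of the top-down weights, and the same error signal updates the generative weights. The dimensions, tanh nonlinearity, and learning rates are assumptions; the paper itself uses multilayer generative models.

```python
import numpy as np

# Hypothetical sizes and rates; the paper works with multilayer models
# trained on handwritten digit images.
rng = np.random.default_rng(0)
n_visible, n_hidden = 64, 16
W = 0.1 * rng.standard_normal((n_visible, n_hidden))  # top-down weights

def f(a):                        # generative nonlinearity (tanh assumed)
    return np.tanh(a)

def f_prime(a):
    return 1.0 - np.tanh(a) ** 2

def invert(x, W, steps=50, eta=0.1):
    """Infer hidden variables s by negative feedback: the error signal,
    propagated bottom-up via W.T, drives gradient descent on s."""
    s = np.zeros(W.shape[1])
    for _ in range(steps):
        a = W @ s
        e = x - f(a)                        # top-down reconstruction error
        s += eta * W.T @ (e * f_prime(a))   # error propagated bottom-up
    return s

def learn_step(x, W, lam=0.01):
    """The same error signal also updates the generative weights."""
    s = invert(x, W)
    a = W @ s
    e = x - f(a)
    W += lam * np.outer(e * f_prime(a), s)
    return W

# Usage: one online learning pass over toy patterns.
for _ in range(100):
    x = f(W @ rng.standard_normal(n_hidden))  # toy sensory input
    W = learn_step(x, W)
```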
no code implementations • COLING 2016 • Junta Mizuno, Masahiro Tanaka, Kiyonori Ohtake, Jong-Hoon Oh, Julien Kloetzer, Chikara Hashimoto, Kentaro Torisawa
We demonstrate our large-scale NLP systems: WISDOM X, DISAANA, and D-SUMM.
no code implementations • ACL 2019 • Jong-Hoon Oh, Kazuma Kadowaki, Julien Kloetzer, Ryu Iida, Kentaro Torisawa
In this paper, we propose a method for why-question answering (why-QA) that uses an adversarial learning framework.
no code implementations • IJCNLP 2019 • Kazuma Kadowaki, Ryu Iida, Kentaro Torisawa, Jong-Hoon Oh, Julien Kloetzer
Furthermore, we investigate the effect of supplying background knowledge to our classifiers.
1 code implementation • ACL 2021 • Jong-Hoon Oh, Ryu Iida, Julien Kloetzer, Kentaro Torisawa
We show that on the GLUE tasks, the combination of our pretrained CNN with ALBERT outperforms the original ALBERT and achieves performance similar to the state of the art.
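As a rough illustration only: the abstract does not say how the pretrained CNN and ALBERT are combined, so the sketch below shows one plausible fusion, concatenating max-pooled CNN features (computed over ALBERT's token states) with the [CLS] representation before a classification head. The class name, kernel sizes, and concatenation fusion are all assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
from transformers import AlbertModel

class CnnAlbertClassifier(nn.Module):
    """Hypothetical CNN+ALBERT fusion for a GLUE-style classification task.
    The paper pretrains the CNN separately; here the CNN branch simply runs
    over ALBERT's token states as a stand-in."""

    def __init__(self, num_labels=2, n_filters=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.albert = AlbertModel.from_pretrained("albert-base-v2")
        hidden = self.albert.config.hidden_size
        # CNN branch: 1-D convolutions of several widths over token states.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(
            hidden + n_filters * len(kernel_sizes), num_labels
        )

    def forward(self, input_ids, attention_mask):
        out = self.albert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]            # ALBERT [CLS] feature
        tok = out.last_hidden_state.transpose(1, 2)  # (batch, hidden, seq)
        cnn_feats = [
            torch.relu(conv(tok)).max(dim=2).values for conv in self.convs
        ]
        fused = torch.cat([cls] + cnn_feats, dim=1)  # simple concat fusion
        return self.classifier(fused)
```

Concatenation is only one of several ways such features could be merged (gating or attention-based fusion are equally plausible); the sketch is meant to make the "CNN combined with ALBERT" claim concrete, not to reproduce the paper's architecture.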