no code implementations • 12 Jun 2013 • Kyunghyun Cho
In this paper, a simple, general method of adding auxiliary stochastic neurons to a multi-layer perceptron is proposed.
no code implementations • 17 Jun 2013 • Kyunghyun Cho, Xi Chen
Gesture recognition using motion capture data and depth sensors has recently drawn increased attention in vision research.
no code implementations • 7 Nov 2013 • Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu, Yoshua Bengio
In this paper we propose and investigate a novel nonlinear unit, called $L_p$ unit, for deep neural networks.
no code implementations • 24 Nov 2013 • Yoshua Bengio, Li Yao, Kyunghyun Cho
Several interesting generative learning algorithms involve a complex probability distribution over many random variables, involving intractable normalization constants or latent variable marginalization.
no code implementations • 20 Dec 2013 • Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Yoshua Bengio
Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996).
no code implementations • NeurIPS 2014 • Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio
We study the complexity of functions computable by deep feedforward neural networks with piecewise linear activations in terms of the symmetries and the number of linear regions that they have.
43 code implementations • 3 Jun 2014 • Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNNs).
Ranked #47 on Machine Translation on WMT2014 English-French
1 code implementation • 5 Jun 2014 • Tapani Raiko, Li Yao, Kyunghyun Cho, Yoshua Bengio
Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data.
Ranked #7 on Image Generation on Binarized MNIST
4 code implementations • NeurIPS 2014 • Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio
Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum.
no code implementations • 28 Jun 2014 • Kyunghyun Cho, Yoshua Bengio
Conditional computation has been proposed as a way to increase the capacity of a deep neural network without increasing the amount of computation required, by activating some parameters and computation "on-demand", on a per-example basis.
121 code implementations • 1 Sep 2014 • Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
Neural machine translation is a recently proposed approach to machine translation.
Ranked #4 on Dialogue Generation on Persona-Chat (using extra training data)
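The attention mechanism introduced in this paper scores each encoder annotation against the previous decoder state and forms a context vector as their weighted average. A minimal numpy sketch of the additive alignment model $e_{ij} = v_a^\top \tanh(W_a s_{i-1} + U_a h_j)$ (the weight names follow the paper; the random dimensions and values are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(s_prev, H, Wa, Ua, va):
    """Bahdanau-style soft alignment: score each source annotation h_j
    against the previous decoder state, then build a context vector."""
    scores = np.array([va @ np.tanh(Wa @ s_prev + Ua @ h) for h in H])
    alpha = softmax(scores)           # attention weights, sum to 1
    context = alpha @ H               # weighted average of annotations
    return context, alpha

rng = np.random.default_rng(0)
T, d = 6, 4                           # source length, hidden size
H = rng.standard_normal((T, d))       # encoder annotations h_1..h_T
Wa, Ua = rng.standard_normal((d, d)), rng.standard_normal((d, d))
va = rng.standard_normal(d)
context, alpha = additive_attention(rng.standard_normal(d), H, Wa, Ua, va)
```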
no code implementations • 2 Sep 2014 • Li Yao, Sherjil Ozair, Kyunghyun Cho, Yoshua Bengio
Orderless NADEs are trained based on a criterion that stochastically maximizes $P(\mathbf{x})$ with all possible orders of factorizations.
no code implementations • WS 2014 • Jean Pouget-Abadie, Dzmitry Bahdanau, Bart van Merrienboer, Kyunghyun Cho, Yoshua Bengio
The authors of (Cho et al., 2014a) have shown that the recently introduced neural network translation systems suffer from a significant drop in translation quality when translating long sentences, unlike existing phrase-based translation systems.
2 code implementations • 3 Sep 2014 • Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, Yoshua Bengio
In this paper, we focus on analyzing the properties of neural machine translation using two models: the RNN Encoder--Decoder and a newly proposed gated recursive convolutional neural network.
no code implementations • 2 Oct 2014 • Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, Yoshua Bengio
Neural language models learn word representations that capture rich linguistic and conceptual information.
no code implementations • 4 Dec 2014 • Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
We replace the Hidden Markov Model (HMM) which is traditionally used in continuous speech recognition with a bi-directional recurrent neural network encoder coupled to a recurrent neural network decoder that directly emits a stream of phonemes.
1 code implementation • IJCNLP 2015 • Sébastien Jean, Kyunghyun Cho, Roland Memisevic, Yoshua Bengio
The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models.
13 code implementations • 11 Dec 2014 • Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, Yoshua Bengio
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs).
Ranked #10 on Music Modeling on JSB Chorales
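The gated recurrent unit evaluated in this comparison can be written in a few lines. A minimal numpy sketch of one GRU step, using the standard update/reset-gate formulation (weight names and sizes are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One step of a gated recurrent unit."""
    Wz, Uz, Wr, Ur, Wh, Uh = (params[k] for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh"))
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde         # interpolated new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = {k: rng.standard_normal((d_h, d_in if k.startswith("W") else d_h)) * 0.1
          for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # run over a short input sequence
    h = gru_step(x, h, params)
```

Because each new state is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays in (-1, 1).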
no code implementations • 19 Dec 2014 • Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, Yoshua Bengio
Here we investigate the embeddings learned by neural machine translation models, a recently-developed class of neural language model.
no code implementations • 9 Feb 2015 • Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, Yoshua Bengio
In this work, we propose a novel recurrent neural network (RNN) architecture.
88 code implementations • 10 Feb 2015 • Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio
Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images.
5 code implementations • ICCV 2015 • Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville
In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions.
no code implementations • 11 Mar 2015 • Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, Yoshua Bengio
Recent work on end-to-end neural network-based architectures for machine translation has shown promising results for En-Fr and En-De translation.
2 code implementations • TACL 2016 • Felix Hill, Kyunghyun Cho, Anna Korhonen, Yoshua Bengio
Distributional models that learn rich semantic word representations are a success story of recent NLP research.
4 code implementations • 3 May 2015 • Francesco Visin, Kyle Kastner, Kyunghyun Cho, Matteo Matteucci, Aaron Courville, Yoshua Bengio
In this paper, we propose a deep neural network architecture for object recognition based on recurrent neural networks.
Ranked #34 on Image Classification on MNIST
14 code implementations • NeurIPS 2015 • Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, Yoshua Bengio
Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis and image caption generation.
Ranked #17 on Speech Recognition on TIMIT
no code implementations • 4 Jul 2015 • Kyunghyun Cho, Aaron Courville, Yoshua Bengio
Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input.
no code implementations • 11 Nov 2015 • Tian Wang, Kyunghyun Cho
In the experiments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs to the LSTM.
1 code implementation • 14 Nov 2015 • Li Yao, Nicolas Ballas, Kyunghyun Cho, John R. Smith, Yoshua Bengio
The task of associating images and videos with a natural language description has attracted a great amount of attention recently.
1 code implementation • NeurIPS 2016 • R. Devon Hjelm, Kyunghyun Cho, Junyoung Chung, Russ Salakhutdinov, Vince Calhoun, Nebojsa Jojic
Variational methods that rely on a recognition network to approximate the posterior of directed graphical models offer better inference and learning than previous methods.
no code implementations • 19 Nov 2015 • Quan Gan, Qipeng Guo, Zheng Zhang, Kyunghyun Cho
In this paper, we propose and study a novel visual object tracking approach based on convolutional networks and recurrent networks.
no code implementations • 19 Nov 2015 • Marcin Moczulski, Kelvin Xu, Aaron Courville, Kyunghyun Cho
Recently there has been growing interest in building active visual object recognizers, as opposed to the usual passive recognizers, which classify a given static image into a predefined set of object categories.
2 code implementations • 22 Nov 2015 • Francesco Visin, Marco Ciccone, Adriana Romero, Kyle Kastner, Kyunghyun Cho, Yoshua Bengio, Matteo Matteucci, Aaron Courville
Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features.
Ranked #18 on Semantic Segmentation on CamVid
1 code implementation • 24 Nov 2015 • Kyunghyun Cho
This is a lecture note for the course DS-GA 3001 "Natural Language Understanding with Distributed Representation" at the Center for Data Science, New York University, in Fall 2015.
no code implementations • NAACL 2016 • Orhan Firat, Kyunghyun Cho, Yoshua Bengio
We propose multi-way, multilingual neural machine translation.
no code implementations • 1 Feb 2016 • Yijun Xiao, Kyunghyun Cho
Document classification tasks have primarily been tackled at the word level.
1 code implementation • NeurIPS 2016 • Rodrigo Nogueira, Kyunghyun Cho
We propose goal-driven web navigation as a benchmark task for evaluating an agent's ability to understand natural language and plan in partially observed environments.
1 code implementation • NAACL 2016 • Felix Hill, Kyunghyun Cho, Anna Korhonen
Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data.
Ranked #16 on Subjectivity Analysis on SUBJ
2 code implementations • ACL 2016 • Junyoung Chung, Kyunghyun Cho, Yoshua Bengio
The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation.
Ranked #3 on Machine Translation on WMT2015 English-German
1 code implementation • 9 May 2016 • The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang
Since its introduction, it has been one of the most used CPU and GPU mathematical compilers - especially in the machine learning community - and has shown steady performance improvements.
no code implementations • 12 May 2016 • Kyunghyun Cho
Recent advances in conditional recurrent language modelling have mainly focused on network architectures (e.g., attention mechanism), learning algorithms (e.g., scheduled sampling and sequence-level training) and novel applications (e.g., image/video description generation, speech recognition, etc.)
1 code implementation • 20 May 2016 • Jiakai Zhang, Kyunghyun Cho
A policy function trained in this way, however, is known to suffer from unexpected behaviours due to the mismatch between the states reachable by the reference policy and those reachable by the trained policy function.
7 code implementations • 6 Jun 2016 • Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, Yan Liu
Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values.
Ranked #4 on Multivariate Time Series Imputation on MuJoCo
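The core trainable-decay idea behind GRU-D can be sketched without the recurrent machinery: a missing input decays from its last observed value toward the empirical mean as the time since that observation grows. A simplified numpy sketch (the paper learns the decay rates and also decays the hidden state; here `w` is a fixed scalar and all names are illustrative):

```python
import numpy as np

def decay_impute(x, mask, deltas, x_mean, w):
    """A missing feature decays from its last observation toward the
    empirical mean as the time gap since that observation grows."""
    x_hat = np.empty_like(x)
    x_last = x_mean.copy()                 # before any observation, use the mean
    for t in range(x.shape[0]):
        gamma = np.exp(-np.maximum(0.0, w * deltas[t]))   # decay factor in (0, 1]
        filled = gamma * x_last + (1.0 - gamma) * x_mean
        x_hat[t] = np.where(mask[t], x[t], filled)        # keep observed values
        x_last = np.where(mask[t], x[t], x_last)          # update last observation
    return x_hat

# Two time steps, two features; both features missing at t=1 with a large gap,
# so the filled values should sit near the empirical mean (zero here).
x = np.array([[1.0, 5.0], [0.0, 0.0]])
mask = np.array([[True, True], [False, False]])
deltas = np.array([[0.0, 0.0], [100.0, 100.0]])
x_hat = decay_impute(x, mask, deltas, x_mean=np.zeros(2), w=1.0)
```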
2 code implementations • EMNLP 2016 • Yasumasa Miyamoto, Kyunghyun Cho
We introduce a recurrent neural network language model (RNN-LM) with long short-term memory (LSTM) units that utilizes both character-level and word-level inputs.
no code implementations • 7 Jun 2016 • Kyunghyun Cho, Masha Esipova
We investigate the potential of attention-based neural machine translation in simultaneous translation.
no code implementations • 8 Jun 2016 • Amjad Almahairi, Kyunghyun Cho, Nizar Habash, Aaron Courville
Neural machine translation has become a major alternative to widely used phrase-based statistical machine translation.
no code implementations • EMNLP 2016 • Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, Kyunghyun Cho
In this paper, we propose a novel finetuning algorithm for the recently introduced multi-way, multilingual neural machine translation model that enables zero-resource machine translation.
no code implementations • COLING 2016 • Amrita Saha, Mitesh M. Khapra, Sarath Chandar, Janarthanan Rajendran, Kyunghyun Cho
However, there is no parallel training data available between X and Y, but training data is available between X & Z and between Z & Y (as is often the case in many real-world applications).
no code implementations • 30 Jun 2016 • Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio
We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and a GRU controller.
Ranked #5 on Question Answering on bAbi
1 code implementation • 3 Jul 2016 • Heeyoul Choi, Kyunghyun Cho, Yoshua Bengio
Based on this observation, in this paper we propose to contextualize the word embedding vectors using a nonlinear bag-of-words representation of the source sentence.
no code implementations • 17 Aug 2016 • Keunwoo Choi, George Fazekas, Brian McFee, Kyunghyun Cho, Mark Sandler
Descriptions are often provided along with recommendations to help users' discovery.
13 code implementations • 14 Sep 2016 • Keunwoo Choi, George Fazekas, Mark Sandler, Kyunghyun Cho
We introduce a convolutional recurrent neural network (CRNN) for music tagging.
1 code implementation • EACL 2017 • Jiatao Gu, Graham Neubig, Kyunghyun Cho, Victor O. K. Li
Translating in real-time, a.k.a.
2 code implementations • TACL 2017 • Jason Lee, Kyunghyun Cho, Thomas Hofmann
We observe that on CS-EN, FI-EN and RU-EN, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of BLEU score and human judgment.
no code implementations • 3 Nov 2016 • R. Devon Hjelm, Eswar Damaraju, Kyunghyun Cho, Helmut Laufs, Sergey M. Plis, Vince Calhoun
We introduce a novel recurrent neural network (RNN) approach to account for temporal dynamics and dependencies in brain networks observed via functional magnetic resonance imaging (fMRI).
no code implementations • 4 Nov 2016 • Hyo-Eun Kim, Sangheum Hwang, Kyunghyun Cho
From the base model, we introduce a semantic noise modeling method which enables class-conditional perturbation in the latent space to enhance the representational power of the learned latent features.
5 code implementations • 2 Feb 2017 • Gilles Louppe, Kyunghyun Cho, Cyril Becot, Kyle Cranmer
Recent progress in applying machine learning for jet physics has been built upon an analogy between calorimeters and images.
1 code implementation • EMNLP 2017 • Jiatao Gu, Kyunghyun Cho, Victor O. K. Li
Instead of trying to build a new decoding algorithm for any specific decoding objective, we propose the idea of trainable decoding algorithm in which we train a decoding algorithm to find a translation that maximizes an arbitrary decoding objective.
1 code implementation • ACL 2017 • Akiko Eriguchi, Yoshimasa Tsuruoka, Kyunghyun Cho
There has been relatively little attention to incorporating linguistic priors into neural machine translation.
6 code implementations • 27 Feb 2017 • R. Devon Hjelm, Athul Paul Jacob, Tong Che, Adam Trischler, Kyunghyun Cho, Yoshua Bengio
We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.
4 code implementations • EACL 2017 • Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, Maria Nădejde
We present Nematus, a toolkit for Neural Machine Translation.
2 code implementations • 21 Mar 2017 • Krzysztof J. Geras, Stacey Wolfson, Yiqiu Shen, Nan Wu, S. Gene Kim, Eric Kim, Laura Heacock, Ujas Parikh, Linda Moy, Kyunghyun Cho
In our work, we propose to use a multi-view deep convolutional neural network that handles a set of high-resolution medical images.
3 code implementations • 27 Mar 2017 • Keunwoo Choi, György Fazekas, Mark Sandler, Kyunghyun Cho
In this paper, we present a transfer learning approach for music classification and regression tasks.
2 code implementations • EMNLP 2017 • Rodrigo Nogueira, Kyunghyun Cho
In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned.
no code implementations • 17 Apr 2017 • Sebastien Jean, Stanislas Lauly, Orhan Firat, Kyunghyun Cho
We propose a neural machine translation architecture that models the surrounding text in addition to the source sentence.
3 code implementations • 18 Apr 2017 • Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering.
no code implementations • 20 Apr 2017 • Cem M. Deniz, Siyuan Xiang, Spencer Hallyburton, Arakua Welbeck, James S. Babb, Stephen Honig, Kyunghyun Cho, Gregory Chang
However, manual segmentation of MR images of bone is time-consuming, limiting the use of MRI measurements in clinical practice.
no code implementations • 20 May 2017 • Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O. K. Li
In this paper, we extend an attention-based neural machine translation (NMT) model by allowing it to access an entire training set of parallel sentence pairs even after training.
1 code implementation • ICLR 2018 • Katrina Evtimova, Andrew Drozdov, Douwe Kiela, Kyunghyun Cho
Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration.
no code implementations • 7 Jun 2017 • Keunwoo Choi, George Fazekas, Kyunghyun Cho, Mark Sandler
The results highlight several important aspects of music tagging and neural networks.
1 code implementation • ACL 2018 • Lifu Huang, Heng Ji, Kyunghyun Cho, Clare R. Voss
Most previous event extraction studies have relied heavily on features derived from annotated event mentions and thus cannot be applied to new event types without annotation effort.
1 code implementation • WS 2017 • Kyunghyun Cho
This paper describes a builder entry, named "strawman", to the sentence-level sentiment analysis task of the "Build It, Break It" shared task of the First Workshop on Building Linguistically Generalizable NLP Systems.
no code implementations • WS 2017 • Sebastien Jean, Stanislas Lauly, Orhan Firat, Kyunghyun Cho
In this paper we present our systems for the DiscoMT 2017 cross-lingual pronoun prediction shared task.
1 code implementation • 6 Sep 2017 • Keunwoo Choi, György Fazekas, Kyunghyun Cho, Mark Sandler
In this paper, we empirically investigate the effect of audio preprocessing on music tagging with deep neural networks.
2 code implementations • 13 Sep 2017 • Keunwoo Choi, György Fazekas, Kyunghyun Cho, Mark Sandler
Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research.
no code implementations • 22 Sep 2017 • Tian Wang, Kyunghyun Cho
The goal of personalized history-based recommendation is to automatically output a distribution over all the items given a sequence of previous purchases of a user.
no code implementations • ICLR 2018 • Jason Lee, Kyunghyun Cho, Jason Weston, Douwe Kiela
While most machine translation systems to date are trained on large parallel corpora, humans learn language in a different way: by being grounded in an environment and interacting with other humans.
no code implementations • 12 Oct 2017 • Meihao Chen, Zhuoru Lin, Kyunghyun Cho
It is common practice to ignore any structural information underlying the classes in multi-class classification.
2 code implementations • ICLR 2018 • Mikel Artetxe, Gorka Labaka, Eneko Agirre, Kyunghyun Cho
In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs.
Ranked #6 on Machine Translation on WMT2015 English-German
1 code implementation • 10 Nov 2017 • Nan Wu, Krzysztof J. Geras, Yiqiu Shen, Jingyi Su, S. Gene Kim, Eric Kim, Stacey Wolfson, Linda Moy, Kyunghyun Cho
Breast density classification is an essential part of breast cancer screening.
no code implementations • ICLR 2018 • Sean Welleck, Zixin Yao, Yu Gai, Jialin Mao, Zheng Zhang, Kyunghyun Cho
In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making.
no code implementations • NeurIPS 2017 • Sean Welleck, Jialin Mao, Kyunghyun Cho, Zheng Zhang
Humans process visual scenes selectively and sequentially using attention.
no code implementations • ICLR 2018 • Elman Mansimov, Kyunghyun Cho
As this policy does not require any optimization, it allows us to investigate the underlying difficulty of a task without being distracted by optimization difficulty of a learning algorithm.
2 code implementations • EMNLP 2018 • Jason Lee, Elman Mansimov, Kyunghyun Cho
We propose a conditional non-autoregressive neural sequence model based on iterative refinement.
Ranked #5 on Machine Translation on IWSLT2015 German-English
no code implementations • 26 Feb 2018 • Jake Zhao, Kyunghyun Cho
We propose a retrieval-augmented convolutional network and train it with local mixup, a novel variant of the recently proposed mixup algorithm.
no code implementations • 19 Mar 2018 • Noah Weber, Leena Shekhar, Niranjan Balasubramanian, Kyunghyun Cho
Attention-based neural abstractive summarization systems equipped with copy mechanisms have shown promising results.
no code implementations • 30 Mar 2018 • Heeyoul Choi, Kyunghyun Cho, Yoshua Bengio
Neural machine translation (NMT) has been a new paradigm in machine translation, and the attention mechanism has become the dominant approach with the state-of-the-art records in many language pairs.
no code implementations • NAACL 2018 • Phu Mon Htut, Samuel R. Bowman, Kyunghyun Cho
In recent years, there have been amazing advances in deep learning methods for machine reading.
no code implementations • 19 Apr 2018 • Cinjon Resnick, Ilya Kulikov, Kyunghyun Cho, Jason Weston
Interest in emergent communication has recently surged in Machine Learning.
3 code implementations • EMNLP 2018 • Douwe Kiela, Changhan Wang, Kyunghyun Cho
While one of the first steps in many NLP systems is selecting what pre-trained word embeddings to use, we argue that such a step is better left for neural networks to figure out by themselves.
Ranked #52 on Natural Language Inference on SNLI
no code implementations • EMNLP 2018 • Lifu Huang, Kyunghyun Cho, Boliang Zhang, Heng Ji, Kevin Knight
We construct a multilingual common semantic space based on distributional semantics, where words from multiple languages are projected into a shared space to enable knowledge and resource transfer across languages.
1 code implementation • EMNLP 2018 • Yun Chen, Victor O. K. Li, Kyunghyun Cho, Samuel R. Bowman
Beam search is a widely used approximate search strategy for neural network decoders, and it generally outperforms simple greedy decoding on tasks like machine translation.
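Beam search itself is simple to sketch: at each step, expand every hypothesis in the beam, keep the top-k by cumulative log-probability, and set finished hypotheses aside. A minimal, model-agnostic Python version (the `step_logprobs` interface and toy model are illustrative, not from the paper):

```python
import math

def beam_search(step_logprobs, beam_size, max_len, eos):
    """Generic beam search over a next-token log-probability function.
    `step_logprobs(prefix)` returns {token: logprob} for the next position."""
    beams = [((), 0.0)]               # (token sequence, cumulative logprob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, lp in step_logprobs(seq).items():
                candidates.append((seq + (tok,), score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_size]:
            # Hypotheses ending in EOS are set aside; the rest stay in the beam.
            (finished if seq[-1] == eos else beams).append((seq, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])

# Toy model: prefers token "a", then forces end-of-sequence after two tokens.
def toy_model(prefix):
    if len(prefix) < 2:
        return {"a": math.log(0.6), "b": math.log(0.4)}
    return {"</s>": math.log(1.0)}

best, score = beam_search(toy_model, beam_size=2, max_len=5, eos="</s>")
```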
4 code implementations • 30 Apr 2018 • Seokho Kang, Kyunghyun Cho
Although machine learning has been successfully used to propose novel molecules that satisfy desired properties, it is still challenging to explore a large chemical space efficiently.
1 code implementation • ICLR 2019 • Konrad Zolna, Krzysztof J. Geras, Kyunghyun Cho
To address this problem, we propose classifier-agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just one given in advance.
3 code implementations • ICLR 2019 • Xiaodong Gu, Kyunghyun Cho, Jung-Woo Ha, Sunghun Kim
Variational autoencoders~(VAEs) have shown a promise in data-driven conversation modeling.
no code implementations • 18 Jun 2018 • Amjad Almahairi, Kyle Kastner, Kyunghyun Cho, Aaron Courville
However, interestingly, the greater modeling power offered by the recurrent neural network appears to undermine the model's ability to act as a regularizer of the product representations.
no code implementations • WS 2018 • Changhan Wang, Kyunghyun Cho, Douwe Kiela
We describe our work for the CALCS 2018 shared task on named entity recognition on code-switched data.
1 code implementation • 18 Jul 2018 • Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alexander Peysakhovich, Kyunghyun Cho, Joan Bruna
Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency.
no code implementations • EMNLP 2018 • Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, Victor O. K. Li
We frame low-resource translation as a meta-learning problem, and we learn to adapt to low-resource languages based on multilingual high-resource language tasks.
1 code implementation • EMNLP (ACL) 2018 • Phu Mon Htut, Kyunghyun Cho, Samuel R. Bowman
A substantial thread of recent work on latent tree learning has attempted to develop neural network models with parse-valued latent variables and train them on non-parsing tasks, in the hope of having them discover interpretable tree structure.
1 code implementation • WS 2018 • Jasmijn Bastings, Marco Baroni, Jason Weston, Kyunghyun Cho, Douwe Kiela
Lake and Baroni (2018) recently introduced the SCAN data set, which consists of simple commands paired with action sequences and is intended to test the strong generalization abilities of recurrent sequence-to-sequence models.
no code implementations • 27 Sep 2018 • Adji B. Dieng, Kyunghyun Cho, David M. Blei, Yann Lecun
Furthermore, the reflective likelihood objective prevents posterior collapse when used to train stochastic auto-encoders with amortized inference.
no code implementations • 27 Sep 2018 • Jason Lee, Kyunghyun Cho, Douwe Kiela
While reinforcement learning (RL) shows a lot of promise for natural language processing, e.g.
no code implementations • ICLR 2019 • Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
We empirically verify our result using deep convolutional networks and observe a higher correlation between the gradient stochasticity and the proposed directional uniformity than that against the gradient norm stochasticity, suggesting that the directional statistics of minibatch gradients is a major factor behind SGD.
no code implementations • EMNLP 2018 • Rujun Han, Michael Gill, Arthur Spirling, Kyunghyun Cho
Conventional word embedding models do not leverage information from document meta-data, and they do not model uncertainty.
no code implementations • ACL 2019 • Sean Welleck, Jason Weston, Arthur Szlam, Kyunghyun Cho
Consistency is a long standing issue faced by dialogue models.
1 code implementation • WS 2019 • Ilia Kulikov, Alexander H. Miller, Kyunghyun Cho, Jason Weston
We investigate the impact of search strategies in neural dialogue modeling.
6 code implementations • 13 Jan 2019 • Rodrigo Nogueira, Kyunghyun Cho
Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2017), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference.
Ranked #3 on Passage Re-Ranking on MS MARCO (using extra training data)
1 code implementation • IJCNLP 2019 • Laura Graesser, Kyunghyun Cho, Douwe Kiela
In this work, we propose a computational framework in which agents equipped with communication capabilities simultaneously play a series of referential games, where agents are trained using deep reinforcement learning.
no code implementations • TACL 2019 • Jiatao Gu, Qi Liu, Kyunghyun Cho
Conventional neural autoregressive decoding commonly assumes a fixed left-to-right generation order, which may be sub-optimal.
1 code implementation • WS 2019 • Sean Welleck, Kianté Brantley, Hal Daumé III, Kyunghyun Cho
Standard sequential generation methods assume a pre-specified generation order, such as text generation methods which generate words from left to right.
11 code implementations • WS 2019 • Alex Wang, Kyunghyun Cho
We show that BERT (Devlin et al., 2018) is a Markov random field language model.
5 code implementations • 19 Feb 2019 • Mate Kisantal, Zbigniew Wojna, Jakub Murawski, Jacek Naruniec, Kyunghyun Cho
We evaluate different pasting augmentation strategies, and ultimately, we achieve a 9.7% relative improvement on the instance segmentation and 7.1% on the object detection of small objects, compared to the current state-of-the-art method.
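The copy-paste idea behind this augmentation can be sketched in a few lines; `paste_object` and its uniform random placement are illustrative assumptions, not the paper's exact pipeline (which also handles segmentation masks and overlap constraints):

```python
import numpy as np

def paste_object(image, patch, rng):
    """Paste a small object patch at a random location and return the
    augmented image plus the new instance's bounding box (x, y, w, h)."""
    H, W = image.shape[:2]
    h, w = patch.shape[:2]
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    out = image.copy()
    out[y:y + h, x:x + w] = patch
    return out, (x, y, w, h)

rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3), dtype=np.uint8)   # toy background
obj = np.full((8, 8, 3), 255, dtype=np.uint8)  # toy small object
aug, box = paste_object(img, obj, rng)
```

Repeating the paste several times per image is what oversamples small instances during training.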
1 code implementation • 11 Mar 2019 • Siavash Golkar, Michael Kagan, Kyunghyun Cho
We introduce Continual Learning via Neural Pruning (CLNP), a new method aimed at lifelong learning in fixed capacity models based on neuronal model sparsification.
no code implementations • 12 Mar 2019 • Sébastien Jean, Kyunghyun Cho
By comparing performance using actual and random contexts, we show that a model trained with the proposed algorithm is more sensitive to the additional context.
2 code implementations • 20 Mar 2019 • Nan Wu, Jason Phang, Jungkyu Park, Yiqiu Shen, Zhe Huang, Masha Zorin, Stanisław Jastrzębski, Thibault Févry, Joe Katsnelson, Eric Kim, Stacey Wolfson, Ujas Parikh, Sushma Gaddam, Leng Leng Young Lin, Kara Ho, Joshua D. Weinstein, Beatriu Reig, Yiming Gao, Hildegard Toth, Kristine Pysarenko, Alana Lewin, Jiyon Lee, Krystal Airola, Eralda Mema, Stephanie Chung, Esther Hwang, Naziya Samreen, S. Gene Kim, Laura Heacock, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
We present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images).
1 code implementation • 31 Mar 2019 • Elman Mansimov, Omar Mahmood, Seokho Kang, Kyunghyun Cho
Conventional conformation generation methods minimize hand-designed molecular force field energy functions that are often not well correlated with the true energy function of a molecule observed in nature.
5 code implementations • 17 Apr 2019 • Rodrigo Nogueira, Wei Yang, Jimmy Lin, Kyunghyun Cho
One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, this might comprise questions the document can potentially answer.
Ranked #1 on Passage Re-Ranking on TREC-PM
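The document-expansion idea can be sketched as follows; `expand_document` and the stubbed query list are hypothetical stand-ins for the trained seq2seq query generator the paper uses:

```python
def expand_document(doc, predicted_queries):
    """Append model-predicted queries to the document text before indexing,
    so term-matching retrieval can hit vocabulary the document itself omits."""
    return doc + " " + " ".join(predicted_queries)

doc = "The Eiffel Tower was completed in 1889."
# in the real system these come from a model trained on (query, relevant doc) pairs
queries = ["when was the eiffel tower built", "eiffel tower completion year"]
expanded = expand_document(doc, queries)
```

The expanded text, not the original, is what gets indexed by the search engine.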
1 code implementation • 29 Apr 2019 • Jihun Oh, Kyunghyun Cho, Joan Bruna
As an efficient and scalable graph neural network, GraphSAGE has enabled an inductive capability for inferring unseen nodes or graphs by aggregating subsampled local neighborhoods and by learning in a mini-batch gradient descent fashion.
no code implementations • ICLR 2019 • Yu Gai, Zheng Zhang, Kyunghyun Cho
Many important classification performance metrics, e.g., $F$-measure, are non-differentiable and non-decomposable, and are thus unfriendly to gradient-descent algorithms.
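As a concrete illustration of why such metrics resist gradient descent: $F_\beta$ is computed from dataset-level integer counts rather than a per-example sum (a minimal sketch; `f_measure` is an illustrative helper):

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F_beta from confusion counts. Non-decomposable: it depends on
    dataset-level counts, not a sum of per-example losses, and the
    integer-valued counts give it no useful gradient."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```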
no code implementations • ICLR 2019 • Cinjon Resnick, Roberta Raileanu, Sanyam Kapoor, Alexander Peysakhovich, Kyunghyun Cho, Joan Bruna
Our contributions are that we analytically characterize the types of environments where Backplay can improve training speed, demonstrate the effectiveness of Backplay both in large grid worlds and a complex four player zero-sum game (Pommerman), and show that Backplay compares favorably to other competitive methods known to improve sample efficiency.
no code implementations • 14 May 2019 • Siavash Golkar, Kyunghyun Cho
We introduce a novel algorithm for the detection of possible sample corruption such as mislabeled samples in a training dataset given a small clean validation set.
no code implementations • 24 May 2019 • Iddo Drori, Yamuna Krishnamurthy, Raoni Lourenco, Remi Rampin, Kyunghyun Cho, Claudio Silva, Juliana Freire
Automatic machine learning is an important problem at the forefront of machine learning.
no code implementations • RANLP 2019 • Sean Welleck, Kyunghyun Cho
We propose a method for non-projective dependency parsing by incrementally predicting a set of edges.
no code implementations • 28 May 2019 • Owen Marschall, Kyunghyun Cho, Cristina Savin
To learn useful dynamics on long time scales, neurons must use plasticity rules that account for long-term, circuit-wide effects of synaptic changes.
1 code implementation • 29 May 2019 • Elman Mansimov, Alex Wang, Sean Welleck, Kyunghyun Cho
We investigate this problem by proposing a generalized model of sequence generation that unifies decoding in directed and undirected models.
1 code implementation • 1 Jun 2019 • Ilia Kulikov, Jason Lee, Kyunghyun Cho
We propose a novel approach for conversation-level inference by explicitly modeling the dialogue partner and running beam search across multiple conversation turns.
no code implementations • ACL 2019 • Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O. K. Li
Zero-shot translation, translating between language pairs on which a Neural Machine Translation (NMT) system has never been trained, is an emergent property when training the system in multilingual settings.
no code implementations • 7 Jun 2019 • Yiqiu Shen, Nan Wu, Jason Phang, Jungkyu Park, Gene Kim, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
Moreover, both the global structure and local details play important roles in medical image analysis tasks.
2 code implementations • 9 Jun 2019 • Keunwoo Choi, Kyunghyun Cho
We introduce DrummerNet, a drum transcription system that is trained in an unsupervised manner.
Sound • Audio and Speech Processing
no code implementations • ACL 2019 • Raphael Shu, Hideki Nakayama, Kyunghyun Cho
In this work, we attempt to obtain diverse translations by using sentence codes to condition the sentence generation.
no code implementations • 5 Jul 2019 • Owen Marschall, Kyunghyun Cho, Cristina Savin
We present a framework for compactly summarizing many recent results in efficient and/or biologically plausible online training of recurrent neural networks (RNN).
no code implementations • NeurIPS 2019 • Nishant Subramani, Samuel R. Bowman, Kyunghyun Cho
We then investigate the conditions under which a language model can be made to generate a sentence through the identification of a point in such a space. We find that it is possible to recover arbitrary sentences nearly perfectly with language models and representations of moderate size, without modifying any model parameters.
no code implementations • 30 Jul 2019 • Jungkyu Park, Jason Phang, Yiqiu Shen, Nan Wu, S. Gene Kim, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
Radiologists typically compare a patient's most recent breast cancer screening exam to their previous ones in making informed diagnoses.
no code implementations • 1 Aug 2019 • Thibault Févry, Jason Phang, Nan Wu, S. Gene Kim, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
We trained and evaluated a localization-based deep CNN for breast cancer screening exam classification on over 200,000 exams (over 1,000,000 images).
5 code implementations • ICLR 2020 • Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, Jason Weston
Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core.
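A simplified sketch of the token-level unlikelihood objective these authors propose: the usual negative log-likelihood of the target plus a penalty pushing down the probability of unwanted tokens (assuming the standard formulation; details differ in the paper):

```python
import numpy as np

def unlikelihood_loss(probs, target, negatives):
    """probs: the model's next-token distribution.
    negatives: token ids to discourage (e.g. already-generated tokens)."""
    nll = -np.log(probs[target])                         # likelihood term
    ul = -sum(np.log(1.0 - probs[c]) for c in negatives)  # unlikelihood term
    return nll + ul

probs = np.array([0.7, 0.2, 0.1])   # toy 3-token distribution
loss = unlikelihood_loss(probs, target=0, negatives=[1])
```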
1 code implementation • 20 Aug 2019 • Raphael Shu, Jason Lee, Hideki Nakayama, Kyunghyun Cho
By decoding multiple initial latent variables in parallel and rescoring with a teacher model, the proposed model further narrows the gap to 1.0 BLEU point on the WMT'14 En-De task with a 6.8x speedup.
2 code implementations • ICLR 2020 • William Whitney, Rajat Agarwal, Kyunghyun Cho, Abhinav Gupta
In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL).
no code implementations • IJCNLP 2019 • Katharina Kann, Kyunghyun Cho, Samuel R. Bowman
Here, we aim to answer the following questions: Does using a development set for early stopping in the low-resource setting influence results as compared to a more realistic alternative, where the number of training epochs is tuned on development languages?
1 code implementation • 7 Sep 2019 • Changhan Wang, Kyunghyun Cho, Jiatao Gu
Representing text at the level of bytes and using the 256 byte set as vocabulary is a potential solution to this issue.
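Byte-level representation is easy to make concrete: every string maps into a fixed 256-symbol vocabulary with no out-of-vocabulary tokens (a minimal sketch):

```python
def byte_encode(text):
    """Tokenize text as raw UTF-8 bytes: ids always fall in [0, 256)."""
    return list(text.encode("utf-8"))

def byte_decode(ids):
    """Inverse mapping back to the original string."""
    return bytes(ids).decode("utf-8")

ids = byte_encode("héllo")   # 'é' becomes two bytes under UTF-8
```

The trade-off is longer sequences: non-ASCII characters expand into multiple ids.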
no code implementations • IJCNLP 2019 • Jason Lee, Kyunghyun Cho, Douwe Kiela
Emergent multi-agent communication protocols are very different from natural language and not easily interpretable by humans.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Owen Marschall, Kyunghyun Cho, Cristina Savin
To what extent can successful machine learning inform our understanding of biological learning?
1 code implementation • 12 Sep 2019 • Ethan Perez, Siddharth Karamcheti, Rob Fergus, Jason Weston, Douwe Kiela, Kyunghyun Cho
We propose a system that finds the strongest supporting evidence for a given answer to a question, using passage-based question-answering (QA) as a testbed.
no code implementations • 22 Sep 2019 • Phu Mon Htut, Kyunghyun Cho, Samuel R. Bowman
Latent tree learning (LTL) methods learn to parse sentences using only indirect supervision from a downstream task.
2 code implementations • ICLR 2020 • Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks.
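The mixout operation itself is compact; a minimal NumPy sketch, assuming the standard formulation (randomly swap in pretrained parameters with probability p, then rescale so the expectation matches the fine-tuned parameters):

```python
import numpy as np

def mixout(w, w_pre, p, rng):
    """With probability p, each fine-tuned parameter in w is replaced by its
    pretrained value w_pre; the result is rescaled so E[mixout(w)] = w."""
    mask = rng.random(w.shape) < p          # True -> use pretrained value
    mixed = np.where(mask, w_pre, w)
    return (mixed - p * w_pre) / (1.0 - p)

rng = np.random.default_rng(0)
w_pre = np.zeros(5)   # toy pretrained weights
w = np.ones(5)        # toy fine-tuned weights
mixed = mixout(w, w_pre, 0.5, rng)
```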
3 code implementations • 3 Oct 2019 • Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala
Many (but not all) approaches self-qualifying as "meta-learning" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem.
no code implementations • 16 Oct 2019 • Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng
We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.
1 code implementation • 24 Oct 2019 • Cinjon Resnick, Abhinav Gupta, Jakob Foerster, Andrew M. Dai, Kyunghyun Cho
In this paper, we investigate the learning biases that affect the efficacy and compositionality of emergent languages.
3 code implementations • 31 Oct 2019 • Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, Jimmy Lin
The advent of deep neural networks pre-trained via language modeling tasks has spurred a number of successful applications in natural language processing.
no code implementations • WS 2019 • Katharina Kann, Anhad Mohananey, Samuel R. Bowman, Kyunghyun Cho
Recently, neural network models which automatically infer syntactic structure from raw text have started to achieve promising results.
no code implementations • IJCNLP 2019 • Ethan Perez, Siddharth Karamcheti, Rob Fergus, Jason Weston, Douwe Kiela, Kyunghyun Cho
We propose a system that finds the strongest supporting evidence for a given answer to a question, using passage-based question-answering (QA) as a testbed.
1 code implementation • ACL 2020 • Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, Jason Weston
Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address.
no code implementations • 23 Jan 2020 • Rodrigo Nogueira, Zhiying Jiang, Kyunghyun Cho, Jimmy Lin
Citation recommendation systems for the scientific literature, to help authors find papers that should be cited, have the potential to speed up discoveries and uncover new routes for scientific exploration.
no code implementations • MIDL 2019 • Nan Wu, Stanisław Jastrzębski, Jungkyu Park, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
In breast cancer screening, radiologists make the diagnosis based on images that are taken from two angles.
1 code implementation • EMNLP 2020 • Sean Welleck, Ilia Kulikov, Jaedeok Kim, Richard Yuanzhe Pang, Kyunghyun Cho
Despite strong performance on a variety of tasks, neural sequence models trained with maximum likelihood have been shown to exhibit issues such as length bias and degenerate repetition.
1 code implementation • 13 Feb 2020 • Yiqiu Shen, Nan Wu, Jason Phang, Jungkyu Park, Kangning Liu, Sudarshini Tyagi, Laura Heacock, S. Gene Kim, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
In this work, we extend the globally-aware multiple instance classifier, a framework we proposed to address these unique properties of medical images.
1 code implementation • EMNLP (spnlp) 2020 • Jason Lee, Dustin Tran, Orhan Firat, Kyunghyun Cho
In this paper, by comparing several density estimators on five machine translation tasks, we find that the correlation between rankings of models based on log-likelihood and BLEU varies significantly depending on the range of the model families being compared.
no code implementations • ICLR 2020 • Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, Krzysztof Geras
We argue for the existence of the "break-even" point on this trajectory, beyond which the curvature of the loss surface and noise in the gradient are implicitly regularized by SGD.
2 code implementations • EMNLP 2020 • Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, Douwe Kiela
We aim to improve question answering (QA) by decomposing hard questions into simpler sub-questions that existing QA systems are capable of answering.
no code implementations • 23 Mar 2020 • Witold Oleszkiewicz, Taro Makino, Stanisław Jastrzębski, Tomasz Trzciński, Linda Moy, Kyunghyun Cho, Laura Heacock, Krzysztof J. Geras
Deep neural networks (DNNs) show promise in breast cancer screening, but their robustness to input perturbations must be better understood before they can be clinically implemented.
2 code implementations • ACL 2020 • Alex Wang, Kyunghyun Cho, Mike Lewis
QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source.
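The answer-agreement intuition can be sketched with a stub QA model; `toy_answer` is a hypothetical stand-in for the trained QA system QAGS actually uses:

```python
def qags_score(questions, answer_fn, summary, source):
    """Ask the same questions against the summary and the source,
    and score the fraction of matching answers."""
    matches = sum(
        answer_fn(q, summary) == answer_fn(q, source) for q in questions
    )
    return matches / len(questions)

def toy_answer(question, text):
    """Toy QA model: 'answers' with the first word starting with the query prefix."""
    for word in text.split():
        if word.lower().startswith(question):
            return word.lower()
    return None

score = qags_score(["par", "lon"], toy_answer,
                   summary="Paris hosted the event",
                   source="Paris will host the games in London")
```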
1 code implementation • 10 Apr 2020 • Edwin Zhang, Nikhil Gupta, Rodrigo Nogueira, Kyunghyun Cho, Jimmy Lin
We present the Neural Covidex, a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI.
1 code implementation • 23 Apr 2020 • Raphael Tang, Rodrigo Nogueira, Edwin Zhang, Nikhil Gupta, Phuong Cam, Kyunghyun Cho, Jimmy Lin
We present CovidQA, the beginnings of a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge.
no code implementations • 28 Apr 2020 • Katharina Kann, Samuel R. Bowman, Kyunghyun Cho
We propose to cast the task of morphological inflection - mapping a lemma to an indicated inflected form - for resource-poor languages as a meta-learning problem.
1 code implementation • EAMT 2020 • António Góis, Kyunghyun Cho, André Martins
Recent research in neural machine translation has explored flexible generation orders, as an alternative to left-to-right generation.
3 code implementations • EACL 2021 • Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner.
1 code implementation • 4 Jun 2020 • Sean Welleck, Kyunghyun Cho
Typical approaches to directly optimizing the task loss such as policy gradient and minimum risk training are based around sampling in the sequence space to obtain candidate update directions that are scored based on the loss of a single sequence.
no code implementations • ACL 2020 • Edwin Zhang, Nikhil Gupta, Rodrigo Nogueira, Kyunghyun Cho, Jimmy Lin
The Neural Covidex is a search engine that exploits the latest neural ranking architectures to provide information access to the COVID-19 Open Research Dataset (CORD-19) curated by the Allen Institute for AI.
no code implementations • WS 2020 • Abhinav Gupta, Cinjon Resnick, Jakob Foerster, Andrew Dai, Kyunghyun Cho
Our hypothesis is that there should be a specific range of model capacity and channel bandwidth that induces compositional structure in the resulting language and consequently encourages systematic generalization.
1 code implementation • EMNLP (sdp) 2020 • Edwin Zhang, Nikhil Gupta, Raphael Tang, Xiao Han, Ronak Pradeep, Kuang Lu, Yue Zhang, Rodrigo Nogueira, Kyunghyun Cho, Hui Fang, Jimmy Lin
We present Covidex, a search engine that exploits the latest neural ranking models to provide information access to the COVID-19 Open Research Dataset curated by the Allen Institute for AI.
8 code implementations • EMNLP 2020 • Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych
We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages.
1 code implementation • EMNLP (MRL) 2021 • Houda Alberts, Teresa Huang, Yash Deshpande, Yibo Liu, Kyunghyun Cho, Clara Vania, Iacer Calixto
We also release a neural multi-modal retrieval model that can use images or sentences as inputs and retrieves entities in the KG.
1 code implementation • 31 Aug 2020 • William Falcon, Kyunghyun Cho
Contrastive self-supervised learning (CSL) is an approach to learn useful representations by solving a pretext task that selects and compares anchor, negative and positive (APN) features from an unlabeled dataset.
Ranked #26 on Image Classification on STL-10
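The APN comparison is typically scored with an InfoNCE-style contrastive loss; a minimal sketch (the loss form and temperature are standard assumptions, not this paper's exact objective):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Pull the anchor toward the positive and away from the negatives,
    using cosine-similarity logits through a softmax cross-entropy."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

a = np.array([1.0, 0.0])        # anchor feature
pos = np.array([0.9, 0.1])      # positive (another view of the anchor)
neg = [np.array([0.0, 1.0])]    # negative
loss = info_nce(a, pos, neg)
```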
1 code implementation • Asian Chapter of the Association for Computational Linguistics 2020 • Moin Nadeem, Tianxing He, Kyunghyun Cho, James Glass
On the other hand, we find that the set of sampling algorithms that satisfies these properties performs on par with the existing sampling algorithms.
1 code implementation • 15 Sep 2020 • William F. Whitney, Min Jae Song, David Brandfonbrener, Jaan Altosaar, Kyunghyun Cho
We consider the problem of evaluating representations of data for use in solving a downstream task.
1 code implementation • EMNLP 2020 • Jason Lee, Raphael Shu, Kyunghyun Cho
Given a continuous latent variable model for machine translation (Shu et al., 2020), we train an inference network to approximate the gradient of the marginal log probability of the target sentence, using only the latent variable as input.
no code implementations • ICLR 2021 • Shuhei Kurita, Kyunghyun Cho
Vision-and-language navigation (VLN) is a task in which an agent is embodied in a realistic 3D environment and follows an instruction to reach the goal node.
no code implementations • 19 Sep 2020 • Nan Wu, Zhe Huang, Yiqiu Shen, Jungkyu Park, Jason Phang, Taro Makino, S. Gene Kim, Kyunghyun Cho, Laura Heacock, Linda Moy, Krzysztof J. Geras
Breast cancer is the most common cancer in women, and hundreds of thousands of unnecessary biopsies are done around the world at a tremendous cost.
1 code implementation • EMNLP 2020 • Nathan Ng, Kyunghyun Cho, Marzyeh Ghassemi
Models that perform well on a training domain often fail to generalize to out-of-domain (OOD) examples.
1 code implementation • ACL 2021 • Gyuwan Kim, Kyunghyun Cho
We then conduct a multi-objective evolutionary search to find a length configuration that maximizes the accuracy and minimizes the efficiency metric under any given computational budget.
1 code implementation • COLING 2020 • Jon Ander Campos, Kyunghyun Cho, Arantxa Otegi, Aitor Soroa, Gorka Azkune, Eneko Agirre
The interaction of conversational systems with users poses an exciting opportunity for improving them after deployment, but little evidence has been provided of its feasibility.
no code implementations • 11 Nov 2020 • Cinjon Resnick, Or Litany, Hugo Larochelle, Joan Bruna, Kyunghyun Cho
We propose a self-supervised framework to learn scene representations from video that are automatically delineated into objects and background.
1 code implementation • 28 Nov 2020 • Taro Makino, Stanislaw Jastrzebski, Witold Oleszkiewicz, Celin Chacko, Robin Ehrenpreis, Naziya Samreen, Chloe Chhor, Eric Kim, Jiyon Lee, Kristine Pysarenko, Beatriu Reig, Hildegard Toth, Divya Awal, Linda Du, Alice Kim, James Park, Daniel K. Sodickson, Laura Heacock, Linda Moy, Kyunghyun Cho, Krzysztof J. Geras
We compare the two with respect to their robustness to Gaussian low-pass filtering, performing a subgroup analysis on microcalcifications and soft tissue lesions.
no code implementations • 3 Dec 2020 • Elham J. Barezi, Iacer Calixto, Kyunghyun Cho, Pascale Fung
These tasks are hard because the label space is usually (i) very large, e.g., thousands or millions of labels, (ii) very sparse, i.e., very few labels apply to each input document, and (iii) highly correlated, meaning that the existence of one label changes the likelihood of predicting all other labels.
no code implementations • 28 Dec 2020 • Stanislaw Jastrzebski, Devansh Arpit, Oliver Astrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, Krzysztof Geras
The early phase of training a deep neural network has a dramatic effect on the local curvature of the loss function.
no code implementations • 23 Jan 2021 • William F. Whitney, Michael Bloesch, Jost Tobias Springenberg, Abbas Abdolmaleki, Kyunghyun Cho, Martin Riedmiller
This causes BBE to be actively detrimental to policy learning in many control tasks.
no code implementations • 1 Feb 2021 • Cinjon Resnick, Or Litany, Cosmas Heiß, Hugo Larochelle, Joan Bruna, Kyunghyun Cho
We propose a self-supervised framework to learn scene representations from video that are automatically delineated into background, characters, and their animations.
1 code implementation • 15 Feb 2021 • Daniel Jiwoong Im, Cristina Savin, Kyunghyun Cho
Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as life-long learning.
1 code implementation • ICLR Workshop Neural_Compression 2021 • Ethan Perez, Douwe Kiela, Kyunghyun Cho
We introduce a method to determine if a certain capability helps to achieve an accurate model of given data.
1 code implementation • 24 Mar 2021 • Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho
Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning.