Intent classification (IC) plays an important role in task-oriented dialogue systems as it identifies user intents from given utterances.
Conventional text style transfer approaches for natural language focus on sentence-level style transfer without considering contextual information, and the style is described with attributes (e.g., formality).
When upgrading neural models to a newer version, new errors that were not encountered in the legacy version can be introduced, known as regression errors.
This method interpolates between the weights of the old and new models; extensive experiments show that it reduces negative flips without sacrificing the new model's improved accuracy.
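Such weight interpolation can be sketched as a per-parameter convex combination of the two checkpoints. The function below is a minimal illustration over plain dictionaries of tensors or floats; the name `interpolate_weights` and the coefficient `alpha` are assumptions, not the paper's notation.

```python
def interpolate_weights(old_state, new_state, alpha=0.5):
    """Return (1 - alpha) * old + alpha * new for every shared parameter.

    alpha = 0.0 recovers the old model, alpha = 1.0 the new one;
    intermediate values trade off regression against accuracy gains.
    """
    return {name: (1 - alpha) * old_state[name] + alpha * new_state[name]
            for name in new_state}
```

With framework models, the same idea applies to their state dictionaries (e.g., PyTorch `state_dict()`), assuming both versions share the same architecture and parameter names.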
The conversational model interacts with the environment by generating and executing programs that trigger a set of pre-defined APIs.
Label semantic aware systems have leveraged this information for improved text classification performance during fine-tuning and prediction.
First, we measure and analyze model update regression in different model update settings.
Pre-trained language models have been recently shown to benefit task-oriented dialogue (TOD) systems.
At generation time, the model constructs the semantic parse tree by recursively inserting the predicted non-terminal labels at the predicted positions until termination.
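The insertion loop described above can be sketched as follows. Here `predict` stands in for the trained model and is purely an assumption: it returns a (label, position) pair to insert, or `None` to signal termination.

```python
def build_parse(predict, tokens):
    """Insertion-based parse construction sketch.

    Repeatedly query the model for a non-terminal label and an insertion
    position, splice the label into the partial tree (a flat token list
    here, for simplicity), and stop when the model predicts termination.
    """
    tree = list(tokens)
    while True:
        step = predict(tree)
        if step is None:        # model signals termination
            return tree
        label, pos = step
        tree.insert(pos, label)
```

A real decoder would score all candidate (label, position) pairs with the network at each step; the loop structure is the same.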
In this paper, we offer a preliminary investigation into the task of in-image machine translation: transforming an image containing text in one language into an image containing the same text in another language.
Neural machine translation (NMT) has arguably achieved human-level parity when trained and evaluated at the sentence level.
We investigate this problem by proposing a generalized model of sequence generation that unifies decoding in directed and undirected models.
Conventional conformation generation methods minimize hand-designed molecular force field energy functions that are often not well correlated with the true energy function of a molecule observed in nature.
We propose a conditional non-autoregressive neural sequence model based on iterative refinement.
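The core of iterative-refinement decoding is a loop that starts from an initial draft and repeatedly reapplies a learned refinement step to all positions in parallel. The sketch below is generic, not the paper's exact procedure: `refine` stands in for the trained denoising model, and stopping at a fixed point is one common heuristic (a fixed iteration budget is another).

```python
def iterative_refine(refine, initial, max_iters=10):
    """Non-autoregressive decoding by iterative refinement.

    Apply the refinement function repeatedly, stopping early if the
    output stops changing (a fixed point) or the budget is exhausted.
    """
    y = initial
    for _ in range(max_iters):
        y_next = refine(y)
        if y_next == y:   # converged: further refinement is a no-op
            break
        y = y_next
    return y
```

Because every position is updated simultaneously, each iteration costs one parallel forward pass, in contrast to the one-token-per-step cost of autoregressive decoding.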
As this policy does not require any optimization, it allows us to investigate the underlying difficulty of a task without being distracted by optimization difficulty of a learning algorithm.
In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature.
Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions.
We propose a new way of incorporating temporal information present in videos into spatial Convolutional Neural Networks (ConvNets) trained on images, one that avoids training spatio-temporal ConvNets from scratch.
We further evaluate the representations by fine-tuning them for a supervised learning problem: human action recognition on the UCF-101 and HMDB-51 datasets.