In this work, we propose an end-to-end deep learning model based on multi-layer perceptron (MLP) and long short-term memory (LSTM) layers to predict the remaining useful life (RUL).
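As a rough illustration only (not the authors' implementation; layer sizes, weights, and the `predict_rul` helper are made up for the sketch), an LSTM encoder followed by an MLP regression head can be expressed in a few lines of numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM cell: gates computed from input x_t and previous hidden state h."""
    def __init__(self, n_in, n_hidden):
        self.n_hidden = n_hidden
        # One stacked weight matrix for the input, forget, output and candidate gates.
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        n = self.n_hidden
        i, f, o = (sigmoid(z[k * n:(k + 1) * n]) for k in range(3))
        g = np.tanh(z[3 * n:])                  # candidate cell update
        c = f * c + i * g                       # new cell state
        h = o * np.tanh(c)                      # new hidden state
        return h, c

def predict_rul(sensor_seq, cell, W1, b1, w2, b2):
    """Run the LSTM over a sensor sequence, then an MLP head on the last hidden state."""
    h = np.zeros(cell.n_hidden)
    c = np.zeros(cell.n_hidden)
    for x in sensor_seq:
        h, c = cell.step(x, h, c)
    hidden = np.maximum(0.0, W1 @ h + b1)       # ReLU layer of the MLP head
    return float(w2 @ hidden + b2)              # scalar RUL estimate
```

In practice such a model would be trained end-to-end with a framework like PyTorch or Keras; the sketch only shows the forward pass over one sensor sequence.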
The performance of the proposed approach is consistent across these languages and comparable to state-of-the-art results in English.
In this paper, we exploit cross-lingual models to enable dialogue act recognition for specific tasks with a small number of annotations.
Work on summarization has explored both reinforcement learning (RL) optimization using ROUGE as a reward and syntax-aware models, such as models whose input is enriched with part-of-speech (POS) tags and dependency information.
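To make the reward concrete, here is a minimal ROUGE-1 recall computation of the kind used as an RL reward signal (a simplified sketch; real evaluations typically use stemming and the full ROUGE toolkit):

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams covered by the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)

# Example: 3 of the 6 reference unigram occurrences are covered.
score = rouge1_recall("the cat sat", "the cat sat on the mat")  # -> 0.5
```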
In this work, we study the importance of depth in convolutional models for text classification, for both character-level and word-level inputs.
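A toy version of such a stacked character-level convolutional model can be sketched as follows (an illustrative forward pass with random, untrained kernels; the `depth`, `width`, and `channels` values are arbitrary choices, not the paper's configuration):

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution: x is (seq_len, channels), kernels is (n_out, width, channels)."""
    n_out, width, _ = kernels.shape
    out_len = x.shape[0] - width + 1
    out = np.empty((out_len, n_out))
    for t in range(out_len):
        window = x[t:t + width]                          # (width, channels)
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)                          # ReLU

def char_cnn_features(text, alphabet, depth=3, width=3, channels=8, seed=0):
    """One-hot encode characters, stack `depth` conv+ReLU blocks, max-pool over time."""
    rng = np.random.default_rng(seed)
    idx = {ch: i for i, ch in enumerate(alphabet)}
    x = np.zeros((len(text), len(alphabet)))
    for t, ch in enumerate(text):
        if ch in idx:
            x[t, idx[ch]] = 1.0                          # one-hot character encoding
    for _ in range(depth):                               # depth = number of conv blocks
        kernels = rng.normal(0, 0.1, (channels, width, x.shape[1]))
        x = conv1d(x, kernels)
    return x.max(axis=0)                                 # global max pooling -> fixed-size vector
```

Increasing `depth` widens the receptive field over the character sequence; a classifier head (e.g. a softmax layer) would sit on top of the pooled feature vector.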
This work proposes a new confidence measure for evaluating the outputs of text-to-speech alignment systems, a key component of many applications, such as semi-automatic corpus anonymization, lip syncing, film dubbing, corpus preparation for speech synthesis, and the training of acoustic models for speech recognition.