Search Results for author: Marc Dymetman

Found 30 papers, 3 papers with code

Sampling from Discrete Energy-Based Models with Quality/Efficiency Trade-offs

no code implementations 10 Dec 2021 Bryan Eikema, Germán Kruszewski, Hady Elsahar, Marc Dymetman

We show that we can sample from such EBMs with arbitrary precision at the cost of sampling efficiency.

Paraphrase Generation
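
As a hedged illustration of the quality/efficiency trade-off described above, the sketch below draws approximate samples from an unnormalized EBM through a tractable proposal distribution, rejection-style; the names (`propose`, `log_p`, `log_q`) and the `beta` knob are illustrative assumptions, not the paper's code.

```python
import math
import random

def ebm_rejection_sample(propose, log_p, log_q, beta, n_samples):
    """Draw approximate samples from an unnormalized EBM score exp(log_p(x))
    via a proposal q: accept x ~ q with probability min(1, P(x) / (beta * q(x))).
    Raising beta increases fidelity to the EBM but lowers the acceptance
    rate -- the quality/efficiency trade-off."""
    out = []
    while len(out) < n_samples:
        x = propose()                                        # sample from q
        log_acc = min(0.0, log_p(x) - math.log(beta) - log_q(x))
        if random.random() < math.exp(log_acc):              # accept/reject
            out.append(x)
    return out
```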

On Reward Maximization and Distribution Matching for Fine-Tuning Language Models

no code implementations 29 Sep 2021 Tomasz Korbak, Hady Elsahar, Germán Kruszewski, Marc Dymetman

The availability of large pre-trained models is changing the landscape of Machine Learning research and practice, moving from a "training from scratch" to a "fine-tuning" paradigm.

Language Modelling Text Generation

Energy-Based Models for Code Generation under Compilability Constraints

no code implementations 9 Jun 2021 Tomasz Korbak, Hady Elsahar, Marc Dymetman, Germán Kruszewski

Neural language models can be successfully trained on source code, leading to applications such as code completion.

Code Completion Code Generation
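
As a hedged reading of the compilability constraint named in the title, one way to express it on top of a pretrained code LM a(x) is P(x) ∝ a(x)·b(x), where b(x) = 1 iff x compiles; `log_a` and `compiles` below are assumed callables, not the authors' implementation.

```python
def constrained_log_score(x, log_a, compiles):
    """Unnormalized log-score of an EBM combining a pretrained code LM a(x)
    with a binary compilability filter b(x): log P(x) = log a(x) + log b(x).
    Sequences that fail to compile receive probability zero."""
    return log_a(x) if compiles(x) else float("-inf")
```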

A Distributional Approach to Controlled Text Generation

1 code implementation ICLR 2021 Muhammad Khalifa, Hady Elsahar, Marc Dymetman

From that optimal representation we then train a target controlled Autoregressive LM through an adaptive distributional variant of Policy Gradient.

Text Generation
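
A rough sketch of one distributional policy-gradient step, under stated assumptions: `log_target` scores the desired distribution p, `log_proposal` the sampling distribution q, and `policy.log_prob` is an assumed interface returning a differentiable log-probability; this is illustrative, not the authors' implementation.

```python
import torch

def dpg_step(policy, optimizer, samples, log_target, log_proposal):
    """One distributional policy-gradient update: estimate
    grad = E_{x ~ q}[ (p(x)/q(x)) * grad log pi_theta(x) ]
    with importance weights, pushing pi_theta toward the target
    distribution p rather than maximizing a scalar reward."""
    loss = torch.zeros(())
    for x in samples:
        w = torch.exp(log_target(x) - log_proposal(x)).detach()  # p/q weight
        loss = loss - w * policy.log_prob(x)    # descend on -E[w * log pi]
    optimizer.zero_grad()
    (loss / len(samples)).backward()
    optimizer.step()
```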

Distributional Reinforcement Learning for Energy-Based Sequential Models

no code implementations 18 Dec 2019 Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman

Global Autoregressive Models (GAMs) are a recent proposal [Parshakova et al., CoNLL 2019] for exploiting global properties of sequences for data-efficient learning of seq2seq models.

Distributional Reinforcement Learning Reinforcement Learning (RL)
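
For reference, a GAM combines an autoregressive base model r with a log-linear component over global sequence features φ, giving the unnormalized score log P(x) = log r(x) + λ·φ(x); the sketch below assumes `log_r` and `features` callables and is purely illustrative.

```python
def gam_log_score(x, log_r, features, lam):
    """Unnormalized GAM log-score: an autoregressive component log r(x)
    plus a log-linear component lam . phi(x) over global sequence
    features (e.g., length or repetition statistics)."""
    return log_r(x) + sum(l * f for l, f in zip(lam, features(x)))
```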

Character-based NMT with Transformer

no code implementations 12 Nov 2019 Rohit Gupta, Laurent Besacier, Marc Dymetman, Matthias Gallé

Character-based translation has several appealing advantages, but its performance is in general worse than a carefully tuned BPE baseline.

Translation

Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness

no code implementations WS 2019 Alexandre Bérard, Ioan Calapodescu, Marc Dymetman, Claude Roux, Jean-Luc Meunier, Vassilina Nikoulina

We share a French-English parallel corpus of Foursquare restaurant reviews (https://europe.naverlabs.com/research/natural-language-processing/machine-translation-of-restaurant-reviews), and define a new task to encourage research on Neural Machine Translation robustness and domain adaptation, in a real-world scenario where better-quality MT would be greatly beneficial.

Domain Adaptation Machine Translation +2

Global Autoregressive Models for Data-Efficient Sequence Learning

no code implementations CoNLL 2019 Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman

In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the normalized distribution associated with the GAM, and can be used for fast inference and evaluation.

Language Modelling Small Data Image Classification
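
A hedged sketch of that second (distillation) step: fit a plain autoregressive student by cross-entropy on sequences approximately drawn from the GAM (e.g., via a rejection scheme like the one sketched earlier in this list); `student.log_prob` and `opt` are assumed interfaces.

```python
def distill_step(student, opt, gam_samples):
    """Distillation update: maximize the student's likelihood of sequences
    sampled from the GAM, so the student approximates the GAM's normalized
    distribution while keeping fast autoregressive inference."""
    loss = -sum(student.log_prob(x) for x in gam_samples) / len(gam_samples)
    opt.zero_grad()
    loss.backward()
    opt.step()
```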

Moment Matching Training for Neural Machine Translation: A Preliminary Study

no code implementations 24 Dec 2018 Cong Duy Vu Hoang, Ioan Calapodescu, Marc Dymetman

In previous works, neural sequence models have been shown to improve significantly if external prior knowledge can be provided, for instance by allowing the model to access the embeddings of explicit features during both training and inference.

Machine Translation Translation
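
The criterion named in the title can be sketched as penalizing the gap between expected feature values under the model and under the data; the numpy version below, with illustrative names, just computes L = ||E_model[φ] − E_data[φ]||² and is not the paper's code.

```python
import numpy as np

def moment_matching_loss(model_feats, data_feats):
    """Moment-matching penalty: squared distance between mean feature
    vectors, L = || E_model[phi(x)] - E_data[phi(x)] ||^2.
    Each argument is an (n_samples, n_features) array of phi(x) rows."""
    gap = model_feats.mean(axis=0) - data_feats.mean(axis=0)
    return float(gap @ gap)
```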

Symbolic Priors for RNN-based Semantic Parsing

1 code implementation 20 Sep 2018 Chunyang Xiao, Marc Dymetman, Claire Gardent

Seq2seq models based on Recurrent Neural Networks (RNNs) have recently received a lot of attention in the domain of Semantic Parsing for Question Answering.

Question Answering Semantic Parsing

A surprisingly effective out-of-the-box char2char model on the E2E NLG Challenge dataset

1 code implementation WS 2017 Shubham Agarwal, Marc Dymetman

We train a char2char model on the E2E NLG Challenge data, by exploiting "out-of-the-box" the recently released tfseq2seq framework, using some of the standard options offered by this tool.

Data-to-Text Generation

Natural Language Generation through Character-based RNNs with Finite-state Prior Knowledge

no code implementations COLING 2016 Raghav Goyal, Marc Dymetman, Eric Gaussier

Recently Wen et al. (2015) have proposed a Recurrent Neural Network (RNN) approach to the generation of utterances from dialog acts, and shown that although their model requires less effort to develop than a rule-based system, it is able to improve certain aspects of the utterances, in particular their naturalness.

Language Modelling Machine Translation +2

Log-Linear RNNs: Towards Recurrent Neural Networks with Flexible Prior Knowledge

no code implementations 8 Jul 2016 Marc Dymetman, Chunyang Xiao

We introduce LL-RNNs (Log-Linear RNNs), an extension of Recurrent Neural Networks that replaces the softmax output layer by a log-linear output layer, of which the softmax is a special case.

Language Modelling Representation Learning
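
A minimal numpy illustration of such an output layer, with illustrative names: p(y | h) ∝ exp(θ(h)·φ(y)), and with one-hot indicator features φ (the identity matrix) it reduces exactly to a standard softmax.

```python
import numpy as np

def log_linear_output(theta_h, phi):
    """Log-linear output layer: p(y | h) proportional to exp(theta(h) . phi(y)).
    `theta_h`: weight vector derived from the RNN hidden state (n_features,);
    `phi`: feature matrix, one row per output symbol (n_symbols, n_features)."""
    scores = phi @ theta_h
    scores = scores - scores.max()          # for numerical stability
    p = np.exp(scores)
    return p / p.sum()

# With identity (one-hot) features, the layer is an ordinary softmax:
logits = np.array([1.0, 2.0, 0.5])
softmax = np.exp(logits) / np.exp(logits).sum()
assert np.allclose(log_linear_output(logits, np.eye(3)), softmax)
```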

LSTM-based Mixture-of-Experts for Knowledge-Aware Dialogues

no code implementations WS 2016 Phong Le, Marc Dymetman, Jean-Michel Renders

We introduce an LSTM-based method for dynamically integrating several word-prediction experts to obtain a conditional language model which can be good simultaneously at several subtasks.

Language Modelling Question Answering
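
A small numpy sketch of the combination rule, with illustrative names: p(w | ctx) = Σ_k α_k(ctx) p_k(w | ctx), where in the paper's setting the gating weights α_k would be computed from an LSTM state.

```python
import numpy as np

def mixture_word_distribution(expert_probs, gate_logits):
    """Combine word-prediction experts: each row of `expert_probs`
    (n_experts, vocab_size) is one expert's distribution over the next
    word; `gate_logits` (n_experts,) are context-dependent gating scores,
    e.g., derived from the LSTM hidden state."""
    g = np.exp(gate_logits - gate_logits.max())
    alpha = g / g.sum()          # softmax gating weights, sum to 1
    return alpha @ expert_probs  # convex combination: a valid distribution
```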

Assisting Composition of Email Responses: a Topic Prediction Approach

no code implementations 7 Oct 2015 Spandana Gella, Marc Dymetman, Jean Michel Renders, Sriram Venkatapathy

The experimental results on a large email collection from a contact center in the telecom domain show that the proposed approach is effective in predicting the best topic of the agent's next sentence.

Adaptation par enrichissement terminologique en traduction automatique statistique fondée sur la génération et le filtrage de bi-segments virtuels (Adaptation through terminological enrichment in statistical machine translation, based on the generation and filtering of virtual bi-segments)

no code implementations JEPTALNRECITAL 2015 Christophe Servan, Marc Dymetman

We present preliminary work on an approach for adding bilingual terms to a phrase-based Statistical Machine Translation (SMT) system.
