Search Results for author: Jean-Marc Andreoli

Found 7 papers, 4 papers with code

BQ-NCO: Bisimulation Quotienting for Efficient Neural Combinatorial Optimization

2 code implementations • NeurIPS 2023 • Darko Drakulic, Sofia Michel, Florian Mai, Arnaud Sors, Jean-Marc Andreoli

In this paper, we present a novel formulation of Combinatorial Optimization Problems (COPs) as Markov Decision Processes (MDPs) that effectively leverages common symmetries of COPs to improve out-of-distribution robustness.
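As a rough illustration of the COP-as-MDP view (not the paper's model or code; every name below is made up), this tiny exact solver treats a TSP state as the residual subproblem — current city plus the set of unvisited cities — so distinct partial tours that leave the same residual instance collapse into one state with one value:

```python
# Hypothetical sketch: a COP (here TSP) as an MDP whose state is only the
# residual subproblem, so different histories reaching the same remaining
# instance share a state (the intuition behind quotienting by symmetry).
import math

def tsp_mdp_value(cities, current, remaining, memo=None):
    """Exact value (negative remaining path length) of a residual MDP state."""
    if memo is None:
        memo = {}
    key = (current, remaining)            # state = residual problem only
    if key in memo:
        return memo[key]
    if not remaining:
        memo[key] = 0.0
        return 0.0
    best = -math.inf
    for nxt in remaining:
        step = -math.dist(cities[current], cities[nxt])   # reward = -edge cost
        best = max(best, step + tsp_mdp_value(cities, nxt,
                                              remaining - {nxt}, memo))
    memo[key] = best
    return best

cities = [(0, 0), (1, 0), (1, 1), (0, 1)]
v = tsp_mdp_value(cities, 0, frozenset(range(1, 4)))
print(v)  # -3.0: cheapest open path from city 0 through the other three
```

Memoizing on `(current, remaining)` rather than on the full visit history is what makes the quotienting pay off: the table size depends on residual instances, not on the number of partial tours.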

Combinatorial Optimization • Out-of-Distribution Generalization

On the Generalization of Neural Combinatorial Optimization Heuristics

no code implementations • 1 Jun 2022 • Sahil Manchanda, Sofia Michel, Darko Drakulic, Jean-Marc Andreoli

Neural Combinatorial Optimization approaches have recently leveraged the expressiveness and flexibility of deep neural networks to learn efficient heuristics for hard Combinatorial Optimization (CO) problems.

Combinatorial Optimization • Meta-Learning

Structured Time Series Prediction without Structural Prior

1 code implementation • 7 Feb 2022 • Darko Drakulic, Jean-Marc Andreoli

Time series prediction is a widespread and well-studied problem with applications in many domains (medicine, geoscience, network analysis, finance, econometrics, etc.).

Ranked #9 on Traffic Prediction on PeMS07 (using extra training data)

Domain Adaptation • Time Series • +2

Distributional Reinforcement Learning for Energy-Based Sequential Models

1 code implementation • 18 Dec 2019 • Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman

Global Autoregressive Models (GAMs) are a recent proposal [Parshakova et al., CoNLL 2019] for exploiting global properties of sequences for data-efficient learning of seq2seq models.

Distributional Reinforcement Learning • reinforcement-learning • +1

Global Autoregressive Models for Data-Efficient Sequence Learning

1 code implementation • CoNLL 2019 • Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman

In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the normalized distribution associated with the GAM, and can be used for fast inference and evaluation.
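The distillation step can be caricatured at toy scale: take an unnormalized global score over sequences, normalize it explicitly (possible only because the toy sequence space is enumerable), and fit a tiny autoregressive student to it by cross-entropy. Everything below — the score, the one-parameter student — is an invented illustration, not the paper's method:

```python
# Toy distillation sketch: fit a normalized autoregressive student to an
# explicitly normalized "energy-based" target over 2-character sequences.
import math

seqs = ["aa", "ab", "ba", "bb"]

def gam_score(s):                      # unnormalized log-score (toy energy)
    return 2.0 * s.count("a")

Z = sum(math.exp(gam_score(s)) for s in seqs)
target = {s: math.exp(gam_score(s)) / Z for s in seqs}   # normalized target

# Student: a single logit theta; P(next char = "a") = sigmoid(theta),
# position-independent, trained by gradient descent on the cross-entropy.
theta = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + math.exp(-theta))
    grad = 0.0
    for s, w in target.items():
        for ch in s:                   # d/dtheta of -log p_student(s)
            grad += w * ((p - 1.0) if ch == "a" else p)
    theta -= 0.5 * grad

p = 1.0 / (1.0 + math.exp(-theta))
print(round(p, 3))  # 0.881: the per-character frequency of "a" under target
```

The student converges to the target's per-character marginal — the best a position-independent model can do — which is the point of distillation: the normalized student is cheap to sample and score, unlike the unnormalized global model.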

Language Modelling • Small Data Image Classification

Convolution, attention and structure embedding

no code implementations • 3 May 2019 • Jean-Marc Andreoli

Deep neural networks are composed of layers of parametrised linear operations intertwined with non-linear activations.
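That layer pattern — linear map, non-linearity, linear map — in a minimal plain-Python sketch with made-up weights (no framework, purely illustrative):

```python
# Minimal "parametrised linear operation + non-linear activation" stack.
def linear(x, W, b):
    """Apply y = Wx + b with W as a list of rows."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(x):
    """Elementwise non-linearity."""
    return [max(0.0, v) for v in x]

def mlp(x):
    h = relu(linear(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]))  # layer 1
    return linear(h, [[1.0, 1.0]], [0.0])                        # layer 2

print(mlp([2.0, 1.0]))  # [2.5]: relu([1.0, 1.5]) = [1.0, 1.5], then summed
```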

Time Series • Time Series Analysis

A conjugate prior for the Dirichlet distribution

no code implementations • 13 Nov 2018 • Jean-Marc Andreoli

This note investigates a conjugate class for the Dirichlet distribution, viewed as a member of the exponential family.
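For context, the standard textbook construction such a note builds on: writing the Dirichlet in exponential-family form immediately yields a conjugate class for its parameter (the note's own parametrization may differ from this generic form):

```latex
% Dirichlet density in exponential-family form:
\mathrm{Dir}(x \mid \alpha)
  = \exp\!\Big(\sum_{i=1}^{K} (\alpha_i - 1)\ln x_i - \ln B(\alpha)\Big),
\qquad
B(\alpha) = \frac{\prod_{i=1}^{K}\Gamma(\alpha_i)}{\Gamma\!\big(\sum_{i=1}^{K}\alpha_i\big)}.

% Generic conjugate class on the parameter \alpha:
p(\alpha \mid \tau, n) \propto
  \exp\!\Big(\sum_{i=1}^{K} \tau_i\,\alpha_i - n \ln B(\alpha)\Big),

% with posterior update per observation x:
\tau_i \leftarrow \tau_i + \ln x_i, \qquad n \leftarrow n + 1.
```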
