Search Results for author: Corentin Dancette

Found 11 papers, 10 papers with code

Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning

1 code implementation • 1 Oct 2023 • Mustafa Shukor, Alexandre Rame, Corentin Dancette, Matthieu Cord

Based on our ICL study, (3) we push ICL further and propose new multimodal ICL variants such as Multitask-ICL, Chain-of-Hindsight-ICL, and Self-Correcting-ICL.

In-Context Learning • Instruction Following +1

UnIVAL: Unified Model for Image, Video, Audio and Language Tasks

1 code implementation • 30 Jul 2023 • Mustafa Shukor, Corentin Dancette, Alexandre Rame, Matthieu Cord

Our model is efficiently pretrained on many tasks, based on task balancing and multimodal curriculum learning.

Out-of-Distribution Generalization

eP-ALM: Efficient Perceptual Augmentation of Language Models

1 code implementation • ICCV 2023 • Mustafa Shukor, Corentin Dancette, Matthieu Cord

In this work, we propose instead to direct effort toward efficient adaptation of existing models, and we propose to augment Language Models with perception.

In-Context Learning • Visual Question Answering (VQA)

Dynamic Query Selection for Fast Visual Perceiver

no code implementations • 22 May 2022 • Corentin Dancette, Matthieu Cord

In recent work, Transformers have matched deep convolutional networks as vision architectures.

Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization

2 code implementations • 7 Sep 2021 • Alexandre Rame, Corentin Dancette, Matthieu Cord

In this paper, we introduce a new regularization, named Fishr, that enforces domain invariance in the space of the gradients of the loss: specifically, the domain-level variances of gradients are matched across training domains.

Domain Generalization • Out-of-Distribution Generalization
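The variance-matching idea in the Fishr abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `fishr_penalty` and its inputs (per-sample loss gradients grouped by training domain) are assumptions for the sake of the example.

```python
import numpy as np

def fishr_penalty(grads_per_domain):
    """Toy sketch of the Fishr idea: penalize mismatch between
    domain-level variances of per-sample loss gradients.

    grads_per_domain: list of arrays, one per training domain,
    each of shape (n_samples, n_params).
    """
    # Variance of each gradient coordinate, computed per domain
    variances = [g.var(axis=0) for g in grads_per_domain]
    # Average variance across domains, used as the matching target
    mean_var = np.mean(variances, axis=0)
    # Sum of squared deviations of each domain's variance from the mean
    return float(sum(((v - mean_var) ** 2).sum() for v in variances))
```

When all domains have identical gradient statistics the penalty is zero; it grows as their gradient variances diverge, which is the invariance the abstract describes.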

Overcoming Statistical Shortcuts for Open-ended Visual Counting

1 code implementation • 17 Jun 2020 • Corentin Dancette, Remi Cadene, Xinlei Chen, Matthieu Cord

First, we propose the Modifying Count Distribution (MCD) protocol, which penalizes models that over-rely on statistical shortcuts.

Sampling strategies in Siamese Networks for unsupervised speech representation learning

2 code implementations • 30 Apr 2018 • Rachid Riad, Corentin Dancette, Julien Karadayi, Neil Zeghidour, Thomas Schatz, Emmanuel Dupoux

We apply these results to pairs of words discovered using an unsupervised algorithm and show an improvement over the state of the art in unsupervised representation learning using Siamese networks.

Representation Learning
