Grounded language learning

23 papers with code • 0 benchmarks • 1 dataset

Grounded language learning is the task of acquiring the meaning of language in situated environments, where linguistic expressions are tied to perception and action.

Most implemented papers

BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning

mila-iqia/babyai ICLR 2019

Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons, but given the poor data efficiency of the current learning methods, this goal may require substantial research efforts.

Align before Fuse: Vision and Language Representation Learning with Momentum Distillation

salesforce/lavis NeurIPS 2021

Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens.
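A minimal sketch of the pattern this snippet describes: region-based image features and word-token embeddings concatenated into a single sequence and jointly modeled by one transformer encoder. This is not the ALBEF implementation; the module names, dimensions, and layer counts below are illustrative assumptions.

import torch
import torch.nn as nn

class JointMultimodalEncoder(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, region_dim=2048,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Project region-based image features (e.g. detector outputs)
        # into the same space as the word embeddings.
        self.region_proj = nn.Linear(region_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids, region_feats):
        # token_ids: (batch, n_tokens); region_feats: (batch, n_regions, region_dim)
        tokens = self.word_emb(token_ids)
        regions = self.region_proj(region_feats)
        # Jointly model visual tokens and word tokens in one sequence.
        fused = torch.cat([regions, tokens], dim=1)
        return self.encoder(fused)

model = JointMultimodalEncoder()
out = model(torch.randint(0, 30522, (2, 12)), torch.randn(2, 36, 2048))
print(out.shape)  # torch.Size([2, 48, 256])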

Pragmatically Informative Text Generation

sIncerass/prag_generation NAACL 2019

We improve the informativeness of models for conditional text generation using techniques from computational pragmatics.

Visually Grounded Continual Learning of Compositional Phrases

INK-USC/VG-CCL EMNLP 2020

To study this human-like language acquisition ability, we present VisCOLL, a visually grounded language learning task, which simulates the continual acquisition of compositional phrases from streaming visual scenes.

Grounded Language Learning Fast and Slow

deepmind/lab ICLR 2021

Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning.

Grounded Language Learning in a Simulated 3D World

SophiaAr/OpenAI-final-project 20 Jun 2017

Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions.

Interactive Language Acquisition with One-shot Visual Concept Learning through a Conversational Game

PaddlePaddle/XWorld ACL 2018

Building intelligent agents that can communicate with and learn from humans in natural language is of great value.

A new dataset and model for learning to understand navigational instructions

ozanarkancan/SAILx 21 May 2018

Our goal is to develop a model that can learn to follow new instructions given prior instruction-perception-action examples.

Lessons learned in multilingual grounded language learning

kadarakos/mulisera CoNLL 2018

Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language.
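A minimal sketch of the idea in the snippet above: learning a shared visual-semantic embedding space in which each image is pulled toward its captions in several languages via a hinge ranking loss. This is not the mulisera code; the encoder outputs, margin value, and language keys are illustrative assumptions.

import torch
import torch.nn.functional as F

def multilingual_ranking_loss(image_emb, caption_embs, margin=0.2):
    """image_emb: (batch, d); caption_embs: dict mapping language -> (batch, d)."""
    loss = 0.0
    img = F.normalize(image_emb, dim=-1)
    for lang, cap in caption_embs.items():
        cap = F.normalize(cap, dim=-1)
        sim = img @ cap.t()                # (batch, batch) cosine similarities
        pos = sim.diag().unsqueeze(1)      # scores of matching image-caption pairs
        # Every mismatched caption (column) and image (row) should score
        # at least `margin` below the matching pair.
        cost_c = (margin + sim - pos).clamp(min=0)
        cost_i = (margin + sim - pos.t()).clamp(min=0)
        mask = torch.eye(sim.size(0), dtype=torch.bool)
        loss = loss + cost_c.masked_fill(mask, 0).mean() \
                    + cost_i.masked_fill(mask, 0).mean()
    return loss

# Example: English and German captions describing the same batch of images.
loss = multilingual_ranking_loss(
    torch.randn(8, 128),
    {"en": torch.randn(8, 128), "de": torch.randn(8, 128)},
)
print(loss.item())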

Learning Latent Semantic Annotations for Grounding Natural Language to Structured Data

hiaoxui/D2T-Grounding EMNLP 2018

Previous work on grounded language learning did not fully capture the semantics underlying the correspondences between structured world state representations and texts, especially those between numerical values and lexical terms.