Search Results for author: Xingdi Yuan

Found 37 papers, 21 papers with code

Augmenting Autotelic Agents with Large Language Models

no code implementations21 May 2023 Cédric Colas, Laetitia Teodorescu, Pierre-Yves Oudeyer, Xingdi Yuan, Marc-Alexandre Côté

Without relying on any hand-coded goal representations, reward functions or curriculum, we show that LMA3 agents learn to master a large diversity of skills in a task-agnostic text-based environment.

Common Sense Reasoning · Language Modelling

It Takes Two to Tango: Navigating Conceptualizations of NLP Tasks and Measurements of Performance

no code implementations15 May 2023 Arjun Subramonian, Xingdi Yuan, Hal Daumé III, Su Lin Blodgett

Progress in NLP is increasingly measured through benchmarks; hence, contextualizing progress requires understanding when and why practitioners may disagree about the validity of benchmarks.

coreference-resolution · Question Answering

Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding

no code implementations17 Apr 2023 Ziang Xiao, Xingdi Yuan, Q. Vera Liao, Rania Abdelghani, Pierre-Yves Oudeyer

In this study, we explored the use of large language models (LLMs) in supporting deductive coding, a major category of qualitative analysis where researchers use pre-determined codebooks to label the data into a fixed set of codes.

A Song of Ice and Fire: Analyzing Textual Autotelic Agents in ScienceWorld

no code implementations10 Feb 2023 Laetitia Teodorescu, Xingdi Yuan, Marc-Alexandre Côté, Pierre-Yves Oudeyer

We show the importance of selectivity from the social peer's feedback; that experience replay needs to over-sample examples of rare goals; and that following self-generated goal sequences where the agent's competence is intermediate leads to significant improvements in final performance.
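The finding above that experience replay needs to over-sample examples of rare goals can be sketched as a frequency-weighted replay buffer. This is a minimal illustration, assuming sampling weights inversely proportional to each goal's frequency; it is not the paper's exact scheme.

```python
# Sketch: over-sample transitions belonging to rare goals when drawing a
# replay batch. Weights here are 1 / (goal frequency), an illustrative choice.
import random
from collections import Counter

def sample_batch(buffer, batch_size, rng=random):
    """buffer: list of (goal, transition) pairs. Rare goals get higher weight."""
    goal_counts = Counter(goal for goal, _ in buffer)
    weights = [1.0 / goal_counts[goal] for goal, _ in buffer]
    return rng.choices(buffer, weights=weights, k=batch_size)

# 98% of the buffer comes from a common goal, 2% from a rare one.
buffer = [("common goal", t) for t in range(98)] + [("rare goal", t) for t in range(2)]
batch = sample_batch(buffer, batch_size=50, rng=random.Random(0))
rare_frac = sum(1 for g, _ in batch if g == "rare goal") / len(batch)
# Under these weights, each goal contributes equal total probability, so the
# rare goal appears far more often than its 2% share of the buffer.
```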

GPT-3-driven pedagogical agents for training children's curious question-asking skills

no code implementations25 Nov 2022 Rania Abdelghani, Yen-Hsiang Wang, Xingdi Yuan, Tong Wang, Pauline Lucas, Hélène Sauzéon, Pierre-Yves Oudeyer

In this context, we propose to leverage advances in the natural language processing (NLP) field and investigate the efficiency of using a large language model (LLM) for automating the production of the pedagogical content of a curious question-asking (QA) training.

Language Modelling · Large Language Model

General-to-Specific Transfer Labeling for Domain Adaptable Keyphrase Generation

1 code implementation20 Aug 2022 Rui Meng, Tong Wang, Xingdi Yuan, Yingbo Zhou, Daqing He

Finally, we fine-tune the model with limited data with true labels to fully adapt it to the target domain.

Keyphrase Generation

Asking for Knowledge: Training RL Agents to Query External Knowledge Using Language

no code implementations12 May 2022 Iou-Jen Liu, Xingdi Yuan, Marc-Alexandre Côté, Pierre-Yves Oudeyer, Alexander G. Schwing

In order to study how agents can be taught to query external knowledge via language, we first introduce two new environments: the grid-world-based Q-BabyAI and the text-based Q-TextWorld.

Interactive Machine Comprehension with Dynamic Knowledge Graphs

1 code implementation EMNLP 2021 Xingdi Yuan

Interactive machine reading comprehension (iMRC) comprises machine comprehension tasks in which knowledge sources are partially observable.

Knowledge Graphs · Machine Reading Comprehension

BUTLER: Building Understanding in TextWorld via Language for Embodied Reasoning

no code implementations ICLR 2021 Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, Matthew Hausknecht

ALFWorld enables the creation of a new BUTLER agent whose abstract knowledge, learned in TextWorld, corresponds directly to concrete, visually grounded actions.

Scene Understanding

ALFWorld: Aligning Text and Embodied Environments for Interactive Learning

1 code implementation8 Oct 2020 Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, Matthew Hausknecht

ALFWorld enables the creation of a new BUTLER agent whose abstract knowledge, learned in TextWorld, corresponds directly to concrete, visually grounded actions.

Natural Language Visual Grounding · Scene Understanding

An Empirical Study on Neural Keyphrase Generation

1 code implementation NAACL 2021 Rui Meng, Xingdi Yuan, Tong Wang, Sanqiang Zhao, Adam Trischler, Daqing He

Recent years have seen a flourishing of neural keyphrase generation (KPG) works, including the release of several large-scale datasets and a host of new models to tackle them.

Keyphrase Generation

Graph Policy Network for Transferable Active Learning on Graphs

1 code implementation NeurIPS 2020 Shengding Hu, Zheng Xiong, Meng Qu, Xingdi Yuan, Marc-Alexandre Côté, Zhiyuan Liu, Jian Tang

Graph neural networks (GNNs) have been attracting increasing popularity due to their simplicity and effectiveness in a variety of fields.

Active Learning

Role-Wise Data Augmentation for Knowledge Distillation

1 code implementation ICLR 2020 Jie Fu, Xue Geng, Zhijian Duan, Bohan Zhuang, Xingdi Yuan, Adam Trischler, Jie Lin, Chris Pal, Hao Dong

To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated.

Data Augmentation · Knowledge Distillation

Interactive Fiction Games: A Colossal Adventure

2 code implementations11 Sep 2019 Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, Xingdi Yuan

A hallmark of human intelligence is the ability to understand and communicate with language.

Does Order Matter? An Empirical Study on Generating Multiple Keyphrases as a Sequence

1 code implementation9 Sep 2019 Rui Meng, Xingdi Yuan, Tong Wang, Peter Brusilovsky, Adam Trischler, Daqing He

Recently, concatenating multiple keyphrases as a target sequence has been proposed as a new learning paradigm for keyphrase generation.

Keyphrase Generation
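The concatenation paradigm described above (often called One2Seq) can be sketched as a simple target-formatting step: multiple keyphrases are joined with a separator token into one decoder target, which makes their ordering a modelling choice. The token names (`<sep>`, `<eos>`) are illustrative assumptions, not the paper's exact vocabulary.

```python
# Sketch: format a set of keyphrases as a single target sequence (One2Seq),
# and recover the keyphrase list from a generated sequence.
SEP, EOS = "<sep>", "<eos>"

def to_one2seq_target(keyphrases):
    """Concatenate keyphrases into one target sequence for a seq2seq decoder."""
    return f" {SEP} ".join(keyphrases) + f" {EOS}"

def from_one2seq_target(target):
    """Split a generated sequence back into individual keyphrases."""
    body = target.replace(EOS, "").strip()
    return [kp.strip() for kp in body.split(SEP) if kp.strip()]

target = to_one2seq_target(["keyphrase generation", "sequence to sequence"])
recovered = from_one2seq_target(target)
# The paper's question is precisely whether the order chosen in
# to_one2seq_target affects what the model learns.
```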

Interactive Machine Comprehension with Information Seeking Agents

1 code implementation ACL 2020 Xingdi Yuan, Jie Fu, Marc-Alexandre Côté, Yi Tay, Christopher Pal, Adam Trischler

Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA).

Decision Making · Information Retrieval +3

Towards Solving Text-based Games by Producing Adaptive Action Spaces

2 code implementations3 Dec 2018 Ruo Yu Tao, Marc-Alexandre Côté, Xingdi Yuan, Layla El Asri

To solve a text-based game, an agent needs to formulate valid text commands for a given context and find the ones that lead to success.

reinforcement-learning · Reinforcement Learning (RL) +1

Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension

no code implementations ICLR 2019 Rajarshi Das, Tsendsuren Munkhdalai, Xingdi Yuan, Adam Trischler, Andrew McCallum

We harness and extend a recently proposed machine reading comprehension (MRC) model to query for entity states, since these states are generally communicated in spans of text and MRC models perform well in extracting entity-centric spans.

Knowledge Graphs · Machine Reading Comprehension +2

Counting to Explore and Generalize in Text-based Games

2 code implementations29 Jun 2018 Xingdi Yuan, Marc-Alexandre Côté, Alessandro Sordoni, Romain Laroche, Remi Tachet des Combes, Matthew Hausknecht, Adam Trischler

We propose a recurrent RL agent with an episodic exploration mechanism that helps discover good policies in text-based game environments.

text-based games
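An episodic exploration mechanism of the kind mentioned above can be sketched as a count-based intrinsic bonus that resets every episode. The reward shaping (bonus proportional to 1/√count) and the hashing of observation text are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: an episodic count-based exploration bonus. States visited less often
# within the current episode yield a larger intrinsic reward.
import math
from collections import Counter

class EpisodicCountBonus:
    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = Counter()  # per-episode visit counts

    def reset(self):
        """Call at the start of every episode: the counts are episodic."""
        self.counts.clear()

    def bonus(self, observation_text):
        """Intrinsic reward that decays with revisits within the episode."""
        key = hash(observation_text)
        self.counts[key] += 1
        return self.beta / math.sqrt(self.counts[key])

explorer = EpisodicCountBonus(beta=1.0)
explorer.reset()
first = explorer.bonus("You are in a kitchen.")   # full bonus on first visit
second = explorer.bonus("You are in a kitchen.")  # smaller bonus on revisit
```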

Rapid Adaptation with Conditionally Shifted Neurons

no code implementations ICML 2018 Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, Adam Trischler

We describe a mechanism by which artificial neural networks can learn rapid adaptation - the ability to adapt on the fly, with little data, to new tasks - that we call conditionally shifted neurons.

Few-Shot Image Classification

A Joint Model for Question Answering and Question Generation

no code implementations5 Jun 2017 Tong Wang, Xingdi Yuan, Adam Trischler

We propose a generative machine comprehension model that learns jointly to ask and answer questions based on documents.

Question Answering · Question Generation +2

A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data

1 code implementation ACL 2016 Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, Kaheer Suleman

The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set.

Question Answering · Reading Comprehension
