Search Results for author: Danielle Rothermel

Found 5 papers, 4 papers with code

Dungeons and Data: A Large-Scale NetHack Dataset

1 code implementation • 1 Nov 2022 • Eric Hambro, Roberta Raileanu, Danielle Rothermel, Vegard Mella, Tim Rocktäschel, Heinrich Küttler

Recent breakthroughs in the development of agents to solve challenging sequential decision making problems such as Go, StarCraft, or DOTA, have relied on both simulated environments and large-scale datasets.

Decision Making • NetHack • +2

moolib: A Platform for Distributed RL

1 code implementation • 26 Jan 2022 • Vegard Mella, Eric Hambro, Danielle Rothermel, Heinrich Küttler

Together with the moolib library, we present example user code which shows how moolib’s components can be used to implement common reinforcement learning agents as a simple but scalable distributed network of homogeneous peers.

Reinforcement Learning (RL)

Don't Sweep your Learning Rate under the Rug: A Closer Look at Cross-modal Transfer of Pretrained Transformers

no code implementations • 26 Jul 2021 • Danielle Rothermel, Margaret Li, Tim Rocktäschel, Jakob Foerster

After carefully redesigning the empirical setup, we find that when tuning learning rates properly, pretrained transformers do outperform or match training from scratch in all of our tasks, but only as long as the entire model is finetuned.

Why Build an Assistant in Minecraft?

1 code implementation • 22 Jul 2019 • Arthur Szlam, Jonathan Gray, Kavya Srinet, Yacine Jernite, Armand Joulin, Gabriel Synnaeve, Douwe Kiela, Haonan Yu, Zhuoyuan Chen, Siddharth Goyal, Demi Guo, Danielle Rothermel, C. Lawrence Zitnick, Jason Weston

In this document we describe a rationale for a research program aimed at building an open "assistant" in the game Minecraft, in order to make progress on the problems of natural language understanding and learning from dialogue.

Natural Language Understanding
