Minecraft
79 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Minecraft.
Libraries
Use these libraries to find Minecraft models and implementations.
Most implemented papers
Mastering Diverse Domains through World Models
Developing a general algorithm that learns to solve tasks across a wide range of applications has been a fundamental challenge in artificial intelligence.
Teacher-Student Curriculum Learning
We propose Teacher-Student Curriculum Learning (TSCL), a framework for automatic curriculum learning, where the Student tries to learn a complex task and the Teacher automatically chooses subtasks from a given set for the Student to train on.
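The core loop is a Teacher that observes the Student's recent performance per subtask and keeps choosing the subtask with the most learning progress. The sketch below is a minimal, hypothetical illustration of that idea (progress measured as the slope of recent scores, with epsilon-greedy exploration); the class and function names are placeholders, not the paper's code.

```python
import random
from collections import defaultdict, deque

# Hypothetical TSCL-style loop: the Teacher picks the subtask whose recent
# scores show the largest learning progress (absolute slope over a window),
# with epsilon-greedy exploration. `train_student_on` stands in for the
# Student's real training step.

class Teacher:
    def __init__(self, tasks, window=10, eps=0.1):
        self.tasks = tasks
        self.eps = eps
        self.history = defaultdict(lambda: deque(maxlen=window))

    def choose_task(self):
        # Explore randomly, or bootstrap until every task has at least one score.
        if random.random() < self.eps or not all(self.history[t] for t in self.tasks):
            return random.choice(self.tasks)

        def progress(task):
            h = self.history[task]
            return abs(h[-1] - h[0]) / max(len(h) - 1, 1)  # slope of recent scores

        return max(self.tasks, key=progress)

    def record(self, task, score):
        self.history[task].append(score)

def train_student_on(task):
    """Placeholder: run one training episode on `task` and return a score."""
    return random.random()

teacher = Teacher(tasks=["chop_tree", "craft_planks", "build_shelter"])
for _ in range(100):
    task = teacher.choose_task()
    score = train_student_on(task)
    teacher.record(task, score)
```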
CraftAssist: A Framework for Dialogue-enabled Interactive Agents
This paper describes an implementation of a bot assistant in Minecraft, and the tools and platform allowing players to interact with the bot and to record those interactions.
Sample Efficient Reinforcement Learning through Learning from Demonstrations in Minecraft
Sample inefficiency of deep reinforcement learning methods is a major obstacle for their use in real-world applications.
Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities.
Deep Recurrent Q-Learning vs Deep Q-Learning on a simple Partially Observable Markov Decision Process with Minecraft
Deep Q-Learning has been successfully applied to a wide variety of tasks in the past several years.
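The comparison in this paper comes down to one architectural change: a DQN scores actions from a single observation, while a DRQN inserts a recurrent layer so Q-values can depend on the observation history, which matters when the environment state is only partially observable. The sketch below illustrates that difference with arbitrary placeholder dimensions; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative contrast: feed-forward Q-network vs. recurrent Q-network.
# Layer sizes are placeholders, not the paper's hyperparameters.

class DQN(nn.Module):
    def __init__(self, obs_dim=64, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))

    def forward(self, obs):                    # obs: (B, obs_dim)
        return self.net(obs)                   # Q-values from one observation

class DRQN(nn.Module):
    def __init__(self, obs_dim=64, n_actions=4):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, 128, batch_first=True)
        self.head = nn.Linear(128, n_actions)

    def forward(self, obs_seq, hidden=None):   # obs_seq: (B, T, obs_dim)
        out, hidden = self.lstm(obs_seq, hidden)
        return self.head(out), hidden          # Q-values that depend on history

q_values = DQN()(torch.randn(2, 64))
q_seq, h = DRQN()(torch.randn(2, 16, 64))
```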
Clockwork Variational Autoencoders
We introduce the Clockwork VAE (CW-VAE), a video prediction model that leverages a hierarchy of latent sequences, where higher levels tick at slower intervals.
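The "clockwork" part is a tick schedule: level k of the latent hierarchy only updates every stride**k steps, so higher levels evolve slowly and can carry longer-range video context. The sketch below illustrates only that schedule with GRU cells and made-up sizes; it omits the VAE machinery and the top-down conditioning from slow to fast levels used in the actual model.

```python
import torch
import torch.nn as nn

# Minimal sketch of the clockwork tick schedule: level k updates every
# stride**k steps, so higher levels tick more slowly. GRU cells and all
# dimensions are illustrative assumptions, not the CW-VAE architecture.

class ClockworkStack(nn.Module):
    def __init__(self, input_dim=64, hidden_dim=128, levels=3, stride=4):
        super().__init__()
        self.stride = stride
        self.cells = nn.ModuleList(
            nn.GRUCell(input_dim if k == 0 else hidden_dim, hidden_dim)
            for k in range(levels)
        )

    def forward(self, frames):                       # frames: (T, B, input_dim)
        T, B, _ = frames.shape
        states = [frames.new_zeros(B, self.cells[0].hidden_size)
                  for _ in self.cells]
        outputs = []
        for t in range(T):
            inp = frames[t]
            for k, cell in enumerate(self.cells):
                if t % (self.stride ** k) == 0:      # level k ticks every stride**k steps
                    states[k] = cell(inp, states[k])
                inp = states[k]                      # each level feeds the one above it
            outputs.append(states[0])
        return torch.stack(outputs)                  # per-step bottom-level features

features = ClockworkStack()(torch.randn(16, 2, 64))
```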
MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
Autonomous agents have made great strides in specialist domains like Atari games and Go.
NovelCraft: A Dataset for Novelty Detection and Discovery in Open Worlds
In order for artificial agents to successfully perform tasks in changing environments, they must be able to both detect and adapt to novelty.
Open-World Multi-Task Control Through Goal-Aware Representation Learning and Adaptive Horizon Prediction
We study the problem of learning goal-conditioned policies in Minecraft, a popular, widely accessible yet challenging open-ended environment for developing human-level multi-task agents.
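A goal-conditioned policy simply conditions the action distribution on a goal representation as well as the current observation. The minimal sketch below shows that interface with invented dimensions and a plain MLP; it does not reproduce the paper's goal-aware representation learning or horizon prediction components.

```python
import torch
import torch.nn as nn

# Minimal goal-conditioned policy sketch: score discrete actions from the
# concatenation of observation features and a goal embedding. All sizes and
# layers are illustrative assumptions, not the paper's architecture.

class GoalConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=512, goal_dim=128, n_actions=36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs, goal):
        # Concatenate observation features with the goal embedding,
        # then return a categorical distribution over actions.
        logits = self.net(torch.cat([obs, goal], dim=-1))
        return torch.distributions.Categorical(logits=logits)

policy = GoalConditionedPolicy()
dist = policy(torch.randn(1, 512), torch.randn(1, 128))
action = dist.sample()
```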