In the context of multi-step reasoning, the probabilities assigned by language models (LMs) are often miscalibrated: solutions with high probabilities are not always correct.
Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments.
This work explores the problem of generating task graphs of real-world activities.
Real-world tasks consist of multiple inter-dependent subtasks (e.g., a dirty pan needs to be washed before it can be used for cooking).
Recently, language models (LMs) instruction-tuned on multiple tasks, an approach known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks.
Pretrained language models (LMs) memorize a vast amount of knowledge during initial pretraining, including personal information whose retention may violate individual privacy.
To alleviate the need for a large number of labeled question-document pairs for retriever training, we propose PromptRank, which relies on prompting large language models to rerank multi-hop paths.
In the simplest setting, we append a token to an input sequence which represents the particular task to be undertaken, and show that the embedding of this token can be optimized on the fly given few labeled examples.
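The setting above, appending a trainable task-token embedding to the input and optimizing only that embedding against a few labeled examples, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy frozen encoder, the mean-pooling readout, and all dimensions and names are assumptions introduced for the sketch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n_classes = 16, 3

# Toy frozen "model": a fixed linear classifier over mean-pooled embeddings
# (a stand-in for a pretrained LM, which stays frozen throughout).
frozen_model = nn.Linear(d, n_classes)
for p in frozen_model.parameters():
    p.requires_grad = False

# The task token: a single trainable embedding appended to every input.
task_token = nn.Parameter(torch.zeros(1, d))

def forward(x):
    # x: (batch, seq_len, d). Append the task token, mean-pool, classify.
    tok = task_token.expand(x.size(0), 1, d)
    seq = torch.cat([x, tok], dim=1)
    return frozen_model(seq.mean(dim=1))

# Few labeled examples; only the task-token embedding is optimized.
x = torch.randn(8, 5, d)
y = torch.randint(0, n_classes, (8,))
opt = torch.optim.Adam([task_token], lr=0.1)

losses = []
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(forward(x), y)
    loss.backward()
    opt.step()
    losses.append(float(loss))
```

Because the model's weights are frozen, the only state updated on the fly is the single task-token embedding, which is what makes the approach cheap to adapt per task.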
First, we show that strong reading comprehension models pre-trained on large amounts of unlabeled data can generalize to unseen entities.
In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.
Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal.