We present Perceive-Represent-Generate (PRG), a novel three-stage framework that maps perceptual information from different modalities (e.g., vision or sound), corresponding to a sequence of instructions, to an adequate sequence of movements to be executed by a robot.
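To make the three-stage structure concrete, the following is a minimal sketch of such a perception-to-movement pipeline. The class name, stage signatures, and placeholder stages are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class PRGPipeline:
    """Chains the three stages: perception -> representation -> movement generation."""
    perceive: Callable[[Any], Any]        # modality-specific encoder (e.g., image or audio)
    represent: Callable[[Any], Any]       # maps the percept to an intermediate plan
    generate: Callable[[Any], List[str]]  # emits a sequence of robot movement commands

    def run(self, instructions: List[Any]) -> List[str]:
        movements: List[str] = []
        for instruction in instructions:
            percept = self.perceive(instruction)
            plan = self.represent(percept)
            movements.extend(self.generate(plan))
        return movements


# Usage with trivial placeholder stages:
pipeline = PRGPipeline(
    perceive=lambda x: x.lower(),
    represent=lambda p: p.split(),
    generate=lambda plan: [f"move_{token}" for token in plan],
)
print(pipeline.run(["Pick UP the cup"]))   # ['move_pick', 'move_up', 'move_the', 'move_cup']
```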
Moreover, since we do not know in advance which query strategy will be the most adequate for a given language pair and set of Machine Translation models, we propose to dynamically combine multiple strategies using prediction with expert advice.
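A standard instantiation of prediction with expert advice is an exponentially weighted (Hedge-style) forecaster over the candidate strategies. The sketch below illustrates this generic scheme; the strategy names, learning rate, and loss definition are assumptions for illustration, not the exact combination rule used in the paper.

```python
import math
import random
from typing import Dict

# Exponentially weighted forecaster over query strategies ("experts").

def update_weights(weights: Dict[str, float], losses: Dict[str, float], eta: float = 0.5) -> Dict[str, float]:
    """Multiplicatively down-weight each strategy by its observed loss."""
    return {name: w * math.exp(-eta * losses[name]) for name, w in weights.items()}


def sample_strategy(weights: Dict[str, float]) -> str:
    """Sample a strategy with probability proportional to its current weight."""
    total = sum(weights.values())
    r, acc = random.uniform(0.0, total), 0.0
    for name, w in weights.items():
        acc += w
        if r <= acc:
            return name
    return name  # fallback for floating-point edge cases


# Usage: start from uniform weights over candidate strategies.
weights = {"uncertainty": 1.0, "diversity": 1.0, "random": 1.0}
for _ in range(10):
    picked = sample_strategy(weights)
    # ... run `picked` to select instances, then observe a loss for each strategy ...
    losses = {name: random.random() for name in weights}  # placeholder losses
    weights = update_weights(weights, losses)
```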
In this paper, we present a novel Bayesian online prediction algorithm for the problem setting of ad hoc teamwork under partial observability (ATPO), which enables on-the-fly collaboration with unknown teammates performing an unknown task without needing a pre-coordination protocol.
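At its core, such a Bayesian online predictor maintains a belief over hypotheses about the unknown task and teammate, updated from observations. The following is a minimal sketch of that belief update under stated assumptions; the hypothesis space and likelihood function below are illustrative placeholders, not the ATPO algorithm itself.

```python
from typing import Callable, Dict, Hashable, Tuple

Hypothesis = Tuple[str, str]  # (task id, teammate-model id) -- an assumed hypothesis structure

def bayes_update(
    belief: Dict[Hypothesis, float],
    likelihood: Callable[[Hypothesis, Hashable], float],
    observation: Hashable,
) -> Dict[Hypothesis, float]:
    """Posterior is proportional to prior times P(observation | hypothesis), renormalised."""
    unnormalised = {h: p * likelihood(h, observation) for h, p in belief.items()}
    z = sum(unnormalised.values()) or 1.0   # guard against all-zero likelihoods
    return {h: p / z for h, p in unnormalised.items()}


# Usage: uniform prior over a toy hypothesis set, with a toy likelihood model.
belief = {("task_a", "mate_1"): 0.5, ("task_b", "mate_2"): 0.5}
toy_likelihood = lambda h, obs: 0.8 if h[0] == obs else 0.2
belief = bayes_update(belief, toy_likelihood, "task_a")   # posterior shifts toward task_a
```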
Second, we provide a novel four-state MDP that highlights the impact of the data distribution on the performance of a Q-learning algorithm with function approximation, in both online and offline settings.
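For intuition, the sketch below shows offline Q-learning with linear function approximation on a small randomly generated MDP, making explicit where the data distribution over transitions enters the update. The MDP, features, and dataset are illustrative assumptions, not the four-state example constructed in the paper.

```python
import numpy as np

n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.1
rng = np.random.default_rng(0)

phi = rng.normal(size=(n_states, n_actions, 3))   # feature map phi(s, a)
w = np.zeros(3)                                   # linear approximation Q(s, a) = phi(s, a) @ w

def q(s: int, a: int) -> float:
    return float(phi[s, a] @ w)

# A fixed offline dataset of transitions (s, a, r, s'); its empirical distribution over
# (s, a) pairs is precisely the "data distribution" whose effect is being studied.
dataset = [(rng.integers(n_states), rng.integers(n_actions),
            float(rng.normal()), rng.integers(n_states)) for _ in range(500)]

for s, a, r, s_next in dataset:
    target = r + gamma * max(q(s_next, b) for b in range(n_actions))
    td_error = target - q(s, a)
    w += alpha * td_error * phi[s, a]             # semi-gradient Q-learning step
```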
In Machine Translation, assessing the quality of a large number of automatic translations can be challenging.
Our methodology addresses the lack of standardization in the literature, which renders comparisons across different works meaningless due to differences in metrics, environments, and even experimental design and methodology.
We approach all the subtasks by applying a graph clustering algorithm to contextualized embedding representations of the verbs and arguments.
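As a rough illustration of this idea, the sketch below builds a cosine-similarity graph over embedding vectors and clusters it with greedy modularity maximisation. The random embeddings, similarity threshold, and choice of modularity-based clustering are stand-in assumptions, not the specific embeddings or clustering algorithm used in the paper.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(20, 768))                       # placeholder contextualized vectors
normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = normed @ normed.T                                # pairwise cosine similarities

# Build a graph whose edges connect sufficiently similar verb/argument mentions.
graph = nx.Graph()
graph.add_nodes_from(range(len(embeddings)))
threshold = 0.0                                               # illustrative cutoff
for i in range(len(embeddings)):
    for j in range(i + 1, len(embeddings)):
        if similarity[i, j] > threshold:
            graph.add_edge(i, j, weight=float(similarity[i, j]))

clusters = greedy_modularity_communities(graph, weight="weight")
print([sorted(c) for c in clusters])                          # each cluster groups similar mentions
```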