We introduce Language-Informed Latent Actions (LILA), a framework for learning natural language interfaces in the context of human-robot collaboration.
In many applications of machine learning (ML), model updates are performed with the goal of improving performance.
The reliability of machine learning systems rests critically on the assumption that the associations between features and labels remain similar between the training and test distributions.
Because training artificial neural networks from scratch resembles presenting novel objects to humans, we seek to understand the factors that influence the tolerance of CNNs to spatial transformations.
Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity: minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss.
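A minimal numeric sketch of this effect (hypothetical data, not from the source): when a predictor is fit by minimizing average squared loss over a 90/10 mixture of two groups with different optima, the fitted value is pulled toward the majority group, leaving the minority with much higher loss.

```python
import numpy as np

# Hypothetical two-group data: the majority (90% of samples) is centered
# at 0.0, the minority (10%) at 1.0.
rng = np.random.default_rng(0)
majority = rng.normal(0.0, 0.1, size=900)
minority = rng.normal(1.0, 0.1, size=100)
data = np.concatenate([majority, minority])

# Minimizing average squared loss with a single scalar predictor yields
# the global mean, which sits close to the majority's optimum.
theta = data.mean()

loss_majority = np.mean((majority - theta) ** 2)
loss_minority = np.mean((minority - theta) ** 2)

print(f"theta = {theta:.3f}")          # near 0.1, far from the minority's 1.0
print(f"majority loss = {loss_majority:.3f}")
print(f"minority loss = {loss_minority:.3f}")
# The minority's loss is roughly an order of magnitude higher: its 10%
# weight barely influences the average-loss objective.
```

The same mechanism appears with any loss averaged uniformly over samples; the minority's per-group loss can grow arbitrarily large as its mixture weight shrinks.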