Many datasets have recently been created for medical image segmentation tasks, and it is natural to ask whether we can use them to sequentially train a single model that (1) performs better on all of these datasets and (2) generalizes well and transfers better to unknown target domains.
We propose a novel system that takes as input the body movements of a musician playing a musical instrument and generates music in an unsupervised setting.
The balanced learning strategy enables BI-MAML both to outperform other state-of-the-art models in classification accuracy on existing tasks and to adapt efficiently to similar new tasks with fewer required shots.
Given input sequences of body keypoints obtained during various movements, our system associates each sequence with an action.
We show that the methodology provides a high-quality unsupervised categorization of movements.
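The text does not specify how the unsupervised categorization of keypoint sequences is performed. As a purely illustrative sketch, one generic approach is to summarize each sequence with simple hand-crafted features (mean pose and mean frame-to-frame motion) and cluster the feature vectors with k-means; the feature choice, the two-cluster setup, and the synthetic data below are all assumptions, not the authors' method.

```python
import numpy as np

def extract_features(seq):
    # seq: (T, K, 2) array of K body keypoints tracked over T frames.
    # Features: mean pose plus mean per-keypoint frame-to-frame motion.
    mean_pose = seq.mean(axis=0).ravel()
    motion = np.abs(np.diff(seq, axis=0)).mean(axis=0).ravel()
    return np.concatenate([mean_pose, motion])

def kmeans_two(X, iters=50):
    # Minimal 2-cluster k-means with a deterministic farthest-point init.
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(axis=1))]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=-1), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic example: two movement types, low-amplitude vs. high-amplitude jitter.
rng = np.random.default_rng(1)
slow = [rng.normal(0.0, 0.01, (30, 5, 2)) for _ in range(4)]
fast = [rng.normal(0.0, 0.5, (30, 5, 2)) for _ in range(4)]
X = np.stack([extract_features(s) for s in slow + fast])
labels = kmeans_two(X)
```

On this toy data the two movement types end up in separate clusters; a real system would replace the hand-crafted features with learned sequence representations.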