Notably, we show that with as little as 3% of the data labeled, FedSTAR improves the recognition rate by 13.28% on average compared to the fully supervised federated model.
Nevertheless, designing a deep neural architecture that performs competitively across various tasks is challenging, as existing methods fail to capture long-range dependencies in the input sequences and perform poorly on lengthy process traces.
We, therefore, propose a combination of in-domain and cross-domain transfer learning strategies for damage detection in bridges.
We propose CHARM, a method for training a single neural network across inconsistent input channels.
We introduce COLA, a self-supervised pre-training approach for learning a general-purpose representation of audio.
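To make the pre-training idea concrete, the sketch below shows a contrastive objective of the kind COLA builds on: two segments of the same audio clip form a positive pair, other clips in the batch serve as negatives, and similarity is scored with a learned bilinear form. This is a minimal illustration under those assumptions, not COLA's exact implementation; the class name and encoder interface are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastivePretrainer(nn.Module):
    """Illustrative contrastive pre-training head: two segments cropped
    from the same clip are a positive pair; all other clips in the
    batch act as in-batch negatives."""

    def __init__(self, encoder: nn.Module, embed_dim: int):
        super().__init__()
        self.encoder = encoder
        # Learned bilinear similarity between anchor and positive embeddings.
        self.bilinear = nn.Parameter(torch.eye(embed_dim))

    def forward(self, anchor: torch.Tensor, positive: torch.Tensor) -> torch.Tensor:
        za = self.encoder(anchor)             # (batch, embed_dim)
        zp = self.encoder(positive)           # (batch, embed_dim)
        logits = za @ self.bilinear @ zp.t()  # (batch, batch) similarity matrix
        # Diagonal entries correspond to the true (positive) pairs.
        targets = torch.arange(za.size(0), device=za.device)
        return F.cross_entropy(logits, targets)
```

Any encoder that maps an audio segment (e.g., a log-mel spectrogram) to a fixed-size embedding can be plugged in; the learned weights are then reused as a general-purpose audio representation.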
Likewise, the representations learned with self-supervision prove highly transferable between related datasets, even when few labeled instances are available from the target domains.
Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction.
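For reference, the conventional supervised setting this sentence describes follows the standard federated averaging protocol (FedAvg): each client trains a copy of the global model on its local labeled data, and the server averages the resulting weights. The sketch below is a minimal single-round illustration with uniform client weighting; production FedAvg weights clients by their sample counts, and the function and parameter names here are illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def federated_averaging_round(global_model, client_loaders, local_steps=1, lr=0.01):
    """One FedAvg round: local supervised training on each client,
    then an element-wise average of the client weights at the server."""
    client_states = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(local_steps):
            for x, y in loader:  # assumes labeled (x, y) pairs on-device
                opt.zero_grad()
                loss = F.cross_entropy(model(x), y)
                loss.backward()
                opt.step()
        client_states.append(model.state_dict())
    # Server aggregation: uniform average, cast back to each tensor's dtype.
    avg_state = {}
    for k in client_states[0]:
        stacked = torch.stack([s[k].float() for s in client_states])
        avg_state[k] = stacked.mean(0).to(client_states[0][k].dtype)
    global_model.load_state_dict(avg_state)
    return global_model
```

The supervised loss in the inner loop is exactly where the labeled-data assumption enters; on-device data rarely comes with such labels.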
In this paper, we propose a multi-stream temporal convolutional network to address the problem of multi-label behavioral context recognition.
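One plausible layout for such a model is sketched below: a stack of dilated 1-D convolutions per sensor modality (one stream each), with the fused features feeding a multi-label head trained with a per-label sigmoid. The block depths, channel sizes, and label count are placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Dilated 1-D convolution block, the basic TCN unit."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

class MultiStreamTCN(nn.Module):
    """One TCN stream per sensor modality; fused features feed a
    multi-label head (one logit per context label)."""
    def __init__(self, stream_channels, hidden=64, num_labels=25):
        super().__init__()
        self.streams = nn.ModuleList(
            nn.Sequential(TemporalBlock(c, hidden, dilation=1),
                          TemporalBlock(hidden, hidden, dilation=2),
                          TemporalBlock(hidden, hidden, dilation=4))
            for c in stream_channels
        )
        self.head = nn.Linear(hidden * len(stream_channels), num_labels)

    def forward(self, xs):
        # xs: one (batch, channels, time) tensor per modality stream.
        feats = [s(x).mean(dim=-1) for s, x in zip(self.streams, xs)]
        return self.head(torch.cat(feats, dim=1))  # multi-label logits
```

Because context labels are not mutually exclusive, training would use an independent sigmoid per label (e.g., `nn.BCEWithLogitsLoss`) rather than a softmax over classes.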
Stress can be seen as a physiological response to everyday emotional, mental, and physical challenges.