At the sequence level, the attention scores are visualized as the influence of different neighboring epochs in an input sequence (i.e., the context) on the recognition of a target epoch, mimicking the way manual scoring is done by human experts.
Modern sleep monitoring development is shifting towards the use of unobtrusive sensors combined with algorithms for automatic sleep scoring.
The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and outside the clinic.
To our knowledge, this framework is the first example of a GAN capable of continuous ABP generation from an input PPG signal that also uses a federated learning methodology.
Detectable change points include abrupt changes in the slope, mean, variance, autocorrelation function and frequency spectrum.
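Abrupt shifts in the mean are the classic case and can be flagged with sequential statistics. A minimal sketch of a one-sided CUSUM detector for an upward mean shift (the threshold, drift, and toy data are illustrative, not taken from the study):

```python
import numpy as np

def cusum_mean_shift(x, threshold=5.0, drift=0.5):
    """One-sided CUSUM detector for an upward shift in the mean.

    Returns the index of the first sample at which the cumulative
    statistic exceeds `threshold`, or -1 if no change is detected.
    """
    s = 0.0
    mean0 = np.mean(x[:10])  # baseline mean estimated from the first samples
    for i, xi in enumerate(x):
        # Accumulate excess over the baseline, discounted by the drift term.
        s = max(0.0, s + (xi - mean0) - drift)
        if s > threshold:
            return i
    return -1

# Toy signal whose mean jumps from 0 to 3 at index 50.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 0.5, 50), rng.normal(3, 0.5, 50)])
change_at = cusum_mean_shift(signal)
```

Two-sided detection runs a mirrored statistic for downward shifts; changes in variance, autocorrelation, or the frequency spectrum call for different test statistics.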
This work proposes a sequence-to-sequence sleep staging model, XSleepNet, that is capable of learning a joint representation from both raw signals and time-frequency images.
The objective of this study was to use acceleration data recorded from smartphones to predict levels of depression in a population of participants diagnosed with bipolar disorder.
We employ the pretrained SeqSleepNet (i.e., the subject-independent model) as a starting point and fine-tune it with the single-night personalization data to derive the personalized model.
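Conceptually, personalization continues training on one night of subject-specific data. The sketch below shows the idea in miniature by updating only a final linear (logistic) head over frozen features; SeqSleepNet itself is a sequence-to-sequence network, and all names and data here are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_head(feats, labels, w_init, lr=0.5, n_steps=500):
    """Personalize a pretrained model by updating only its final linear
    (logistic) layer on a small amount of subject-specific data, keeping
    the feature extractor frozen. Illustrative stand-in, not SeqSleepNet."""
    w = w_init.copy()
    for _ in range(n_steps):
        p = sigmoid(feats @ w)
        grad = feats.T @ (p - labels) / len(labels)  # mean logistic gradient
        w -= lr * grad
    return w

# Toy personalization data: features from a hypothetical frozen extractor.
rng = np.random.default_rng(3)
feats = rng.normal(size=(40, 6))
w_true = rng.normal(size=6)
labels = (feats @ w_true > 0).astype(float)
w0 = np.zeros(6)                      # stand-in for pretrained head weights
w_personal = finetune_head(feats, labels, w0)
acc = np.mean((sigmoid(feats @ w_personal) > 0.5) == labels)
```

Freezing the feature extractor keeps the update stable when only a single night of labeled data is available.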
The former constrains the generators to learn a common mapping that is iteratively applied at all enhancement stages and results in a small model footprint.
This study investigates a minimal set of sensors to achieve effective screening for RBD in the population, integrating automated three-state sleep staging followed by RBD detection, without the need for cumbersome electroencephalogram (EEG) sensors.
We employ the Montreal Archive of Sleep Studies (MASS) database consisting of 200 subjects as the source domain and study deep transfer learning on three different target domains: the Sleep Cassette subset and the Sleep Telemetry subset of the Sleep-EDF Expanded database, and the Surrey-cEEGrid database.
This work presents a deep transfer learning approach to overcome the channel mismatch problem and transfer knowledge from a large dataset to a small cohort to study automatic sleep staging with single-channel input.
Acoustic scenes are rich and redundant in their content.
This study also achieved automated sleep staging with a level of accuracy comparable to manual annotation.
Moreover, as model fusion with deep network ensembles is prevalent in audio scene classification, we further study whether, and if so when, model fusion is necessary for this task.
We propose a multi-label multi-task framework based on a convolutional recurrent neural network to unify detection of isolated and overlapping audio events.
At the sequence processing level, a recurrent layer is placed on top of the learned epoch-wise features for long-term modelling of sequential epochs.
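Such a recurrent layer consumes one feature vector per epoch and emits a context-aware state per epoch. A minimal Elman-style forward pass in NumPy (real sleep-staging models typically use bidirectional LSTM or GRU layers; the weights here are random, purely for illustration):

```python
import numpy as np

def rnn_forward(epoch_feats, W_x, W_h, b):
    """Run a simple (Elman-style) recurrent layer over a sequence of
    per-epoch feature vectors, returning one hidden state per epoch."""
    T, _ = epoch_feats.shape
    H = W_h.shape[0]
    h = np.zeros(H)
    states = np.empty((T, H))
    for t in range(T):
        # Each state mixes the current epoch's features with the history.
        h = np.tanh(epoch_feats[t] @ W_x + h @ W_h + b)
        states[t] = h
    return states

rng = np.random.default_rng(1)
feats = rng.normal(size=(20, 8))          # 20 epochs, 8 features per epoch
W_x = rng.normal(scale=0.1, size=(8, 16))
W_h = rng.normal(scale=0.1, size=(16, 16))
b = np.zeros(16)
states = rnn_forward(feats, W_x, W_h, b)  # one 16-d state per epoch
```

A per-epoch classifier applied to each state then yields one stage decision per epoch, which is what makes the model sequence-to-sequence.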
While the proposed framework is orthogonal to the widely adopted classification schemes, which take one or multiple epochs as contextual inputs and produce a single classification decision on the target epoch, we demonstrate its advantages in several ways.
Starting from a general convolutional neural network architecture, we allow the model to learn individual characteristics of the first night of sleep in order to quantify sleep stages of the second night.
Similarly, the convolutional neural network scored 72.1% on the augmented database and 83% on the test set.
Significance: The proposed method enables successful compressed sensing of EEG signals even when the signals have no good sparse representation.
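For context, the standard compressed-sensing baseline that such work improves on recovers a signal assumed sparse, e.g. via the LASSO. A minimal iterative soft-thresholding (ISTA) sketch of that baseline (dimensions, regularization, and data are illustrative):

```python
import numpy as np

def ista(Phi, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding (ISTA) for the LASSO problem
    min_x 0.5 * ||Phi @ x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)      # gradient of the quadratic term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
n, m, k = 128, 64, 5                      # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x_true                          # compressed measurements
x_hat = ista(Phi, y)
```

When the signal has no good sparse representation, this baseline degrades, which is precisely the gap the proposed method targets.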