EEG-based Emotional Video Classification via Learning Connectivity Structure

28 May 2019 · Soobeom Jang, Seong-Eun Moon, Jong-Seok Lee

Electroencephalography (EEG) is a useful way to implicitly monitor the user's perceptual state during multimedia consumption. One of the primary challenges for the practical use of EEG-based monitoring is to achieve a satisfactory level of accuracy in EEG classification. Connectivity between different brain regions is an important property for the classification of EEG. However, how to define the connectivity structure for a given task is still an open problem, because there is no ground truth about what the connectivity structure should be in order to maximize the classification performance. In this paper, we propose an end-to-end neural network model for EEG-based emotional video classification, which can extract an appropriate multi-layer graph structure and signal features directly from a set of raw EEG signals and perform classification using them. Experimental results demonstrate that our method yields improved performance in comparison to the existing approaches, where manually defined connectivity structures and signal features are used. Furthermore, we show that the graph structure extraction process is reliable in terms of consistency, and that the learned graph structures are meaningful from the viewpoint of emotional perception in the brain.
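To make the core idea concrete, the following is a minimal NumPy sketch of graph-based EEG classification with a *learned* connectivity structure, in the spirit of the abstract: an adjacency matrix over electrodes is a trainable parameter rather than a manually defined structure, and a graph-convolution layer propagates per-electrode features over it before pooling and classification. All names, dimensions, and the single-layer architecture are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_electrodes, n_features, n_hidden, n_classes = 32, 128, 64, 3

# Hypothetical learnable parameters (illustrative, not from the paper):
# connectivity logits over electrode pairs, a graph-convolution weight,
# and a linear classifier head.
adj_logits = rng.normal(size=(n_electrodes, n_electrodes))
W_gc = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_cls = rng.normal(scale=0.1, size=(n_hidden, n_classes))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def forward(eeg_features):
    """eeg_features: (n_electrodes, n_features) per-electrode signal features."""
    A = softmax(adj_logits, axis=-1)             # learned, row-normalized connectivity
    h = np.maximum(A @ eeg_features @ W_gc, 0.0)  # one graph-convolution layer + ReLU
    pooled = h.mean(axis=0)                       # pool over electrodes
    return softmax(pooled @ W_cls)                # class probabilities

x = rng.normal(size=(n_electrodes, n_features))  # stand-in for extracted EEG features
probs = forward(x)
print(probs.shape, float(probs.sum()))
```

In training, `adj_logits` would be updated by backpropagation along with the other weights, so the connectivity structure itself is fit to the classification objective instead of being fixed in advance; the softmax normalization is just one simple way to keep the learned adjacency non-negative and bounded.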
