Identification of Cognitive Workload during Surgical Tasks with Multimodal Deep Learning

The operating room (OR) is a dynamic and complex environment in which a multidisciplinary team works together under high stakes to provide safe and efficient patient care. Additionally, surgeons are frequently exposed to multiple psycho-organisational stressors that may harm their immediate technical performance and long-term health. Many factors can therefore increase Cognitive Workload (CWL), such as time pressure, unfamiliar anatomy, or distractions in the OR. In this paper, a cascade of two machine learning approaches is proposed for the multimodal recognition of CWL in four different surgical task conditions. First, a model based on transfer learning is used to identify whether a surgeon is experiencing any CWL. Second, a Convolutional Neural Network (CNN) uses this information to identify the degree of CWL associated with each surgical task. The proposed multimodal approach considers adjacent signals from electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and eye pupil diameter. Concatenating the signals allows the model to capture complex correlations in terms of time (temporal) and channel location (spatial). Data collection was performed using the Multi-sensing AI Environment for Surgical Task & Role Optimisation (MAESTRO) platform, developed at the Hamlyn Centre, Imperial College London. To benchmark the performance of the proposed methodology, several state-of-the-art machine learning techniques were implemented. The tests show that the proposed model achieves a precision of 93%.
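
To make the two-stage cascade concrete, the sketch below shows one plausible way to wire it up in PyTorch: the modalities are concatenated along the channel axis, a first-stage detector flags windows with any CWL, and a second-stage CNN grades the degree of CWL for the flagged windows. This is a minimal illustration, not the authors' implementation: the channel counts, window length, layer sizes, and all module names (`CWLDetector`, `CWLGrader`, `cascade_predict`) are assumptions, and the stand-in 1-D CNN backbone in stage 1 replaces whatever pretrained network the paper's transfer-learning step actually uses.

```python
import torch
import torch.nn as nn

# Hypothetical channel counts and window length -- not specified in the abstract.
EEG_CH, FNIRS_CH, PUPIL_CH = 32, 16, 1
N_CHANNELS = EEG_CH + FNIRS_CH + PUPIL_CH  # modalities concatenated channel-wise
SEQ_LEN = 256                              # samples per analysis window (assumed)
N_DEGREES = 4                              # one CWL degree per surgical task condition


class CWLDetector(nn.Module):
    """Stage 1: binary detector -- is the surgeon experiencing any CWL?
    The paper builds this stage via transfer learning; here a small
    1-D CNN stands in for the pretrained backbone."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 2)  # CWL vs. no CWL

    def forward(self, x):  # x: (batch, channels, time)
        return self.head(self.backbone(x).squeeze(-1))


class CWLGrader(nn.Module):
    """Stage 2: CNN that grades the degree of CWL. Stacked temporal
    convolutions over the concatenated channels let the filters learn
    both temporal and cross-channel (spatial) correlations."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(128, N_DEGREES),
        )

    def forward(self, x):
        return self.net(x)


def cascade_predict(detector, grader, eeg, fnirs, pupil):
    """Concatenate the modalities, then run the two stages in sequence:
    windows the detector marks as negative are reported as no-CWL (-1)."""
    x = torch.cat([eeg, fnirs, pupil], dim=1)  # (batch, N_CHANNELS, time)
    has_cwl = detector(x).argmax(dim=1).bool()
    degrees = grader(x).argmax(dim=1)
    degrees[~has_cwl] = -1  # -1 marks "no CWL detected" by stage 1
    return degrees


if __name__ == "__main__":
    batch = 4
    eeg = torch.randn(batch, EEG_CH, SEQ_LEN)
    fnirs = torch.randn(batch, FNIRS_CH, SEQ_LEN)
    pupil = torch.randn(batch, PUPIL_CH, SEQ_LEN)
    print(cascade_predict(CWLDetector(), CWLGrader(), eeg, fnirs, pupil))
```

Gating the grader on the detector's output is what makes this a cascade rather than a single multi-class classifier; in practice the two stages would be trained separately, with the first fine-tuned from a pretrained network as the abstract describes.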
