EasyCom: An Augmented Reality Dataset to Support Algorithms for Easy Communication in Noisy Environments

Augmented Reality (AR) as a platform has the potential to reduce the cocktail party effect. Future AR headsets could leverage information from an array of sensors spanning many different modalities. Training and testing signal processing and machine learning algorithms on tasks such as beamforming and speech enhancement require high-quality, representative data. To the best of the authors' knowledge, as of publication there are no available datasets that contain synchronized egocentric multi-channel audio and video with dynamic movement and conversations in a noisy environment. In this work, we describe, evaluate and release a dataset that contains over 5 hours of multi-modal data useful for training and testing algorithms for the application of improving conversations for an AR glasses wearer. We provide speech intelligibility, quality and signal-to-noise ratio improvement results for a baseline method and show improvements across all tested metrics. The dataset we are releasing contains AR glasses egocentric multi-channel microphone array audio, wide field-of-view RGB video, speech source pose, headset microphone audio, annotated voice activity, speech transcriptions, head bounding boxes, target of speech and source identification labels. We have created and are releasing this dataset to facilitate research in multi-modal AR solutions to the cocktail party problem.
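The abstract lists the synchronized modalities included per session (array audio, headset reference audio, video, and annotations). Below is a minimal sketch of how such a session might be read for training or evaluation; the file names, channel layout, and JSON schema are illustrative assumptions rather than the dataset's documented structure, which is defined in the EasyCom release itself.

```python
# Illustrative sketch only: paths, channel count, and annotation schema are
# assumptions, not the official EasyCom layout.
import json
import soundfile as sf

# Egocentric AR-glasses microphone array audio (hypothetical path).
array_audio, fs = sf.read("session_01/glasses_array.wav")   # shape: (samples, channels)

# Close-talk headset reference audio for the same session (hypothetical path).
ref_audio, fs_ref = sf.read("session_01/headset_ref.wav")

# Voice-activity annotations, assumed to be a list of {start, end, speaker} dicts
# with times in seconds.
with open("session_01/vad.json") as f:
    vad = json.load(f)

# Slice multi-channel speech segments according to the annotated voice activity.
for segment in vad:
    start = int(segment["start"] * fs)
    end = int(segment["end"] * fs)
    chunk = array_audio[start:end, :]
    print(segment["speaker"], chunk.shape)
```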


Datasets


Introduced in the Paper:

EasyCom

Used in the Paper:

CHiME-5, DiPCo

Results from the Paper


Task: Speech Enhancement    Dataset: EasyCom    Model: MaxDI (Baseline)

Metric        Value    Global Rank
STOI          0.544    #1
PESQ          1.17     #1
ViSQOL        1.68     #1
HASQI         0.249    #1
SIIB          139      #1
HASPI         0.830    #1
ESTOI         0.379    #1
SDR (dB)      -12.9    #1
SegSNR (dB)   -12.2    #1
SNR (dB)      -10.1    #1
SI-SDR (dB)   -23.4    #1
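For reference, several of the metrics above have common open-source implementations. The sketch below shows how STOI, ESTOI, PESQ, and SI-SDR are typically computed between a clean reference and an enhanced signal; it is not the paper's evaluation code, and the signals and sample rate are placeholders.

```python
# Hedged sketch of standard metric computation with pystoi, pesq, and numpy.
# Not the paper's evaluation pipeline; inputs below are synthetic placeholders.
import numpy as np
from pystoi import stoi     # pip install pystoi
from pesq import pesq       # pip install pesq (wideband mode expects fs = 16 kHz)

def si_sdr(reference, estimate, eps=1e-8):
    """Scale-invariant SDR in dB between a clean reference and an enhanced estimate."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    scale = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = scale * reference
    noise = estimate - target
    return 10 * np.log10((np.sum(target**2) + eps) / (np.sum(noise**2) + eps))

fs = 16000
clean = np.random.randn(fs * 4)                    # placeholder clean reference
enhanced = clean + 0.5 * np.random.randn(fs * 4)   # placeholder enhanced output

print("STOI  :", stoi(clean, enhanced, fs, extended=False))
print("ESTOI :", stoi(clean, enhanced, fs, extended=True))
print("PESQ  :", pesq(fs, clean, enhanced, "wb"))
print("SI-SDR:", si_sdr(clean, enhanced))
```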

Methods


No methods listed for this paper.