LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild

Large-scale datasets have repeatedly proven their fundamental importance in several research fields, especially for early progress on emerging topics. In this paper, we focus on the problem of visual speech recognition, also known as lipreading, which has received increasing interest in recent years. We present a naturally-distributed large-scale benchmark for lip reading in the wild, named LRW-1000, which contains 1,000 classes with 718,018 samples from more than 2,000 individual speakers. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. To the best of our knowledge, it is currently the largest word-level lipreading dataset and the only public large-scale Mandarin lipreading dataset. The dataset aims to cover a "natural" variability over different speech modes and imaging conditions, so as to incorporate the challenges encountered in practical applications. The benchmark exhibits large variation in several aspects, including the number of samples per class, video resolution, lighting conditions, and speaker attributes such as pose, age, gender, and make-up. Besides providing a detailed description of the dataset and its collection pipeline, we evaluate several popular lipreading methods and perform a thorough analysis of the results from several aspects. The results demonstrate the consistency and challenges of our dataset, which may open up new promising directions for future work.



Introduced in the Paper:

CAS-VSR-W1k (LRW-1000)

Used in the Paper:

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Lipreading | CAS-VSR-W1k (LRW-1000) | 3D Conv + ResNet-34 + Bi-GRU | Top-1 Accuracy | 38.19% | #9 |
| Lipreading | CAS-VSR-W1k (LRW-1000) | DenseNet3D + Bi-GRU | Top-1 Accuracy | 34.76% | #10 |
| Lipreading | CAS-VSR-W1k (LRW-1000) | Multi-Tower LSTM-5 | Top-1 Accuracy | 25.76% | #11 |
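The Top-1 Accuracy reported above is the fraction of test clips whose highest-scoring class matches the ground-truth word out of the 1,000 classes. A minimal sketch of the metric; the prediction and label lists below are illustrative toy data, not from the benchmark:

```python
def top1_accuracy(predictions, labels):
    """Fraction of samples whose predicted class index matches the ground truth."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example over a few of the 1,000 word-class indices (hypothetical values):
preds = [3, 17, 42, 42, 901]
truth = [3, 17, 41, 42, 900]
print(f"{top1_accuracy(preds, truth):.2%}")  # prints "60.00%" (3 of 5 correct)
```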

