Voice-Face Cross-modal Matching and Retrieval: A Benchmark

21 Nov 2019  ·  Chuyuan Xiong, Deyuan Zhang, Tao Liu, Xiaoyong Du

Cross-modal associations between a person's voice and face can be learned algorithmically, which can benefit many applications. The problem can be framed as voice-face matching and retrieval tasks. These tasks have recently attracted considerable research attention, but the work is still at an early stage: test schemes based on random tuple mining tend to have low test confidence, the generalization ability of models cannot be evaluated on small-scale datasets, and performance metrics across the various tasks are scarce. A benchmark for this problem therefore needs to be established. In this paper, first, a framework based on comprehensive studies is proposed for voice-face matching and retrieval. It achieves state-of-the-art performance on various metrics and tasks, with high test confidence on large-scale datasets, and can serve as a baseline for follow-up research. Within this framework, a voice-anchored, L2-norm-constrained metric space is proposed, and cross-modal embeddings are learned in this space with CNN-based networks and a triplet loss, which makes embedding learning more effective and efficient. Different network structures of the framework and the cross-language transfer ability of the model are also analyzed. Second, a voice-face dataset of Chinese speakers (with 1.15M face images and 0.29M audio clips) is constructed, and a convenient, quality-controllable dataset collection tool is developed. The dataset and the source code will be published together with this paper.
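To make the training objective concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes: embeddings are L2-normalized onto the unit hypersphere (the norm constraint) and a triplet loss is computed with the voice embedding as the anchor. This is an illustration, not the authors' released code; the function name `voice_anchored_triplet_loss`, the margin value, and the embedding dimension are assumptions.

```python
import torch
import torch.nn.functional as F

def voice_anchored_triplet_loss(voice_emb, face_pos_emb, face_neg_emb, margin=0.6):
    """Triplet loss on L2-normalized embeddings with the voice as anchor.

    Sketch only: the margin and the choice of squared Euclidean distance
    are assumptions, not taken from the paper.
    """
    # Project every embedding onto the unit hypersphere
    # (the L2-norm-constrained metric space).
    v = F.normalize(voice_emb, p=2, dim=1)
    f_pos = F.normalize(face_pos_emb, p=2, dim=1)
    f_neg = F.normalize(face_neg_emb, p=2, dim=1)

    # Squared Euclidean distances from the voice anchor to the
    # matching and non-matching face embeddings.
    d_pos = (v - f_pos).pow(2).sum(dim=1)
    d_neg = (v - f_neg).pow(2).sum(dim=1)

    # Hinge: pull the matching face closer than the non-matching
    # one by at least `margin`.
    return F.relu(d_pos - d_neg + margin).mean()

# Usage with dummy embeddings (in practice these come from the
# CNN-based voice and face encoders).
v = torch.randn(8, 128, requires_grad=True)   # voice anchors
fp = torch.randn(8, 128, requires_grad=True)  # faces of the same identities
fn = torch.randn(8, 128, requires_grad=True)  # faces of different identities
loss = voice_anchored_triplet_loss(v, fp, fn)
loss.backward()
print(loss.item())
```

Normalizing both modalities onto the unit sphere bounds all pairwise distances, which keeps the hinge term well scaled and is one plausible reading of why the authors report more effective and efficient embedding learning with this strategy.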
