CUNY Systems for the Query-by-Example Search on Speech Task at MediaEval 2015

This paper describes two query-by-example search systems developed by the Speech Lab at Queens College (CUNY). Our systems aim to return search results quickly from the selected reference files. Three phonetic recognizers (Czech, Hungarian, and Russian) were used to obtain phoneme sequences for both the query and the reference speech files. Each query sequence was compared with all reference sequences using both global and local aligners. In the first system, we predicted the most probable reference files from the sequence-alignment results. In the second system, we extracted from the reference sequences the subsequences that yielded the best local symbolic alignments, and then computed 39-dimensional MFCC features for both the queries and these subsequences. Both systems employed an optimized DTW and obtained Cnxe scores of 0.9989 and 1.0674, respectively, on the test data.
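The page lists no code for the paper, so the sketches below are only illustrations of the two matching stages described in the abstract, not the authors' implementations. First, a minimal Smith-Waterman-style local aligner over phoneme label sequences; the function name, scoring parameters, and example labels are hypothetical, and the paper's actual global and local aligners may use different scoring.

```python
import numpy as np

def local_align(query, reference, match=1.0, mismatch=-1.0, gap=-1.0):
    """Smith-Waterman local alignment of two phoneme label sequences.

    Returns the best local score and the (start, end) indices (end exclusive)
    of the best-matching subsequence within `reference`.
    """
    n, m = len(query), len(reference)
    H = np.zeros((n + 1, m + 1))
    ptr = np.zeros((n + 1, m + 1), dtype=int)  # 0 = stop, 1 = diag, 2 = up, 3 = left
    best, bi, bj = 0.0, 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if query[i - 1] == reference[j - 1] else mismatch
            choices = (0.0,                      # start a new local alignment
                       H[i - 1, j - 1] + sub,    # match / substitution
                       H[i - 1, j] + gap,        # gap in the reference
                       H[i, j - 1] + gap)        # gap in the query
            k = int(np.argmax(choices))
            H[i, j], ptr[i, j] = choices[k], k
            if H[i, j] > best:
                best, bi, bj = H[i, j], i, j
    # Trace back from the best cell to find where the match starts in the reference.
    i, j = bi, bj
    while ptr[i, j] != 0:
        if ptr[i, j] == 1:
            i, j = i - 1, j - 1
        elif ptr[i, j] == 2:
            i -= 1
        else:
            j -= 1
    return best, (j, bj)

# Hypothetical example: the span bounds the candidate subsequence to re-score later.
score, span = local_align(["a", "b", "d", "k"], ["s", "a", "b", "c", "d", "k", "o"])
```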
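The abstract also mentions re-scoring with 39-dimensional MFCC features and an optimized DTW; neither the optimized variant nor the iSAX-based indexing of the first system is detailed on this page, so the following is just a plain, unoptimized DTW between two hypothetical MFCC frame matrices (NumPy arrays of shape (frames, 39)).

```python
import numpy as np

def dtw_distance(q, r):
    """Plain DTW between two MFCC frame sequences of shape (frames, 39).

    Returns the cumulative frame distance normalized by the path-length bound.
    """
    Tq, Tr = len(q), len(r)
    # Pairwise Euclidean distances between query and reference frames.
    dist = np.linalg.norm(q[:, None, :] - r[None, :, :], axis=-1)
    acc = np.full((Tq + 1, Tr + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Tq + 1):
        for j in range(1, Tr + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j],      # insertion
                                                 acc[i, j - 1],      # deletion
                                                 acc[i - 1, j - 1])  # match
    return acc[Tq, Tr] / (Tq + Tr)

# Hypothetical usage with random stand-ins for real MFCC matrices.
query_mfcc = np.random.randn(80, 39)
subseq_mfcc = np.random.randn(120, 39)
cost = dtw_distance(query_mfcc, subseq_mfcc)
```

Restricting this frame-level comparison to the locally aligned reference subsequences, rather than to whole reference files, is consistent with the stated aim of returning search results quickly.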


Datasets


QUESST

Results from the Paper


Task               Dataset   Model                       Metric    Value     Global Rank
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (eval)   Cnxe       1.0674   # 54
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (eval)   MinCnxe    0.9853   # 63
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (eval)   ATWV      -4.0205   # 33
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (eval)   MTWV       0.0006   # 35
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (dev)    Cnxe       1.0658   # 53
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (dev)    MinCnxe    0.9823   # 62
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (dev)    ATWV      -3.9820   # 33
Keyword Spotting   QUESST    CUNY [Subseq+MFCC] (dev)    MTWV       0.0123   # 32
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (eval)      Cnxe       0.9989   # 50
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (eval)      MinCnxe    0.9870   # 64
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (eval)      ATWV       0.0006   # 31
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (eval)      MTWV       0.0010   # 34
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (dev)       Cnxe       0.9988   # 49
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (dev)       MinCnxe    0.9872   # 65
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (dev)       ATWV       0.0011   # 30
Keyword Spotting   QUESST    CUNY [SMO+iSAX] (dev)       MTWV       0.0067   # 33
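As background (not stated on this page): Cnxe is the normalized cross-entropy cost used as the primary metric in the QUESST evaluation, and MinCnxe is its value after optimal score calibration. One commonly cited form, assuming a target prior P_tar and per-trial posteriors derived from the system scores s_t, is sketched below; lower values are better, and Cnxe = 1 corresponds to scores that carry no information beyond the prior.

```latex
% Background sketch of the metric, assuming posteriors P(tar | s_t) obtained
% from the submitted scores s_t and a fixed target prior P_tar.
C_{xe} = \frac{P_{\mathrm{tar}}}{N_{\mathrm{tar}}} \sum_{t \in \mathrm{tar}} -\log_2 P(\mathrm{tar}\mid s_t)
       + \frac{1 - P_{\mathrm{tar}}}{N_{\mathrm{non}}} \sum_{t \in \mathrm{non}} -\log_2 \left(1 - P(\mathrm{tar}\mid s_t)\right)
\qquad
C_{nxe} = \frac{C_{xe}}{-P_{\mathrm{tar}}\log_2 P_{\mathrm{tar}} - (1 - P_{\mathrm{tar}})\log_2\left(1 - P_{\mathrm{tar}}\right)}
```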
