Seq2Tok: Deep Sequence Tokenizer for Retrieval

29 Sep 2021 · Adhiraj Banerjee, Vipul Arora

Search over sequences is a fundamental problem. Very efficient solutions exist for text sequences, which are composed of discrete tokens drawn from a finite alphabet. Other sequences, such as audio, video, or sensor readings, consist of continuous-valued samples at high sampling rates, which makes similarity search inefficient. This paper proposes Seq2Tok, a deep sequence tokenizer that converts continuous-valued sequences into discrete tokens that are easier to retrieve via sequence queries. The only supervision available for training Seq2Tok is pairs of similar sequences; the similarity semantics are therefore determined by how the pairs are formed. Seq2Tok compresses the query and target sequences into short sequences of tokens that are faster to match. Experiments show consistent performance of Seq2Tok across audio retrieval tasks, namely music search (query by humming) and speech keyword search via audio queries.
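The abstract does not spell out the architecture, but the idea of learning a discrete tokenizer from pairs of similar sequences can be sketched with a convolutional encoder that shortens the continuous-valued sequence, a learnt codebook that quantizes each encoded frame to a token (VQ-VAE-style straight-through quantization), and a simple pairwise loss that pulls the codes of similar sequences together. The layer sizes, codebook size, downsampling factor, and loss terms below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2TokSketch(nn.Module):
    """Hypothetical tokenizer: continuous frames -> short discrete token sequence."""
    def __init__(self, in_dim=40, hid_dim=128, n_tokens=256, downsample=4):
        super().__init__()
        # Encoder shortens the sequence by `downsample` and embeds each frame.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_dim, hid_dim, kernel_size=5, stride=downsample, padding=2),
            nn.ReLU(),
            nn.Conv1d(hid_dim, hid_dim, kernel_size=3, padding=1),
        )
        # Codebook: one embedding vector per discrete token.
        self.codebook = nn.Embedding(n_tokens, hid_dim)

    def forward(self, x):
        # x: (batch, time, in_dim), e.g. log-mel frames of a hummed query.
        z = self.encoder(x.transpose(1, 2)).transpose(1, 2)  # (batch, time', hid_dim)
        # Assign each encoded frame to its nearest codebook entry.
        d2 = (z.unsqueeze(2) - self.codebook.weight).pow(2).sum(dim=-1)
        tokens = d2.argmin(dim=-1)                            # (batch, time')
        # Straight-through estimator lets gradients reach the encoder.
        q = self.codebook(tokens)
        q = z + (q - z).detach()
        return tokens, z, q

def pair_loss(model, seq_a, seq_b, beta=0.25):
    """Stand-in pairwise objective: similar sequences get similar quantized codes."""
    ta, za, qa = model(seq_a)
    tb, zb, qb = model(seq_b)
    # Mean-pool over time so sequences of different lengths can be compared.
    sim = F.cosine_similarity(qa.mean(dim=1), qb.mean(dim=1))
    # VQ-VAE-style terms: commit encoder outputs to the codebook and move
    # the selected codebook entries toward the encoder outputs.
    vq = (F.mse_loss(za, qa.detach()) + F.mse_loss(model.codebook(ta), za.detach())
          + F.mse_loss(zb, qb.detach()) + F.mse_loss(model.codebook(tb), zb.detach()))
    return (1.0 - sim).mean() + beta * vq

if __name__ == "__main__":
    model = Seq2TokSketch()
    hum = torch.randn(2, 400, 40)    # 400 continuous frames per query
    song = torch.randn(2, 480, 40)   # a similar (paired) target excerpt
    pair_loss(model, hum, song).backward()
    tokens, _, _ = model(hum)
    print(tokens.shape)              # torch.Size([2, 100]): 4x shorter, and discrete

Retrieval would then reduce to matching the short token sequences, for example with an inverted index or an edit-distance-style alignment, which is far cheaper than frame-level comparison of the raw continuous signals.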

