no code implementations • 15 Aug 2024 • Kento Nozawa, Takashi Masuko, Toru Taniguchi
We develop a large language model (LLM) based automatic speech recognition (ASR) system that can be contextualized by providing keywords as prior information in text prompts.
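Contextualization by keywords in a text prompt can be sketched as a simple prompt template. The function name and prompt format below are hypothetical illustrations of the general idea, not the authors' implementation:

```python
# Hypothetical prompt construction for keyword-contextualized LLM-based ASR:
# keywords are injected as prior information in the text prompt, and the
# LLM decoder is asked to prefer them when they match the audio.
def build_prompt(keywords, instruction="Transcribe the audio."):
    kw = ", ".join(keywords)
    return (f"Keywords: {kw}\n"
            f"{instruction} Prefer the listed keywords when they match.")

prompt = build_prompt(["Kyoto", "Shinkansen"])
```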
Automatic Speech Recognition (ASR) +4
no code implementations • 18 Apr 2022 • Kento Nozawa, Issei Sato
Representation learning enables us to automatically extract generic feature representations from a dataset to solve another machine learning task.
1 code implementation • 6 Oct 2021 • Han Bao, Yoshihiro Nagano, Kento Nozawa
Recent theoretical studies have attempted to explain the benefit of the large negative sample size by upper-bounding the downstream classification loss with the contrastive loss.
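The contrastive loss in question is typically an InfoNCE-style objective over one positive and K negative samples; the role of K is visible in the softmax denominator below. This is a minimal NumPy sketch for illustration, not the paper's code:

```python
import numpy as np

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE contrastive loss for a single anchor: cross-entropy that
    pulls the anchor toward its positive and away from K negatives.
    The negative sample size K is len(negatives)."""
    sims = np.array(
        [anchor @ positive] + [anchor @ n for n in negatives]
    ) / temperature
    # negative log-softmax probability of the positive among K+1 candidates
    return -(sims[0] - np.log(np.sum(np.exp(sims))))

rng = np.random.default_rng(0)
vec = lambda: rng.normal(size=8) / np.sqrt(8)
loss = infonce_loss(vec(), vec(), [vec() for _ in range(16)])
```

The theoretical works referenced here study how this loss, for large K, upper-bounds the loss of a downstream classifier built on the learned representations.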
1 code implementation • NeurIPS 2021 • Kento Nozawa, Issei Sato
Instance-discriminative self-supervised representation learning has attracted attention thanks to its unsupervised nature and informative feature representations for downstream tasks.
1 code implementation • 10 Oct 2019 • Kento Nozawa, Pascal Germain, Benjamin Guedj
Contrastive unsupervised representation learning (CURL) is the state-of-the-art technique to learn representations (as a set of features) from unlabelled data.
no code implementations • 12 Feb 2019 • Kento Nozawa, Issei Sato
Learning sentence vectors from an unlabeled corpus has attracted attention because such vectors can represent sentences in a lower dimensional and continuous space.
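A common baseline for such sentence vectors is averaging word vectors, which already places sentences in a low-dimensional continuous space. The sketch below is illustrative only (random word vectors, toy vocabulary), not the paper's method:

```python
import numpy as np

# Toy vocabulary of 16-dimensional word vectors; in practice these would
# be learned from an unlabeled corpus.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=16) for w in "the cat sat on mat".split()}

def sentence_vector(sentence):
    """Represent a sentence as the mean of its word vectors."""
    return np.mean([vocab[w] for w in sentence.split()], axis=0)

v = sentence_vector("the cat sat")
```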
1 code implementation • 18 Feb 2018 • Kento Nozawa, Masanari Kimura, Atsunori Kanemura
Embedding graph nodes into a vector space allows the use of machine learning to, e.g., predict node classes, but the study of node embedding algorithms is immature compared to the natural language processing field because of the diverse nature of graphs.
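One simple way to obtain such node vectors is a spectral embedding: take the leading eigenvectors of the adjacency matrix as coordinates and feed them to any downstream classifier. This is a minimal sketch of the general idea on a toy graph, not the algorithms studied in the paper:

```python
import numpy as np

# Toy undirected graph of 4 nodes as a symmetric adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

vals, vecs = np.linalg.eigh(A)    # eigh: exact for symmetric matrices
order = np.argsort(vals)[::-1]    # sort eigenvalues, largest first
embedding = vecs[:, order[:2]]    # 2-dimensional vector per node
```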