1 code implementation • 24 Oct 2023 • Hiroto Kurita, Goro Kobayashi, Sho Yokoi, Kentaro Inui
The performance of sentence encoders can be significantly improved through the simple practice of fine-tuning with a contrastive loss.
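As a minimal sketch of what such contrastive fine-tuning typically looks like, the snippet below implements a SimCSE-style in-batch InfoNCE objective; this is a generic illustration of the technique, not necessarily the exact loss used in this paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss over sentence embeddings.

    anchor, positive: (batch, dim) tensors; the i-th rows form a
    positive pair, and all other rows in the batch act as negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    # Cosine-similarity matrix scaled by temperature: (batch, batch)
    logits = anchor @ positive.T / temperature
    # The correct "class" for row i is column i (its positive pair).
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

# Toy usage: random vectors standing in for encoder outputs.
a = torch.randn(8, 768)
p = torch.randn(8, 768)
loss = info_nce_loss(a, p)
```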
1 code implementation • 23 Oct 2023 • Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama
Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting.
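For illustration, the snippet below contrasts a standard prompt with a zero-shot CoT prompt; the question and expected continuation are hypothetical examples, not drawn from the paper.

```python
# Standard prompt: ask for the answer directly.
standard = (
    "Q: A pen costs $2 and a notebook costs $3. "
    "How much do 2 pens and 3 notebooks cost?\nA:"
)

# Chain-of-thought prompt: elicit intermediate reasoning steps
# before the final answer (zero-shot CoT style).
cot = standard + " Let's think step by step."

# An instruction-following LLM would ideally continue with, e.g.:
# "2 pens cost 2 * $2 = $4. 3 notebooks cost 3 * $3 = $9.
#  Together that is $4 + $9 = $13. The answer is $13."
```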
no code implementations • 29 May 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
The prediction head is a crucial component of Transformer language models.
1 code implementation • 1 Feb 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Transformers are ubiquitous across a wide range of tasks.
2 code implementations • EMNLP 2021 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
The Transformer architecture has become ubiquitous in the field of natural language processing.
1 code implementation • EMNLP 2020 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing.
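This paper analyzes attention via the norm of the weighted transformed vector, ||α f(x)||, rather than the raw attention weight α alone. The sketch below illustrates that idea for a single self-attention head, with the per-head value and output projections folded into one matrix for brevity (an assumption; the paper treats the full transformation).

```python
import torch
import torch.nn.functional as F

def weight_and_norm_analysis(x, W_q, W_k, W_v):
    """Single-head self-attention, contrasting attention weights
    alpha_{i,j} with the norm-based measure ||alpha_{i,j} f(x_j)||.
    Sketch only: f(x) = x W_v, folding the output projection into W_v.
    """
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    d = q.size(-1)
    alpha = F.softmax(q @ k.T / d**0.5, dim=-1)   # (seq, seq) weights
    # Weighted transformed vectors alpha_{i,j} * f(x_j): (seq, seq, dim)
    weighted = alpha.unsqueeze(-1) * v.unsqueeze(0)
    norms = weighted.norm(dim=-1)                  # (seq, seq) norm measure
    return alpha, norms

seq, dim = 5, 16
x = torch.randn(seq, dim)
Ws = [torch.randn(dim, dim) / dim**0.5 for _ in range(3)]
alpha, norms = weight_and_norm_analysis(x, *Ws)
```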