29 Oct 2024 • Koki Wataoka, Tsubasa Takahashi, Ryokan Ri
To explore the causes, we hypothesize that LLMs may favor outputs that are more familiar to them, as indicated by lower perplexity.
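The perplexity referred to here is a standard quantity: the exponential of the average negative log-likelihood a model assigns to a sequence, so lower perplexity means the text is more familiar to the model. A minimal sketch of that formula (the per-token probabilities here are illustrative inputs, not outputs of any particular LLM):

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence given the model's probability for each token.

    perplexity = exp( -(1/N) * sum_i log p_i )
    Lower values indicate the sequence is more 'familiar' to the model.
    """
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n  # mean negative log-likelihood
    return math.exp(nll)

# A model assigning probability 0.5 to every token yields perplexity 2.0,
# while higher per-token probabilities yield lower (more familiar) perplexity.
print(perplexity([0.5, 0.5, 0.5, 0.5]))  # → 2.0
print(perplexity([0.9, 0.8, 0.9, 0.85]) < 2.0)  # → True
```

In practice the per-token probabilities would come from a causal language model's softmax outputs over the candidate text.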
11 Oct 2024 • Shojiro Yamabe, Tsubasa Takahashi, Futa Waseda, Koki Wataoka
As the cost of training large language models (LLMs) rises, protecting their intellectual property has become increasingly critical.
16 Oct 2023 • Keita Saito, Akifumi Wachi, Koki Wataoka, Youhei Akimoto
In recent years, Large Language Models (LLMs) have surged in prevalence, reshaping the landscape of natural language processing and machine learning.