no code implementations • 9 Apr 2024 • Kento Tatsuno, Daisuke Miyashita, Taiga Ikeda, Kiyoshi Ishiyama, Kazunari Sumiyoshi, Jun Deguchi
Although it claims to save memory by loading vectors compressed with product quantization (PQ), its memory usage still increases in proportion to the scale of the dataset.
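The snippet below is a minimal, illustrative sketch of product quantization (not this paper's implementation): each vector is split into sub-vectors and each sub-vector is replaced by the index of its nearest codebook centroid. Even though a vector shrinks to a few bytes, the code table still grows linearly with the number of indexed vectors, which is the memory-scaling issue noted above.

```python
import numpy as np

def pq_encode(vectors, codebooks):
    """Encode vectors (N, D) into uint8 codes (N, M) with per-subspace codebooks (M, 256, D//M)."""
    n, d = vectors.shape
    m, k, sub_dim = codebooks.shape
    codes = np.empty((n, m), dtype=np.uint8)
    for j in range(m):
        sub = vectors[:, j * sub_dim:(j + 1) * sub_dim]                        # (N, D//M)
        dists = ((sub[:, None, :] - codebooks[j][None, :, :]) ** 2).sum(-1)    # (N, 256)
        codes[:, j] = dists.argmin(axis=1)                                     # nearest centroid id
    return codes

rng = np.random.default_rng(0)
d, m = 128, 16
codebooks = rng.standard_normal((m, 256, d // m)).astype(np.float32)  # illustrative random codebooks
vectors = rng.standard_normal((10_000, d)).astype(np.float32)
codes = pq_encode(vectors, codebooks)

# Each vector shrinks from d*4 bytes to m bytes, but the code table remains O(N):
print(codes.nbytes, "bytes for 10k vectors ->", codes.nbytes * 1_000, "bytes for 10M vectors")
```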
1 code implementation • 21 Aug 2023 • Yasuto Hoshi, Daisuke Miyashita, Youyang Ng, Kento Tatsuno, Yasuhiro Morioka, Osamu Torii, Jun Deguchi
Retrieval-augmented large language models (R-LLMs) combine pre-trained large language models (LLMs) with information retrieval systems to improve the accuracy of factual question-answering.
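As a generic sketch of the retrieval-augmented QA pattern described above (not this paper's pipeline), the toy retriever and generator below are trivial stand-ins so the retrieve-then-generate flow is runnable end to end.

```python
from typing import List

# Toy passage store; a real system would query an information-retrieval index.
PASSAGES = [
    "Tokyo is the capital of Japan.",
    "Mount Fuji is the highest mountain in Japan.",
]

def retrieve(question: str, k: int = 2) -> List[str]:
    # Toy lexical retriever: rank passages by word overlap with the question.
    q = set(question.lower().split())
    return sorted(PASSAGES, key=lambda p: -len(q & set(p.lower().split())))[:k]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (e.g. a hosted completion API).
    return f"<LLM completion for a prompt of {len(prompt)} characters>"

def answer(question: str) -> str:
    # Retrieve supporting passages, prepend them to the prompt, then ask the LLM.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieve(question)))
    prompt = f"Answer using only the passages below.\n\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer("What is the capital of Japan?"))
```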
1 code implementation • 8 Aug 2023 • Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi
Large Language Model (LLM) based Generative AI systems have seen significant progress in recent years.
no code implementations • 9 Mar 2023 • Yasuto Hoshi, Daisuke Miyashita, Yasuhiro Morioka, Youyang Ng, Osamu Torii, Jun Deguchi
However, it has been shown that existing dense retrievers generalize poorly not only out of domain but even in domain (e.g., Wikipedia), especially when a named entity in a question is the dominant clue for retrieval.
Ranked #4 on Passage Retrieval on EntityQuestions
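For context on the setting in the entry above, the following is a minimal sketch of dense passage retrieval scoring with a dual encoder (query and passage embeddings compared by dot product). The encoder here is a fixed random projection purely for illustration, and the entity name "Acme Corp" is hypothetical; neither reflects the paper's trained models or data.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {"who": 0, "founded": 1, "acme": 2, "corp": 3, "history": 4, "of": 5}
PROJ = rng.standard_normal((len(VOCAB), 64)).astype(np.float32)  # stand-in for a trained encoder

def encode(text: str) -> np.ndarray:
    # Bag-of-words -> dense vector via a fixed random projection (illustrative only).
    ids = [VOCAB[w] for w in text.lower().split() if w in VOCAB]
    vec = PROJ[ids].mean(axis=0) if ids else np.zeros(64, dtype=np.float32)
    return vec / (np.linalg.norm(vec) + 1e-9)

passages = ["history of Acme Corp", "who founded Acme Corp"]
query = encode("who founded Acme Corp")

# Rank passages by dense similarity; with a weak encoder, an exact entity match
# in the question is not guaranteed to dominate the ranking.
scores = sorted(((float(query @ encode(p)), p) for p in passages), reverse=True)
print(scores)
```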
no code implementations • 3 Apr 2022 • Kengo Nakata, Youyang Ng, Daisuke Miyashita, Asuka Maki, Yu-Chieh Lin, Jun Deguchi
Moreover, users cannot verify the validity of inference results or evaluate the contribution of knowledge to the results.
Ranked #1 on Incremental Learning on ImageNet - 10 steps (using extra training data)
no code implementations • 25 Sep 2019 • Kengo Nakata, Daisuke Miyashita, Asuka Maki, Fumihiko Tachibana, Shinichi Sasaki, Jun Deguchi
These findings can be used not only to improve the Pareto frontier of accuracy vs. computational cost, but also provide new insights into deep neural networks.