Search Results for author: Jun Deguchi

Found 6 papers, 2 papers with code

AiSAQ: All-in-Storage ANNS with Product Quantization for DRAM-free Information Retrieval

no code implementations • 9 Apr 2024 • Kento Tatsuno, Daisuke Miyashita, Taiga Ikeda, Kiyoshi Ishiyama, Kazunari Sumiyoshi, Jun Deguchi

Although it claims to save memory by loading vectors compressed with product quantization (PQ), its memory usage still increases in proportion to the scale of the dataset.

Information Retrieval • Quantization • +1
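As a rough illustration of the memory trade-off this abstract describes, the sketch below shows how product quantization shrinks a float32 vector store into byte codes. It is a minimal sketch under assumed parameters, not AiSAQ's implementation; the function names here are hypothetical.

```python
# Minimal product-quantization sketch (illustrative only, not AiSAQ's code;
# function names and parameters are assumptions). Each D-dim float32 vector
# is split into M subvectors, and each subvector is replaced by the uint8
# index of its nearest codebook centroid, so storage drops from D * 4 bytes
# to M bytes per vector.
import numpy as np
from sklearn.cluster import KMeans

def pq_train(X, M=8, K=256):
    """Train one K-means codebook per subspace."""
    d = X.shape[1] // M  # subvector dimension
    return [
        KMeans(n_clusters=K, n_init=4, random_state=0)
        .fit(X[:, m * d:(m + 1) * d])
        .cluster_centers_
        for m in range(M)
    ]

def pq_encode(X, codebooks):
    """Replace each subvector with its nearest-centroid index."""
    M = len(codebooks)
    d = X.shape[1] // M
    codes = np.empty((X.shape[0], M), dtype=np.uint8)
    for m, C in enumerate(codebooks):
        sub = X[:, m * d:(m + 1) * d]
        # squared distance from every subvector to every centroid
        dists = ((sub[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        codes[:, m] = dists.argmin(axis=1)
    return codes

X = np.random.randn(5_000, 64).astype(np.float32)
codes = pq_encode(X, pq_train(X))
print(X.nbytes, "->", codes.nbytes)  # 1,280,000 -> 40,000 bytes
```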

RaLLe: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models

1 code implementation • 21 Aug 2023 • Yasuto Hoshi, Daisuke Miyashita, Youyang Ng, Kento Tatsuno, Yasuhiro Morioka, Osamu Torii, Jun Deguchi

Retrieval-augmented large language models (R-LLMs) combine pre-trained large language models (LLMs) with information retrieval systems to improve the accuracy of factual question-answering.

Information Retrieval • Question Answering • +1
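The retrieve-then-read pattern that R-LLMs implement can be sketched generically as below. This is an assumed skeleton, not RaLLe's actual API; the TF-IDF retriever stands in for a dense retriever, and `generate` is a hypothetical placeholder for an LLM call.

```python
# Generic retrieve-then-read sketch (assumed structure, not RaLLe's API).
# A TF-IDF retriever stands in for a dense retriever; `generate` is a
# hypothetical placeholder for any LLM completion call.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

passages = [
    "Tokyo is the capital of Japan.",
    "Product quantization compresses vectors into short codes.",
    "Mount Fuji is the highest mountain in Japan.",
]
vectorizer = TfidfVectorizer().fit(passages)
P = vectorizer.transform(passages)

def retrieve(question, k=2):
    """Return the k passages most similar to the question."""
    scores = (P @ vectorizer.transform([question]).T).toarray().ravel()
    return [passages[i] for i in np.argsort(scores)[::-1][:k]]

def generate(prompt):
    # Hypothetical stand-in; replace with a real LLM client call.
    return f"[answer conditioned on a {len(prompt)}-char prompt]"

question = "What is the capital of Japan?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"))
```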

Can a Frozen Pretrained Language Model be used for Zero-shot Neural Retrieval on Entity-centric Questions?

no code implementations • 9 Mar 2023 • Yasuto Hoshi, Daisuke Miyashita, Yasuhiro Morioka, Youyang Ng, Osamu Torii, Jun Deguchi

However, existing dense retrievers have been shown to generalize poorly, not only out of domain but even within domains such as Wikipedia, especially when a named entity in a question is the dominant clue for retrieval.

Domain Generalization • Language Modelling • +3
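The zero-shot setting studied here, retrieval with a frozen pretrained encoder and no fine-tuning, can be sketched as follows. The checkpoint and the mean-pooling strategy are assumptions for illustration, not the paper's exact method.

```python
# Zero-shot dense retrieval with a frozen encoder: a generic sketch of the
# setting studied in the paper, not its exact method. The checkpoint and
# mean pooling are assumptions for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"  # assumption: any frozen pretrained encoder
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name).eval()  # frozen: no fine-tuning

@torch.no_grad()
def embed(texts):
    """Mean-pool last hidden states into one unit vector per text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)   # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)  # masked mean over tokens
    return torch.nn.functional.normalize(pooled, dim=-1)

passages = ["Albert Einstein developed the theory of relativity.",
            "The Amazon is the largest rainforest on Earth."]
scores = embed(["Who developed relativity?"]) @ embed(passages).T
print(passages[scores.argmax()])  # cosine-similarity ranking
```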

Revisiting a kNN-based Image Classification System with High-capacity Storage

no code implementations • 3 Apr 2022 • Kengo Nakata, Youyang Ng, Daisuke Miyashita, Asuka Maki, Yu-Chieh Lin, Jun Deguchi

Moreover, users cannot verify the validity of inference results or evaluate the contribution of knowledge to the results.

Ranked #1 on Incremental Learning on ImageNet - 10 steps (using extra training data)

Classification • Continual Learning • +3
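A minimal sketch of the kNN-over-storage idea behind this entry: classify by majority vote over the nearest stored features and return the neighbor indices as inspectable evidence, which is what makes predictions verifiable. Random features stand in for a real backbone; this is not the paper's system.

```python
# Sketch of kNN classification over a high-capacity feature store
# (illustrative of the setting, not the paper's system). Random features
# stand in for a frozen backbone so the snippet is self-contained.
import numpy as np

rng = np.random.default_rng(0)
# "Knowledge store": one feature vector and label per training image.
store_feats = rng.normal(size=(1000, 128)).astype(np.float32)
store_labels = rng.integers(0, 10, size=1000)

def knn_predict(query, k=5):
    """Majority vote among the k nearest stored features (L2 distance)."""
    d = ((store_feats - query) ** 2).sum(axis=1)
    nn = np.argsort(d)[:k]  # indices of the k nearest neighbors
    votes = np.bincount(store_labels[nn], minlength=10)
    # Returning the neighbor indices makes each prediction inspectable.
    return votes.argmax(), nn

label, evidence = knn_predict(rng.normal(size=128).astype(np.float32))
print("prediction:", label, "supporting store entries:", evidence)
```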

Prune or quantize? Strategy for Pareto-optimally low-cost and accurate CNN

no code implementations • 25 Sep 2019 • Kengo Nakata, Daisuke Miyashita, Asuka Maki, Fumihiko Tachibana, Shinichi Sasaki, Jun Deguchi

These findings not only help improve the Pareto frontier for accuracy versus computational cost, but also give new insights into deep neural networks.

Quantization
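As a toy illustration of the prune-versus-quantize comparison, the sketch below applies magnitude pruning and uniform quantization to a single weight tensor and measures the resulting approximation error. The paper evaluates full CNNs; these helper functions are assumptions for illustration.

```python
# Toy prune-vs-quantize comparison on one weight tensor (illustrative only;
# the paper evaluates full CNNs, and these helpers are assumptions).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)

def prune(W, sparsity=0.9):
    """Zero out the smallest-magnitude weights (magnitude pruning)."""
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)

def quantize(W, bits=4):
    """Uniform symmetric quantization to a ~2**bits-level grid,
    then dequantize for error measurement."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

for name, W2 in [("prune 90%", prune(W)), ("quantize 4-bit", quantize(W))]:
    err = np.linalg.norm(W - W2) / np.linalg.norm(W)
    print(f"{name}: relative reconstruction error {err:.3f}")
```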
