Search Results for author: Seiji Gobara

Found 2 papers, 1 paper with code

Do LLMs Implicitly Determine the Suitable Text Difficulty for Users?

1 code implementation • 22 Feb 2024 • Seiji Gobara, Hidetaka Kamigaito, Taro Watanabe

Experimental results on the Stack-Overflow dataset and the TSCC dataset, including multi-turn conversations, show that LLMs can implicitly match the text difficulty of their generated responses to that of the user's input.

Question Answering

Evaluating Image Review Ability of Vision Language Models

no code implementations • 19 Feb 2024 • Shigeki Saito, Kazuki Hayashi, Yusuke Ide, Yusuke Sakai, Kazuma Onishi, Toma Suzuki, Seiji Gobara, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe

Large-scale vision language models (LVLMs) are language models capable of processing both image and text inputs within a single model.

Image Captioning
