2 code implementations • 11 Apr 2024 • Yuichi Inoue, Kento Sasaki, Yuma Ochi, Kazuki Fujii, Kotaro Tanahashi, Yu Yamaguchi
Vision Language Models (VLMs) have evolved rapidly, driving significant advances in multimodal understanding tasks.
1 code implementation • 11 Dec 2023 • Yuichi Inoue, Yuki Yada, Kotaro Tanahashi, Yu Yamaguchi
Visual Question Answering (VQA) is one of the most important tasks in autonomous driving, requiring accurate recognition and complex situational evaluation.
no code implementations • 11 Dec 2023 • Kotaro Tanahashi, Yuichi Inoue, Yu Yamaguchi, Hidetatsu Yaginuma, Daiki Shiotsuka, Hiroyuki Shimatani, Kohei Iwamasa, Yoshiaki Inoue, Takafumi Yamaguchi, Koki Igari, Tsukasa Horinouchi, Kento Tokuhiro, Yugo Tokuchi, Shunsuke Aoki
Various methods have been proposed for utilizing Large Language Models (LLMs) in autonomous driving.
no code implementations • 14 May 2023 • Shunsuke Aoki, Issei Yamamoto, Daiki Shiotsuka, Yuichi Inoue, Kento Tokuhiro, Keita Miwa
Fully autonomous driving has been widely studied and is becoming increasingly feasible.