Search Results for author: Bailin Li

Found 5 papers, 2 papers with code

DriveVLM: The Convergence of Autonomous Driving and Large Vision-Language Models

no code implementations · 19 Feb 2024 · Xiaoyu Tian, Junru Gu, Bailin Li, Yicheng Liu, Chenxu Hu, Yang Wang, Kun Zhan, Peng Jia, Xianpeng Lang, Hang Zhao

We introduce DriveVLM, an autonomous driving system leveraging Vision-Language Models (VLMs) for enhanced scene understanding and planning capabilities.

Autonomous Driving · Scene Understanding

Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding

1 code implementation · 14 Dec 2020 · Qingxing Cao, Bailin Li, Xiaodan Liang, Keze Wang, Liang Lin

Specifically, we generate the question-answer pair based on both the Visual Genome scene graph and an external knowledge base with controlled programs to disentangle the knowledge from other biases.

Question Answering · Visual Question Answering

Explainable High-order Visual Question Reasoning: A New Benchmark and Knowledge-routed Network

no code implementations · 23 Sep 2019 · Qingxing Cao, Bailin Li, Xiaodan Liang, Liang Lin

Explanation and high-order reasoning capabilities are crucial for real-world visual question answering with diverse levels of inference complexity (e.g., what is the dog that is near the girl playing with?).

Question Answering · Visual Question Answering

Interpretable Visual Question Answering by Reasoning on Dependency Trees

no code implementations · 6 Sep 2018 · Qingxing Cao, Bailin Li, Xiaodan Liang, Liang Lin

Collaborative reasoning for understanding image-question pairs is a critical but underexplored topic in interpretable visual question answering systems.

Question Answering
