Search Results for author: Yuxi Qian

Found 5 papers, 3 papers with code

A Language Agent for Autonomous Driving

1 code implementation · 17 Nov 2023 · Jiageng Mao, Junjie Ye, Yuxi Qian, Marco Pavone, Yue Wang

Our approach, termed Agent-Driver, transforms the traditional autonomous driving pipeline by introducing a versatile tool library accessible via function calls, a cognitive memory of common sense and experiential knowledge for decision-making, and a reasoning engine capable of chain-of-thought reasoning, task planning, motion planning, and self-reflection.

Tasks: Autonomous Driving, Common Sense Reasoning, +3 more
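
Below is a minimal Python sketch of the agent loop the abstract describes: a tool library invoked via function calls, a cognitive memory queried per scene, and an LLM-driven reasoning engine. All names here (ToolLibrary, CognitiveMemory, reasoning_engine, detect_objects) are hypothetical illustrations, not the authors' API.

```python
# Hypothetical sketch of the Agent-Driver loop; not the paper's actual code.

class ToolLibrary:
    """Exposes perception/prediction utilities via function calls."""
    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def call(self, name, **kwargs):
        return self._tools[name](**kwargs)


class CognitiveMemory:
    """Holds common-sense rules and past driving experiences."""
    def __init__(self, experiences):
        self.experiences = experiences

    def retrieve(self, scene):
        # Hypothetical similarity lookup: return experiences from matching scenes.
        return [e for e in self.experiences if e["scene"] == scene]


def reasoning_engine(llm, scene, tools, memory):
    """Chain-of-thought reasoning -> task plan -> motion plan -> self-reflection."""
    detections = tools.call("detect_objects", scene=scene)
    recalled = memory.retrieve(scene)
    prompt = (
        f"Scene: {scene}\nDetections: {detections}\nPast cases: {recalled}\n"
        "Think step by step, produce a task plan and a motion plan, "
        "then critique and revise the motion plan before answering."
    )
    return llm(prompt)  # llm is any text-completion callable
```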

GPT-Driver: Learning to Drive with GPT

1 code implementation · 2 Oct 2023 · Jiageng Mao, Yuxi Qian, Junjie Ye, Hang Zhao, Yue Wang

In this paper, we propose a novel approach to motion planning that capitalizes on the strong reasoning capabilities and generalization potential inherent to Large Language Models (LLMs).

Tasks: Autonomous Driving, Decision Making, +2 more
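
As a rough illustration of framing motion planning as language modeling, the sketch below prompts an LLM for future waypoints as text and parses them back into coordinates. The prompt wording and the `(x, y)` output format are assumptions made for illustration, not the paper's exact scheme.

```python
# Hypothetical sketch of LLM-based waypoint planning; formats are assumed.
import re

def plan_with_llm(llm, scene_description, horizon=6):
    prompt = (
        f"{scene_description}\n"
        f"Output the next {horizon} waypoints of the ego vehicle as lines "
        "of the form (x, y), one per line."
    )
    text = llm(prompt)  # llm is any text-completion callable
    # Parse "(x, y)" pairs from the model's free-form answer.
    pairs = re.findall(r"\((-?\d+\.?\d*),\s*(-?\d+\.?\d*)\)", text)
    return [(float(x), float(y)) for x, y in pairs][:horizon]
```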

Question-Driven Graph Fusion Network For Visual Question Answering

no code implementations · 3 Apr 2022 · Yuxi Qian, Yuncong Hu, Ruonan Wang, Fangxiang Feng, Xiaojie Wang

It first models semantic, spatial, and implicit visual relations in images with three graph attention networks. Question information then guides the aggregation process of the three graphs. Further, QD-GFN adopts an object filtering mechanism to remove question-irrelevant objects from the image.

Tasks: Graph Attention, Object, +4 more
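
A minimal PyTorch sketch of the flow described above: three attention encoders stand in for the relation graphs, the question gates their fusion, and a relevance gate downweights question-irrelevant objects. Layer shapes and the gating scheme are illustrative assumptions; a faithful graph attention network would also restrict attention with per-graph adjacency masks.

```python
# Hypothetical sketch of the QD-GFN flow; not the paper's architecture.
import torch
import torch.nn as nn

class QDGFNSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # One attention encoder per relation graph (semantic/spatial/implicit).
        self.graph_encoders = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            for _ in range(3)
        )
        self.fuse_gate = nn.Linear(dim, 3)        # question -> per-graph weights
        self.filter_gate = nn.Linear(2 * dim, 1)  # question-object relevance

    def forward(self, objects, question):
        # objects: (B, N, dim) region features; question: (B, dim) embedding.
        encoded = [enc(objects, objects, objects)[0] for enc in self.graph_encoders]
        weights = torch.softmax(self.fuse_gate(question), dim=-1)  # (B, 3)
        fused = sum(w.unsqueeze(-1).unsqueeze(-1) * e
                    for w, e in zip(weights.unbind(-1), encoded))
        # Object filtering: downweight objects with low relevance to the question.
        q = question.unsqueeze(1).expand_as(fused)
        relevance = torch.sigmoid(self.filter_gate(torch.cat([fused, q], -1)))
        return fused * relevance
```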

Co-VQA : Answering by Interactive Sub Question Sequence

no code implementations · Findings (ACL) 2022 · Ruonan Wang, Yuxi Qian, Fangxiang Feng, Xiaojie Wang, Huixing Jiang

Most existing approaches to Visual Question Answering (VQA) answer questions directly; however, people usually decompose a complex question into a sequence of simple sub-questions and obtain the answer to the original question only after answering the sub-question sequence (SQS).

Tasks: Question Answering, Visual Question Answering, +1 more
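
The sub-question sequence (SQS) idea can be sketched as a simple loop, shown below. Here `decompose` and `answer` are assumed callables (e.g., model wrappers); their names and signatures are illustrative, not the paper's interface.

```python
# Hypothetical sketch of answering via a sub-question sequence (SQS).

def answer_via_sqs(decompose, answer, image, question):
    """Decompose a complex question, answer its sub-questions in order, then
    answer the original question conditioned on the accumulated QA history."""
    sub_questions = decompose(question)      # e.g., ["What is left of X?", ...]
    history = []
    for sq in sub_questions:
        a = answer(image, sq, history)       # each answer sees prior QA pairs
        history.append((sq, a))
    return answer(image, question, history)  # final answer uses the full SQS
```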
