no code implementations • COLING 2022 • Ran Ji, Jianmin Ji
Semantic parsing is the task of mapping a natural language sentence into a target formal representation, where various sophisticated sequence-to-sequence (seq2seq) models have been applied with promising results.
no code implementations • 21 Sep 2024 • Guoliang You, Xiaomeng Chu, Yifan Duan, Xingchen Li, Sha Zhang, Jianmin Ji, Yanyong Zhang
For performance, the lane-level cross-modal query integration and feature enhancement module uses confidence scores from the ROI to combine low-confidence image queries with LiDAR queries, extracting complementary depth features.
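As a rough illustration of confidence-gated query fusion, the sketch below blends image-branch queries with LiDAR-branch queries when image confidence is low; the function name, shapes, and threshold are assumptions for illustration, not the paper's released code.

```python
# Hypothetical sketch of confidence-gated query fusion (names, shapes, and the
# threshold are assumed, not taken from the paper's implementation).
import torch


def fuse_lane_queries(img_queries, lidar_queries, img_conf, conf_thresh=0.5):
    """img_queries:   (N, C) image-branch lane queries
    lidar_queries: (N, C) LiDAR-branch lane queries aligned to the same lanes
    img_conf:      (N,)   per-query confidence scores from the image ROI head
    """
    low_conf = (img_conf < conf_thresh).unsqueeze(-1)       # (N, 1) mask
    weight = img_conf.clamp(0.0, 1.0).unsqueeze(-1)         # confidence as a blend weight
    blended = weight * img_queries + (1.0 - weight) * lidar_queries
    # Keep high-confidence image queries untouched; blend in LiDAR depth cues elsewhere.
    return torch.where(low_conf, blended, img_queries)


if __name__ == "__main__":
    q_img, q_lidar, conf = torch.randn(8, 256), torch.randn(8, 256), torch.rand(8)
    print(fuse_lane_queries(q_img, q_lidar, conf).shape)    # torch.Size([8, 256])
```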
no code implementations • 16 Jul 2024 • Guoliang You, Xiaomeng Chu, Yifan Duan, Wenyu Zhang, Xingchen Li, Sha Zhang, Yao Li, Jianmin Ji, Yanyong Zhang
In this work, we endeavor to integrate the perception of these elements into the planning task.
no code implementations • 2 Jul 2024 • Wenhao Yu, Jie Peng, Huanyu Yang, JunRui Zhang, Yifan Duan, Jianmin Ji, Yanyong Zhang
The complex conditional distribution in local navigation requires training data that covers diverse policies across diverse real-world scenarios; (2) Myopic Observation.
no code implementations • 5 Jun 2024 • Xinrui Lin, Yangfan Wu, Huanyu Yang, Yu Zhang, Yanyong Zhang, Jianmin Ji
This plan is then refined by an ASP program with a robot's action knowledge, which integrates implementation details into the skeleton, grounding the LLM's abstract outputs in practical robot contexts.
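A minimal sketch of this refinement loop, assuming the clingo Python API: an LLM-produced plan skeleton, expressed as ASP facts, is checked against simple action rules. The predicates and rules below are illustrative placeholders, not the paper's action knowledge.

```python
# Minimal sketch: check an LLM-produced plan skeleton against simple ASP action
# knowledge using clingo (predicate names and rules are illustrative only).
import clingo

ACTION_KNOWLEDGE = """
% an object can only be placed after it has been picked up
:- occurs(place(O), T), not picked(O, T).
picked(O, T) :- occurs(pick(O), T1), T1 < T, step(T), step(T1).
step(1..3).
"""

# Plan skeleton as it might come back from an LLM, encoded as ASP facts.
PLAN_SKELETON = """
occurs(pick(cup), 1).
occurs(place(cup), 2).
"""


def check_plan():
    ctl = clingo.Control()
    ctl.add("base", [], ACTION_KNOWLEDGE + PLAN_SKELETON)
    ctl.ground([("base", [])])
    result = ctl.solve(on_model=lambda m: print("consistent plan:", m))
    print("satisfiable:", result.satisfiable)


if __name__ == "__main__":
    check_plan()
```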
1 code implementation • 22 May 2024 • Ruolin Wang, Yuejiao Xu, Jianmin Ji
Formal representations of traffic scenarios can be used to generate test cases for the safety verification of autonomous driving.
no code implementations • 5 Apr 2024 • Chenyang Wu, Yifan Duan, Xinran Zhang, Yu Sheng, Jianmin Ji, Yanyong Zhang
In this work, we present MM-Gaussian, a LiDAR-camera multi-modal fusion system for localization and mapping in unbounded scenes.
no code implementations • 4 Apr 2024 • Beibei Wang, Shuang Meng, Lu Zhang, Chenjie Wang, Jingjing Huang, Yao Li, Haojie Ren, Yuxuan Xiao, Yuru Peng, Jianmin Ji, Yu Zhang, Yanyong Zhang
Numerous roadside perception datasets have been introduced to propel advancements in autonomous driving and intelligent transportation systems research and development.
no code implementations • 26 Nov 2023 • Yuxuan Xiao, Yao Li, Chengzhen Meng, Xingchen Li, Jianmin Ji, Yanyong Zhang
The fusion of LiDARs and cameras has been increasingly adopted in autonomous driving for perception tasks.
1 code implementation • 25 Oct 2023 • Xingchen Li, Yifan Duan, Beibei Wang, Haojie Ren, Guoliang You, Yu Sheng, Jianmin Ji, Yanyong Zhang
Edge features, which are prevalent in various environments, are aligned between images and point clouds to determine the extrinsic parameters.
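As a generic illustration of edge-based alignment (not the paper's exact pipeline), the sketch below scores a candidate LiDAR-to-camera extrinsic by projecting point-cloud edge points onto a Canny edge map and measuring how many land on image edges.

```python
# Sketch: score a candidate LiDAR-to-camera extrinsic by checking how many
# projected point-cloud edge points fall on image edges (generic formulation).
import cv2
import numpy as np


def edge_alignment_score(edge_points_lidar, T_cam_lidar, K, image_gray):
    """edge_points_lidar: (N, 3) LiDAR edge points; T_cam_lidar: 4x4 extrinsic;
    K: 3x3 intrinsics; image_gray: HxW uint8 image."""
    edges = cv2.Canny(image_gray, 50, 150)                     # binary edge map
    pts_h = np.hstack([edge_points_lidar, np.ones((len(edge_points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                 # into camera frame
    in_front = pts_cam[:, 2] > 0.1
    uv = (K @ pts_cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)                  # pinhole projection
    h, w = edges.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[valid]
    if len(uv) == 0:
        return 0.0
    # Fraction of projected LiDAR edge points that hit an image edge pixel.
    return float(edges[uv[:, 1], uv[:, 0]].astype(bool).mean())
```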
no code implementations • 20 Oct 2023 • Wenhao Yu, Jie Peng, Quecheng Qiu, Hanyu Wang, Lu Zhang, Jianmin Ji
However, two roadblocks arise for training a DRL policy that outputs paths: (1) The action space for potential paths often involves higher dimensions compared to low-level commands, which increases the difficulty of training; (2) It takes multiple time steps to track a path instead of a single time step, which requires the path to predict the interactions of the robot w.r.t.
1 code implementation • CVPR 2023 • Yingjie Wang, Jiajun Deng, Yao Li, Jinshui Hu, Cong Liu, Yu Zhang, Jianmin Ji, Wanli Ouyang, Yanyong Zhang
LiDAR and Radar are two complementary sensing approaches in that LiDAR specializes in capturing an object's 3D shape while Radar provides longer detection ranges as well as velocity hints.
no code implementations • 4 Apr 2023 • ZiMing Wang, Yujiang Liu, Yifan Duan, Xingchen Li, Xinran Zhang, Jianmin Ji, Erbao Dong, Yanyong Zhang
In this paper, we present the USTC FLICAR Dataset, which is dedicated to the development of simultaneous localization and mapping and precise 3D reconstruction of the workspace for heavy-duty autonomous aerial work robots.
no code implementations • 22 Mar 2023 • Guoliang You, Xiaomeng Chu, Yifan Duan, Jie Peng, Jianmin Ji, Yu Zhang, Yanyong Zhang
In particular, we specify a prompt-transformer for representation conversion and propose a two-step training process to train the prompt-transformer for the target environment, while the rest of the DRL pipeline remains unchanged.
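A small sketch of the general idea, assuming a PyTorch setup: a "prompt-transformer"-style adapter maps target-environment observations into the representation a frozen, pre-trained DRL policy expects, and only the adapter is trained in the first step. Module names and sizes are placeholders.

```python
# Hypothetical sketch: train a small adapter while the pre-trained DRL policy
# stays frozen. Sizes, names, and the objective are illustrative assumptions.
import torch
import torch.nn as nn


class PromptAdapter(nn.Module):
    def __init__(self, target_dim=64, source_dim=128, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(target_dim, source_dim)
        self.attn = nn.TransformerEncoderLayer(
            d_model=source_dim, nhead=n_heads, batch_first=True
        )

    def forward(self, obs_target):                 # (B, T, target_dim)
        x = self.proj(obs_target)                  # map into the source feature size
        return self.attn(x)                        # (B, T, source_dim)


# Two-step idea in miniature: (1) train the adapter with the policy frozen,
# (2) optionally fine-tune end to end. Only step (1) is shown.
policy = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
for p in policy.parameters():
    p.requires_grad_(False)

adapter = PromptAdapter()
opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)
obs = torch.randn(8, 10, 64)
actions = policy(adapter(obs))                     # gradients flow only into the adapter
loss = actions.pow(2).mean()                       # placeholder objective
loss.backward()
opt.step()
```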
no code implementations • 22 Mar 2023 • Yuan Chen, Quecheng Qiu, Xiangyu Liu, Guangda Chen, Shunyi Yao, Jie Peng, Jianmin Ji, Yanyong Zhang
The planner learns to assign different importance to the geometric features and encourages the robot to navigate through areas that are helpful for laser localization.
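One crude way to convey this intuition is reward shaping that favors geometrically informative regions; the heuristic below counts sharp range discontinuities in the laser scan as a localizability proxy and is an assumption for illustration, not the paper's learned importance weights.

```python
# Sketch: reward the agent for staying where the laser scan is geometrically
# informative (many edge/corner returns). The scoring heuristic is assumed.
import numpy as np


def localizability_bonus(scan_ranges, scale=0.1):
    """scan_ranges: 1D array of laser ranges for one scan."""
    finite = scan_ranges[np.isfinite(scan_ranges)]
    if len(finite) < 2:
        return 0.0
    # Sharp range discontinuities roughly indicate corners/edges that help
    # scan matching; count them as a crude localizability proxy.
    jumps = np.abs(np.diff(finite)) > 0.5
    return scale * float(jumps.sum())


def shaped_reward(base_reward, scan_ranges):
    return base_reward + localizability_bonus(scan_ranges)
```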
no code implementations • 4 Feb 2023 • Haojie Ren, Sha Zhang, Sugang Li, Yao Li, Xinchen Li, Jianmin Ji, Yu Zhang, Yanyong Zhang
In this paper, we propose TrajMatch -- the first system that can automatically calibrate for roadside LiDARs in both time and space.
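As a generic illustration of trajectory-based spatio-temporal calibration (not TrajMatch itself), the sketch below brute-forces a frame offset between two LiDARs' object trajectories and fits a rigid transform with the Kabsch algorithm.

```python
# Sketch: search a time offset, then fit a rigid transform between time-aligned
# trajectories from two roadside LiDARs. Generic formulation, names assumed.
import numpy as np


def kabsch(P, Q):
    """Best-fit rotation/translation mapping P (N,3) onto Q (N,3)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, cQ - R @ cP


def calibrate(traj_a, traj_b, max_shift=20):
    """traj_a, traj_b: (T,3) per-frame object positions seen by the two LiDARs."""
    best = (None, None, None, np.inf)
    for shift in range(-max_shift, max_shift + 1):      # temporal search
        if shift >= 0:
            A, B = traj_a[shift:], traj_b[:len(traj_b) - shift]
        else:
            A, B = traj_a[:shift], traj_b[-shift:]
        n = min(len(A), len(B))
        if n < 3:
            continue
        R, t = kabsch(A[:n], B[:n])
        err = np.linalg.norm((A[:n] @ R.T + t) - B[:n], axis=1).mean()
        if err < best[3]:
            best = (shift, R, t, err)
    return best  # (frame offset, rotation, translation, residual)
```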
no code implementations • 13 Jan 2023 • Xiaomeng Chu, Jiajun Deng, Yuan Zhao, Jianmin Ji, Yu Zhang, Houqiang Li, Yanyong Zhang
To this end, we propose OA-BEV, a network that can be plugged into the BEV-based 3D object detection framework to bring out the objects by incorporating object-aware pseudo-3D features and depth features.
1 code implementation • 7 Nov 2022 • Yi Zhai, Yu Zhang, Shuo Liu, Xiaomeng Chu, Jie Peng, Jianmin Ji, Yanyong Zhang
Instead of extracting features from the tensor program itself, TLP extracts features from the schedule primitives.
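A toy sketch of that feature idea: encode the sequence of schedule primitives as tokens for a learned cost model. The vocabulary and encoding below are illustrative, not TLP's actual feature set.

```python
# Sketch: embed a sequence of schedule primitives (split, reorder, vectorize, ...)
# instead of the lowered tensor program. Vocabulary and encoding are assumed.
import numpy as np

PRIMITIVE_VOCAB = {"split": 0, "reorder": 1, "vectorize": 2,
                   "parallel": 3, "unroll": 4, "cache_read": 5}


def encode_schedule(primitives, max_len=16):
    """primitives: list of (name, numeric_arg) pairs describing one schedule."""
    feats = np.zeros((max_len, len(PRIMITIVE_VOCAB) + 1), dtype=np.float32)
    for i, (name, arg) in enumerate(primitives[:max_len]):
        feats[i, PRIMITIVE_VOCAB[name]] = 1.0        # one-hot primitive type
        feats[i, -1] = np.log1p(arg)                 # log-scaled numeric argument
    return feats                                     # (max_len, vocab+1) for a sequence model


example = [("split", 32), ("reorder", 0), ("vectorize", 8), ("parallel", 4)]
print(encode_schedule(example).shape)                # (16, 7)
```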
no code implementations • 25 Dec 2021 • Defeng Xie, Jianmin Ji, Jiafei Xu, Ran Ji
The dependency tree of a natural language sentence can capture the interactions between semantics and words.
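For a quick look at what such a tree contains, the snippet below prints a dependency parse with spaCy; spaCy is an example tool choice here, not necessarily the parser used in the paper.

```python
# Small illustration of a dependency tree with spaCy (example tool choice).
# Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Show me flights from Boston to Denver")
for token in doc:
    # each token points to its syntactic head with a labeled relation
    print(f"{token.text:>8} --{token.dep_}--> {token.head.text}")
```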
no code implementations • 29 Nov 2021 • Hanqi Zhu, Jiajun Deng, Yu Zhang, Jianmin Ji, Qiuyu Mao, Houqiang Li, Yanyong Zhang
However, this approach often suffers from the mismatch between the resolution of point clouds and RGB images, leading to sub-optimal performance.
1 code implementation • 13 Aug 2021 • Yu'an Chen, Ruosong Ye, Ziyang Tao, Hongjian Liu, Guangda Chen, Jie Peng, Jun Ma, Yu Zhang, Jianmin Ji, Yanyong Zhang
Deep reinforcement learning (DRL) algorithms have proven effective in robot navigation, especially in unknown environments, by directly mapping perception inputs into robot control commands.
1 code implementation • 6 Jul 2021 • Xiaomeng Chu, Jiajun Deng, Yao Li, Zhenxun Yuan, Yanyong Zhang, Jianmin Ji, Yu Zhang
As cameras are increasingly deployed in new application domains such as autonomous driving, performing 3D object detection on monocular images becomes an important task for visual scene understanding.
no code implementations • 24 Jun 2021 • Yingjie Wang, Qiuyu Mao, Hanqi Zhu, Jiajun Deng, Yu Zhang, Jianmin Ji, Houqiang Li, Yanyong Zhang
In this survey, we first introduce the background of popular sensors used for self-driving, their data properties, and the corresponding object detection algorithms.
no code implementations • 27 May 2021 • Peng Yin, Lingyun Xu, Jianmin Ji, Sebastian Scherer, Howie Choset
One of the main obstacles to 3D semantic segmentation is the significant effort required to generate expensive point-wise annotations for fully supervised training.
1 code implementation • 14 Jan 2021 • Fengxiang He, Shiye Lei, Jianmin Ji, DaCheng Tao
We then define an activation hash phase chart to represent the space spanned by model size, training time, training sample size, and the encoding properties, which is divided into three canonical regions: the under-expressive regime, the critically-expressive regime, and the sufficiently-expressive regime.
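One simple reading of an "activation hash" is a code over which ReLU units fire for an input; the sketch below computes such a hash for intuition only and may not match the paper's exact definition.

```python
# Illustrative sketch: hash the binary pattern of active ReLU units for an input.
# This is one simple interpretation, not necessarily the paper's definition.
import hashlib
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())


def activation_hash(x):
    pattern, h = [], x
    for layer in net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            pattern.append((h > 0).int().flatten())   # which units are active
    bits = torch.cat(pattern).tolist()
    return hashlib.sha1(bytes(bits)).hexdigest()[:12]


x1, x2 = torch.randn(1, 10), torch.randn(1, 10)
# Distinct hashes mean the two inputs fall into different linear regions of the net.
print(activation_hash(x1), activation_hash(x2))
```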
no code implementations • 2 Nov 2020 • Nan Lin, YuXuan Li, Yujun Zhu, Ruolin Wang, Xiayu Zhang, Jianmin Ji, Keke Tang, Xiaoping Chen, Xinming Zhang
Our meta policy tries to manipulate the next optimal state, and the actual action is produced by the inverse dynamics model.
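A minimal sketch of that decomposition: a meta policy proposes the next desired state, and an inverse dynamics model turns (current state, desired next state) into an executable action. The network sizes below are arbitrary placeholders.

```python
# Sketch of the meta-policy / inverse-dynamics decomposition; sizes are placeholders.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 4

meta_policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                            nn.Linear(64, STATE_DIM))           # s_t -> desired s_{t+1}
inverse_dynamics = nn.Sequential(nn.Linear(2 * STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM))     # (s_t, s_{t+1}) -> a_t


def act(state):
    desired_next = meta_policy(state)
    return inverse_dynamics(torch.cat([state, desired_next], dim=-1))


s = torch.randn(1, STATE_DIM)
print(act(s).shape)   # torch.Size([1, 4])
```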
no code implementations • 26 Feb 2019 • Peng Yin, Lingyun Xu, Xueqian Li, Chen Yin, Yingli Li, Rangaprasad Arun Srivatsan, Lu Li, Jianmin Ji, Yuqing He
Visual Place Recognition (VPR) is an important component in both computer vision and robotics applications, thanks to its ability to determine whether a place has been visited and, if so, where.
no code implementations • 26 Feb 2019 • Peng Yin, Rangaprasad Arun Srivatsan, Yin Chen, Xueqian Li, Hongda Zhang, Lingyun Xu, Lu Li, Zhenzhong Jia, Jianmin Ji, Yuqing He
We propose MRS-VPR, a multi-resolution, sampling-based place recognition method, which can significantly improve the matching efficiency and accuracy in sequential matching.
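As a rough illustration of coarse-to-fine sequence matching (not MRS-VPR's actual multi-resolution scheme), the sketch below matches a short query sequence against a long reference at a coarse stride, then refines around the best coarse hit; the descriptors are random placeholders.

```python
# Sketch: coarse-to-fine sequence matching over place descriptors (placeholders).
import numpy as np


def seq_score(query, ref_window):
    # cosine similarity averaged over the aligned sequence
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    r = ref_window / np.linalg.norm(ref_window, axis=1, keepdims=True)
    return float((q * r).sum(axis=1).mean())


def coarse_to_fine_match(query, reference, coarse_stride=10):
    L = len(query)
    coarse = [(i, seq_score(query, reference[i:i + L]))
              for i in range(0, len(reference) - L, coarse_stride)]
    best_i = max(coarse, key=lambda t: t[1])[0]
    lo = max(0, best_i - coarse_stride)
    hi = min(len(reference) - L, best_i + coarse_stride)
    fine = [(i, seq_score(query, reference[i:i + L])) for i in range(lo, hi + 1)]
    return max(fine, key=lambda t: t[1])             # (best start index, score)


reference = np.random.randn(500, 128)                # 500 reference descriptors
query = reference[237:247] + 0.05 * np.random.randn(10, 128)
print(coarse_to_fine_match(query, reference))        # best index should be near 237
```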
no code implementations • 28 Aug 2018 • Shi Yin, Yi Zhou, Chenguang Li, Shangfei Wang, Jianmin Ji, Xiaoping Chen, Ruili Wang
We propose KDSL, a new word sense disambiguation (WSD) framework that utilizes knowledge to automatically generate sense-labeled data for supervised learning.
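To illustrate the general idea of generating weakly sense-labeled text from a knowledge base (not KDSL's actual pipeline), the snippet below pairs each WordNet sense of a word with its gloss and example sentences via NLTK.

```python
# Sketch: weakly sense-labeled examples from WordNet glosses and examples.
# Requires: pip install nltk; then nltk.download("wordnet")
from nltk.corpus import wordnet as wn


def generate_sense_labeled(word):
    data = []
    for synset in wn.synsets(word):
        texts = [synset.definition()] + synset.examples()
        for text in texts:
            data.append({"word": word, "sense": synset.name(), "text": text})
    return data


for row in generate_sense_labeled("bank")[:5]:
    print(row["sense"], "->", row["text"])
```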
no code implementations • 6 Jul 2017 • Jianmin Ji, Fangfang Liu, Jia-Huai You
In this paper, we address this problem by formulating the notion of unfounded sets for nondisjunctive hybrid MKNF knowledge bases, based on which we propose and study two new well-founded operators.
no code implementations • 5 May 2014 • Jianmin Ji, Hannes Strass
The logic of knowledge and justified assumptions, also known as logic of grounded knowledge (GK), was proposed by Lin and Shoham as a general logic for nonmonotonic reasoning.