Search Results for author: Jianmin Ji

Found 31 papers, 7 papers with code

Transferring Knowledge from Structure-aware Self-attention Language Model to Sequence-to-Sequence Semantic Parsing

no code implementations COLING 2022 Ran Ji, Jianmin Ji

Semantic parsing considers the task of mapping a natural language sentence into a target formal representation, where various sophisticated sequence-to-sequence (seq2seq) models have been applied with promising results.

Code Generation Knowledge Distillation +3

LFP: Efficient and Accurate End-to-End Lane-Level Planning via Camera-LiDAR Fusion

no code implementations21 Sep 2024 Guoliang You, Xiaomeng Chu, Yifan Duan, Xingchen Li, Sha Zhang, Jianmin Ji, Yanyong Zhang

For performance, the lane-level cross-modal query integration and feature enhancement module uses confidence scores from the ROI to combine low-confidence image queries with LiDAR queries, extracting complementary depth features.

Autonomous Driving Sensor Fusion

LDP: A Local Diffusion Planner for Efficient Robot Navigation and Collision Avoidance

no code implementations2 Jul 2024 Wenhao Yu, Jie Peng, Huanyu Yang, JunRui Zhang, Yifan Duan, Jianmin Ji, Yanyong Zhang

The complex conditional distribution in local navigation requires training data that covers diverse policies in diverse real-world scenarios; (2) Myopic Observation.

Collision Avoidance Robot Navigation

CLMASP: Coupling Large Language Models with Answer Set Programming for Robotic Task Planning

no code implementations5 Jun 2024 Xinrui Lin, Yangfan Wu, Huanyu Yang, Yu Zhang, Yanyong Zhang, Jianmin Ji

This plan is then refined by an ASP program with a robot's action knowledge, which integrates implementation details into the skeleton, grounding the LLM's abstract outputs in practical robot contexts.

Traffic Scenario Logic: A Spatial-Temporal Logic for Modeling and Reasoning of Urban Traffic Scenarios

1 code implementation22 May 2024 Ruolin Wang, Yuejiao Xu, Jianmin Ji

Formal representations of traffic scenarios can be used to generate test cases for the safety verification of autonomous driving.

Autonomous Driving Decision Making +1

MM-Gaussian: 3D Gaussian-based Multi-modal Fusion for Localization and Reconstruction in Unbounded Scenes

no code implementations5 Apr 2024 Chenyang Wu, Yifan Duan, Xinran Zhang, Yu Sheng, Jianmin Ji, Yanyong Zhang

In this work, we present MM-Gaussian, a LiDAR-camera multi-modal fusion system for localization and mapping in unbounded scenes.

Autonomous Vehicles

CORP: A Multi-Modal Dataset for Campus-Oriented Roadside Perception Tasks

no code implementations4 Apr 2024 Beibei Wang, Shuang Meng, Lu Zhang, Chenjie Wang, Jingjing Huang, Yao Li, Haojie Ren, Yuxuan Xiao, Yuru Peng, Jianmin Ji, Yu Zhang, Yanyong Zhang

Numerous roadside perception datasets have been introduced to propel advancements in autonomous driving and intelligent transportation systems research and development.

Autonomous Driving Instance Segmentation +1

EdgeCalib: Multi-Frame Weighted Edge Features for Automatic Targetless LiDAR-Camera Calibration

1 code implementation25 Oct 2023 Xingchen Li, Yifan Duan, Beibei Wang, Haojie Ren, Guoliang You, Yu Sheng, Jianmin Ji, Yanyong Zhang

The edge features, which are prevalent in various environments, are aligned in both images and point clouds to determine the extrinsic parameters.

Camera Calibration

PathRL: An End-to-End Path Generation Method for Collision Avoidance via Deep Reinforcement Learning

no code implementations20 Oct 2023 Wenhao Yu, Jie Peng, Quecheng Qiu, Hanyu Wang, Lu Zhang, Jianmin Ji

However, two roadblocks arise for training a DRL policy that outputs paths: (1) The action space for potential paths often involves higher dimensions compared to low-level commands, which increases the difficulty of training; (2) It takes multiple time steps to track a path instead of a single time step, which requires the path to predict the interactions of the robot w.r.t.

Collision Avoidance Robot Navigation

Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection

1 code implementation CVPR 2023 Yingjie Wang, Jiajun Deng, Yao Li, Jinshui Hu, Cong Liu, Yu Zhang, Jianmin Ji, Wanli Ouyang, Yanyong Zhang

LiDAR and Radar are two complementary sensing approaches in that LiDAR specializes in capturing an object's 3D shape while Radar provides longer detection ranges as well as velocity hints.

object-detection Object Detection

USTC FLICAR: A Sensors Fusion Dataset of LiDAR-Inertial-Camera for Heavy-duty Autonomous Aerial Work Robots

no code implementations4 Apr 2023 ZiMing Wang, Yujiang Liu, Yifan Duan, Xingchen Li, Xinran Zhang, Jianmin Ji, Erbao Dong, Yanyong Zhang

In this paper, we present the USTC FLICAR Dataset, which is dedicated to the development of simultaneous localization and mapping and precise 3D reconstruction of the workspace for heavy-duty autonomous aerial work robots.

3D Reconstruction Autonomous Driving +2

$P^{3}O$: Transferring Visual Representations for Reinforcement Learning via Prompting

no code implementations22 Mar 2023 Guoliang You, Xiaomeng Chu, Yifan Duan, Jie Peng, Jianmin Ji, Yu Zhang, Yanyong Zhang

In particular, we specify a prompt-transformer for representation conversion and propose a two-step training process to train the prompt-transformer for the target environment, while the rest of the DRL pipeline remains unchanged.

reinforcement-learning

Deep Reinforcement Learning for Localizability-Enhanced Navigation in Dynamic Human Environments

no code implementations22 Mar 2023 Yuan Chen, Quecheng Qiu, Xiangyu Liu, Guangda Chen, Shunyi Yao, Jie Peng, Jianmin Ji, Yanyong Zhang

The planner learns to assign different importance to the geometric features and encourages the robot to navigate through areas that are helpful for laser localization.

Navigate reinforcement-learning

TrajMatch: Towards Automatic Spatio-temporal Calibration for Roadside LiDARs through Trajectory Matching

no code implementations4 Feb 2023 Haojie Ren, Sha Zhang, Sugang Li, Yao Li, Xinchen Li, Jianmin Ji, Yu Zhang, Yanyong Zhang

In this paper, we propose TrajMatch -- the first system that can automatically calibrate for roadside LiDARs in both time and space.

OA-BEV: Bringing Object Awareness to Bird's-Eye-View Representation for Multi-Camera 3D Object Detection

no code implementations13 Jan 2023 Xiaomeng Chu, Jiajun Deng, Yuan Zhao, Jianmin Ji, Yu Zhang, Houqiang Li, Yanyong Zhang

To this end, we propose OA-BEV, a network that can be plugged into the BEV-based 3D object detection framework to bring out the objects by incorporating object-aware pseudo-3D features and depth features.

3D Object Detection Object +1

TLP: A Deep Learning-based Cost Model for Tensor Program Tuning

1 code implementation7 Nov 2022 Yi Zhai, Yu Zhang, Shuo Liu, Xiaomeng Chu, Jie Peng, Jianmin Ji, Yanyong Zhang

Instead of extracting features from the tensor program itself, TLP extracts features from the schedule primitives.

Multi-Task Learning

Combining Improvements for Exploiting Dependency Trees in Neural Semantic Parsing

no code implementations25 Dec 2021 Defeng Xie, Jianmin Ji, Jiafei Xu, Ran Ji

The dependency tree of a natural language sentence can capture the interactions between semantics and words.

Ensemble Learning Semantic Parsing +1

VPFNet: Improving 3D Object Detection with Virtual Point based LiDAR and Stereo Data Fusion

no code implementations29 Nov 2021 Hanqi Zhu, Jiajun Deng, Yu Zhang, Jianmin Ji, Qiuyu Mao, Houqiang Li, Yanyong Zhang

However, this approach often suffers from the mismatch between the resolution of point clouds and RGB images, leading to sub-optimal performance.

3D Object Detection Data Augmentation +2

Reinforcement Learning for Robot Navigation with Adaptive Forward Simulation Time (AFST) in a Semi-Markov Model

1 code implementation13 Aug 2021 Yu'an Chen, Ruosong Ye, Ziyang Tao, Hongjian Liu, Guangda Chen, Jie Peng, Jun Ma, Yu Zhang, Jianmin Ji, Yanyong Zhang

Deep reinforcement learning (DRL) algorithms have proven effective in robot navigation, especially in unknown environments, by directly mapping perception inputs into robot control commands.

reinforcement-learning Reinforcement Learning (RL) +1

Neighbor-Vote: Improving Monocular 3D Object Detection through Neighbor Distance Voting

1 code implementation6 Jul 2021 Xiaomeng Chu, Jiajun Deng, Yao Li, Zhenxun Yuan, Yanyong Zhang, Jianmin Ji, Yu Zhang

As cameras are increasingly deployed in new application domains such as autonomous driving, performing 3D object detection on monocular images becomes an important task for visual scene understanding.

Autonomous Driving Monocular 3D Object Detection +4

Multi-Modal 3D Object Detection in Autonomous Driving: a Survey

no code implementations24 Jun 2021 Yingjie Wang, Qiuyu Mao, Hanqi Zhu, Jiajun Deng, Yu Zhang, Jianmin Ji, Houqiang Li, Yanyong Zhang

In this survey, we first introduce the background of popular sensors used for self-driving, their data properties, and the corresponding object detection algorithms.

3D Object Detection Autonomous Driving +4

3D Segmentation Learning from Sparse Annotations and Hierarchical Descriptors

no code implementations27 May 2021 Peng Yin, Lingyun Xu, Jianmin Ji, Sebastian Scherer, Howie Choset

One of the main obstacles to 3D semantic segmentation is the significant effort required to generate expensive point-wise annotations for fully supervised training.

3D Semantic Segmentation Segmentation

Neural networks behave as hash encoders: An empirical study

1 code implementation14 Jan 2021 Fengxiang He, Shiye Lei, Jianmin Ji, DaCheng Tao

We then define an activation hash phase chart to represent the space spanned by model size, training time, training sample size, and the encoding properties, which is divided into three canonical regions: the under-expressive regime, the critically-expressive regime, and the sufficiently-expressive regime.

A Multi-Domain Feature Learning Method for Visual Place Recognition

no code implementations26 Feb 2019 Peng Yin, Lingyun Xu, Xueqian Li, Chen Yin, Yingli Li, Rangaprasad Arun Srivatsan, Lu Li, Jianmin Ji, Yuqing He

Visual Place Recognition (VPR) is an important component in both computer vision and robotics applications, thanks to its ability to determine whether a place has been visited and where specifically.

Attribute Visual Place Recognition

MRS-VPR: a multi-resolution sampling based global visual place recognition method

no code implementations26 Feb 2019 Peng Yin, Rangaprasad Arun Srivatsan, Yin Chen, Xueqian Li, Hongda Zhang, Lingyun Xu, Lu Li, Zhenzhong Jia, Jianmin Ji, Yuqing He

We propose MRS-VPR, a multi-resolution, sampling-based place recognition method, which can significantly improve the matching efficiency and accuracy in sequential matching.

Loop Closure Detection Visual Navigation +1

KDSL: a Knowledge-Driven Supervised Learning Framework for Word Sense Disambiguation

no code implementations28 Aug 2018 Shi Yin, Yi Zhou, Chenguang Li, Shangfei Wang, Jianmin Ji, Xiaoping Chen, Ruili Wang

We propose KDSL, a new word sense disambiguation (WSD) framework that utilizes knowledge to automatically generate sense-labeled data for supervised learning.

Word Sense Disambiguation

Well-Founded Operators for Normal Hybrid MKNF Knowledge Bases

no code implementations6 Jul 2017 Jianmin Ji, Fangfang Liu, Jia-Huai You

In this paper, we address this problem by formulating the notion of unfounded sets for nondisjunctive hybrid MKNF knowledge bases, based on which we propose and study two new well-founded operators.

Implementing Default and Autoepistemic Logics via the Logic of GK

no code implementations5 May 2014 Jianmin Ji, Hannes Strass

The logic of knowledge and justified assumptions, also known as logic of grounded knowledge (GK), was proposed by Lin and Shoham as a general logic for nonmonotonic reasoning.

Translation
