Search Results for author: Cheng Hu

Found 15 papers, 4 papers with code

Event-Priori-Based Vision-Language Model for Efficient Visual Understanding

no code implementations9 Jun 2025 Haotong Qin, Cheng Hu, Michele Magno

Its core contribution is a novel mechanism leveraging motion priors derived from dynamic event vision to enhance VLM efficiency.

Event-based vision Language Modeling +2
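The abstract excerpt above only names the idea of using event-derived motion priors to make a VLM more efficient. As a loose illustration of one way such a prior could be used, the sketch below keeps only the image patches with the most event activity before they become visual tokens; the patch grid, keep ratio, and thresholding rule are invented for illustration and are not the paper's mechanism.

```python
# Hypothetical sketch: prune static image patches using an event-activity map
# before embedding them as visual tokens for a VLM. All numbers and the
# selection rule are illustrative assumptions.
import numpy as np

def select_active_patches(event_counts, patch_embeddings, keep_ratio=0.25):
    """Keep the patches with the highest accumulated event counts.

    event_counts     : (H_p, W_p) events accumulated per patch.
    patch_embeddings : (H_p * W_p, D) flattened patch embeddings.
    """
    flat = event_counts.ravel()
    k = max(1, int(keep_ratio * flat.size))   # number of patches to keep
    keep_idx = np.argsort(flat)[-k:]          # indices of the most active patches
    return patch_embeddings[keep_idx], keep_idx

# Toy usage: a 14x14 patch grid with 768-dim embeddings.
counts = np.random.poisson(2.0, size=(14, 14))
patches = np.random.randn(14 * 14, 768)
active, idx = select_active_patches(counts, patches)
print(active.shape)   # roughly one quarter of the original visual tokens
```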

Towards Intelligent Edge Sensing for ISCC Network: Joint Multi-Tier DNN Partitioning and Beamforming Design

no code implementations30 Apr 2025 Peng Liu, Zesong Fei, Xinyi Wang, Xiaoyang Li, Weijie Yuan, Yuanhao Li, Cheng Hu, Dusit Niyato

To minimize the overall sensing task inference latency across all ISAC devices, we jointly optimize the DNN partitioning strategy, ISAC beamforming, and computational resource allocation at the MEC servers and devices, subject to sensing beampattern constraints.

Collaborative Inference Edge-computing +2
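The excerpt above describes a joint optimization over DNN partitioning, beamforming, and compute allocation. The toy sketch below shows only the partitioning piece in its simplest form: pick the single split point that minimizes device compute, uplink transfer, and server compute latency. The layer costs, feature sizes, and link rate are made-up numbers, and the paper's multi-tier, multi-device formulation with beampattern constraints is not reproduced here.

```python
# Illustrative latency model (not the paper's formulation) for choosing a
# single DNN partition point between a device and an MEC server.

layer_flops = [2e8, 4e8, 4e8, 8e8, 1e8]     # per-layer workload (FLOPs), toy values
feature_bits = [6e6, 3e6, 3e6, 1e6, 2e4]    # per-layer output size (bits), toy values
input_bits = 6e6                             # raw sensor input size (bits)
device_speed = 5e9                           # device throughput (FLOP/s)
server_speed = 1e11                          # MEC server throughput (FLOP/s)
uplink_rate = 2e7                            # achievable uplink rate (bit/s)

def latency(p):
    """Total latency if layers [0, p) run on-device and [p, L) on the server."""
    t_device = sum(layer_flops[:p]) / device_speed
    t_uplink = (feature_bits[p - 1] if p > 0 else input_bits) / uplink_rate
    t_server = sum(layer_flops[p:]) / server_speed
    return t_device + t_uplink + t_server

best_split = min(range(len(layer_flops) + 1), key=latency)
print(best_split, latency(best_split))
```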

Enhancing Autonomous Driving Systems with On-Board Deployed Large Language Models

1 code implementation15 Apr 2025 Nicolas Baumann, Cheng Hu, Paviththiren Sivasothilingam, Haotong Qin, Lei Xie, Michele Magno, Luca Benini

Neural Networks (NNs) trained through supervised learning struggle to manage the edge-case scenarios common in real-world driving, because no dataset can exhaustively cover all edge cases. Knowledge-driven approaches, akin to how humans intuitively detect unexpected driving behavior, are therefore a suitable complement to data-driven methods.

Autonomous Driving Computational Efficiency +4

Separated Contrastive Learning for Matching in Cross-domain Recommendation with Curriculum Scheduling

no code implementations22 Feb 2025 Heng Chang, Liang Gu, Cheng Hu, Zhinan Zhang, Hong Zhu, Yuhui Xu, Yuan Fang, Zhen Chen

Cross-domain recommendation (CDR) is a task that aims to improve the recommendation performance in a target domain by leveraging the information from source domains.

Contrastive Learning Recommendation Systems +3

RLPP: A Residual Method for Zero-Shot Real-World Autonomous Racing on Scaled Platforms

1 code implementation28 Jan 2025 Edoardo Ghignone, Nicolas Baumann, Cheng Hu, Jonathan Wang, Lei Xie, Andrea Carron, Michele Magno

This hybrid approach leverages the reliability and interpretability of PP while using RL to fine-tune the controller's performance in real-world scenarios.

Autonomous Racing Reinforcement Learning (RL)
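The RLPP excerpt above describes a residual architecture: a geometric Pure Pursuit (PP) controller provides the baseline command and a learned policy adds a small correction. The sketch below shows that structure in minimal form; the function names, the bound on the residual, and the `policy` callable are assumed placeholders, not the released implementation.

```python
# Minimal residual-control sketch: Pure Pursuit baseline plus a clipped
# correction from a learned policy. Interfaces are assumed for illustration.
import numpy as np

def pp_steering(pose, lookahead_point, wheelbase=0.33):
    """Classic Pure Pursuit steering angle toward a lookahead point."""
    dx = lookahead_point[0] - pose[0]
    dy = lookahead_point[1] - pose[1]
    yaw = pose[2]
    # transform the target into the vehicle frame
    local_x = np.cos(-yaw) * dx - np.sin(-yaw) * dy
    local_y = np.sin(-yaw) * dx + np.cos(-yaw) * dy
    ld_squared = local_x ** 2 + local_y ** 2
    return np.arctan2(2.0 * wheelbase * local_y, ld_squared)

def residual_action(pose, lookahead_point, observation, policy, max_residual=0.1):
    """Baseline PP command plus a bounded residual from the learned policy."""
    base = pp_steering(pose, lookahead_point)
    residual = np.clip(policy(observation), -max_residual, max_residual)
    return base + residual
```

Bounding the residual keeps the learned correction from overriding the interpretable baseline, which is the property the excerpt highlights.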

Mitigating Social Bias in Large Language Models: A Multi-Objective Approach within a Multi-Agent Framework

1 code implementation20 Dec 2024 Zhenjie Xu, Wenqing Chen, Yi Tang, Xuanying Li, Cheng Hu, Zhixuan Chu, Kui Ren, Zibin Zheng, Zhichao Lu

Our experiments conducted on two datasets and two models demonstrate that MOMA reduces bias scores by up to 87.7%, with only a marginal performance degradation of up to 6.8% on the BBQ dataset.

Learning to Drift in Extreme Turning with Active Exploration and Gaussian Process Based MPC

no code implementations8 Oct 2024 Guoqiang Wu, Cheng Hu, Wangjia Weng, Zhouheng Li, Yonghao Fu, Lei Xie, Hongye Su

In the RC car experiment, the average lateral error with GPR is 36.7% lower, and exploration further leads to a 29.0% reduction.

GPR Model Predictive Control +1
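As background to the GPR result quoted above, the general idea of a Gaussian-process-corrected model is to learn the mismatch between a nominal vehicle model and measured dynamics and add the GP mean back as a correction when predicting forward. The sketch below shows that pattern with a deliberately simplified nominal model and toy training data; it is not the paper's single-track formulation or its MPC.

```python
# Hedged sketch of a GP residual correction on a nominal dynamics model.
# The nominal model, features, and data are simplified stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def nominal_step(state, control, dt=0.02, wheelbase=0.33):
    """Very simplified kinematic step (placeholder for a single-track model)."""
    x, y, yaw, v = state
    steer, accel = control
    return np.array([
        x + v * np.cos(yaw) * dt,
        y + v * np.sin(yaw) * dt,
        yaw + v * np.tan(steer) * dt / wheelbase,
        v + accel * dt,
    ])

# Fit the GP on residuals between measured states and the nominal prediction.
X_train = np.random.randn(200, 6)        # [state, control] features (toy data)
y_train = 0.05 * np.random.randn(200)    # measured yaw residual (toy data)
gp = GaussianProcessRegressor().fit(X_train, y_train)

def corrected_step(state, control, dt=0.02):
    feat = np.concatenate([state, control]).reshape(1, -1)
    mean, std = gp.predict(feat, return_std=True)  # std could drive active exploration
    nxt = nominal_step(state, control, dt)
    nxt[2] += mean[0] * dt                          # add the learned residual to yaw
    return nxt
```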

Mini Honor of Kings: A Lightweight Environment for Multi-Agent Reinforcement Learning

1 code implementation6 Jun 2024 Lin Liu, Jian Zhao, Cheng Hu, Zhengtao Cao, Youpeng Zhao, Zhenbin Ye, Meng Meng, Wenjun Wang, Zhaofeng He, Houqiang Li, Xia Lin, Lanxiao Huang

To address these issues, we introduce the first publicly available map editor for the popular mobile game Honor of Kings and design a lightweight environment, Mini Honor of Kings (Mini HoK), for researchers to conduct experiments.

Multi-agent Reinforcement Learning

Attention and Prediction Guided Motion Detection for Low-Contrast Small Moving Targets

no code implementations27 Apr 2021 Hongxin Wang, Jiannan Zhao, Huatian Wang, Cheng Hu, Jigen Peng, Shigang Yue

The developed visual system comprises three main subsystems, namely, an attention module, an STMD-based neural network, and a prediction module.

Motion Detection Prediction
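The excerpt above names three subsystems chained per frame. Purely as a structural sketch, the snippet below shows how such a pipeline could be composed; the three modules are opaque placeholders, and none of the paper's attention, STMD, or prediction computations are reproduced.

```python
# Structural sketch only: chaining the three named subsystems for one frame.
# `attention`, `stmd`, and `predictor` are placeholder callables.
def detect_small_targets(frame, attention, stmd, predictor, prev_prediction):
    enhanced = attention(frame, prev_prediction)  # emphasize low-contrast regions of interest
    responses = stmd(enhanced)                    # STMD-based small-target motion responses
    prediction = predictor(responses)             # anticipate target locations in the next frame
    return responses, prediction
```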

A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments

no code implementations29 Dec 2019 Hongxin Wang, Huatian Wang, Jiannan Zhao, Cheng Hu, Jigen Peng, Shigang Yue

Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots that are generally limited in computational power.

Motion Detection

An LGMD Based Competitive Collision Avoidance Strategy for UAV

no code implementations15 Apr 2019 Jiannan Zhao, Xingzao Ma, Qinbing Fu, Cheng Hu, Shigang Yue

In this paper, we present an LGMD based competitive collision avoidance method for UAV indoor navigation.

Collision Avoidance

Synthetic Neural Vision System Design for Motion Pattern Recognition in Dynamic Robot Scenes

no code implementations15 Apr 2019 Qinbing Fu, Cheng Hu, Pengcheng Liu, Shigang Yue

The presented system is a synthetic neural network, which comprises two complementary sub-systems with four spiking neurons -- the lobula giant movement detectors (LGMD1 and LGMD2) in locusts for sensing looming and recession, and the direction selective neurons (DSN-R and DSN-L) in flies for translational motion extraction.

Collision Avoidance Decision Making +1
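The excerpt above lists four neuron outputs (LGMD1, LGMD2, DSN-R, DSN-L) covering looming, recession, and left/right translation. The snippet below is a hypothetical fusion rule mapping those four scalar outputs to coarse motion-pattern labels; the thresholds and decision order are invented for illustration, and the paper defines its own spiking mechanism and competition scheme.

```python
# Hypothetical fusion of the four neuron outputs named in the abstract.
# Thresholds and the decision rule are illustrative assumptions.
def classify_motion(lgmd1, lgmd2, dsn_r, dsn_l, spike_threshold=0.7):
    if lgmd2 > spike_threshold:                    # LGMD2: selective to dark looming objects
        return "looming"
    if lgmd1 > spike_threshold:                    # LGMD1: responds to looming and recession
        return "looming_or_recession"
    if dsn_r > spike_threshold and dsn_r >= dsn_l: # direction-selective neurons: translation
        return "translating_right"
    if dsn_l > spike_threshold:
        return "translating_left"
    return "no_salient_motion"
```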

A Bio-inspired Collision Detector for Small Quadcopter

no code implementations14 Jan 2018 Jiannan Zhao, Cheng Hu, Chun Zhang, Zhihua Wang, Shigang Yue

The results observed in the experiments demonstrated that the LGMD collision detector can work as a vision module for the quadcopter's collision avoidance task.

Collision Avoidance

Collision Selective Visual Neural Network Inspired by LGMD2 Neurons in Juvenile Locusts

no code implementations22 Dec 2017 Qinbing Fu, Cheng Hu, Shigang Yue

The results demonstrated that this framework is able to selectively detect looming dark objects embedded in bright backgrounds, which makes it ideal for ground mobile platforms.
