Search Results for author: Haotian Fu

Found 22 papers, 9 papers with code

Spiking Neural Network for Intra-cortical Brain Signal Decoding

1 code implementation • 12 Apr 2025 • Song Yang, Haotian Fu, Herui Zhang, Peng Zhang, Wei Li, Dongrui Wu

Decoding brain signals accurately and efficiently is crucial for intra-cortical brain-computer interfaces.

Frequency-aware Event Cloud Network

no code implementations • 30 Dec 2024 • Hongwei Ren, Fei Ma, Xiaopeng Lin, Yuetong Fang, Hongxiang Huang, Yulong Huang, Yue Zhou, Haotian Fu, ZiYi Yang, Fei Richard Yu, Bojun Cheng

Event cameras are biologically inspired sensors that emit events asynchronously with remarkable temporal resolution, garnering significant attention from both industry and academia. (A generic sketch of the standard event representation follows this entry.)

Action Recognition · Pose Estimation
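For context on the event-camera data mentioned above: each event is conventionally a tuple (x, y, t, p) of pixel coordinates, timestamp, and polarity. The minimal sketch below groups a synthetic event stream into a fixed-size, point-cloud-like array; the `events_to_cloud` helper and all numbers are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def events_to_cloud(events, num_points=1024, rng=None):
    """Randomly subsample an (M, 4) event stream into an (N, 4) 'event cloud'."""
    rng = rng or np.random.default_rng(0)
    events = np.asarray(events, dtype=np.float32)          # columns: x, y, t, p
    idx = rng.choice(len(events), size=min(num_points, len(events)), replace=False)
    cloud = events[np.sort(idx)]                            # keep temporal order
    # Normalize timestamps to [0, 1] so recordings of different duration are comparable.
    t = cloud[:, 2]
    cloud[:, 2] = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    return cloud

# Example: 5000 synthetic events on a 128x128 sensor.
rng = np.random.default_rng(0)
stream = np.stack([rng.integers(0, 128, 5000),              # x coordinate
                   rng.integers(0, 128, 5000),              # y coordinate
                   np.sort(rng.random(5000)),               # timestamp (seconds)
                   rng.choice([-1, 1], 5000)], axis=1)      # polarity
print(events_to_cloud(stream, num_points=256).shape)        # (256, 4)
```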

Event-based Motion Deblurring via Multi-Temporal Granularity Fusion

no code implementations • 16 Dec 2024 • Xiaopeng Lin, Hongwei Ren, Yulong Huang, Zunchang Liu, Yue Zhou, Haotian Fu, Biao Pan, Bojun Cheng

Effectively utilizing the high-temporal-resolution event data is crucial for extracting precise motion information and enhancing deblurring performance.

Deblurring · Image Deblurring

Natural Language Reinforcement Learning

1 code implementation • 21 Nov 2024 • Xidong Feng, Bo Liu, Ziyu Wan, Haotian Fu, Girish A. Koushik, Zhiyuan Hu, Mengyue Yang, Ying Wen, Jun Wang

Reinforcement Learning (RL) mathematically formulates decision-making with a Markov Decision Process (MDP). (A toy MDP sketch follows this entry.)

Decision Making · reinforcement-learning · +2
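As generic background for the MDP formulation referenced above, here is a minimal tabular MDP (S, A, P, R, gamma) with value iteration; the transition and reward tables are toy values with no connection to the paper's natural-language formulation.

```python
import numpy as np

# Toy MDP: 3 states, 2 actions, discount factor gamma. Purely illustrative numbers.
n_states, n_actions, gamma = 3, 2, 0.9
P = np.zeros((n_states, n_actions, n_states))   # P[s, a, s'] transition probabilities
P[0, 0, 1] = P[0, 1, 2] = 1.0
P[1, :, 2] = 1.0
P[2, :, 2] = 1.0                                # state 2 is absorbing
R = np.array([[0.0, 1.0],                       # R[s, a] expected immediate reward
              [5.0, 5.0],
              [0.0, 0.0]])

V = np.zeros(n_states)
for _ in range(100):                            # value iteration
    Q = R + gamma * P @ V                       # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V = Q.max(axis=1)
print(V.round(2))                               # optimal state values of the toy MDP: [4.5, 5.0, 0.0]
```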

EPO: Hierarchical LLM Agents with Environment Preference Optimization

1 code implementation • 28 Aug 2024 • Qi Zhao, Haotian Fu, Chen Sun, George Konidaris

Long-horizon decision-making tasks present significant challenges for LLM-based agents due to the need for extensive planning over multiple steps.

Action Generation · Decision Making

Rethinking Efficient and Effective Point-based Networks for Event Camera Classification and Regression: EventMamba

1 code implementation • 9 May 2024 • Hongwei Ren, Yue Zhou, Jiadong Zhu, Haotian Fu, Yulong Huang, Xiaopeng Lin, Yuetong Fang, Fei Ma, Hao Yu, Bojun Cheng

In contrast, Point Cloud is a popular representation for processing 3-dimensional data and serves as an alternative method to exploit local and global spatial features.

Action Recognition · Mamba · +1

Model-based Reinforcement Learning for Parameterized Action Spaces

2 code implementations • 3 Apr 2024 • Renhao Zhang, Haotian Fu, Yilin Miao, George Konidaris

We propose a novel model-based reinforcement learning algorithm -- Dynamics Learning and predictive control with Parameterized Actions (DLPA) -- for Parameterized Action Markov Decision Processes (PAMDPs). (A generic sketch of a parameterized action follows this entry.)

model · Model-based Reinforcement Learning · +2
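To illustrate what an action in a Parameterized Action MDP looks like (a discrete choice paired with continuous parameters), here is a minimal sampling sketch; the action names, bounds, and `sample_random_action` helper are invented for illustration and are not part of the DLPA algorithm.

```python
import random
from dataclasses import dataclass

# In a PAMDP, the agent first picks a discrete action k, then a continuous
# parameter vector x_k whose dimensionality depends on k. The action names
# and parameter bounds below are made up for illustration only.
PARAM_BOUNDS = {
    "kick": [(0.0, 100.0), (-180.0, 180.0)],   # power, direction (degrees)
    "dash": [(0.0, 100.0)],                    # power
    "turn": [(-180.0, 180.0)],                 # angle (degrees)
}

@dataclass
class ParameterizedAction:
    discrete: str        # which discrete action was chosen
    params: list         # its continuous parameters

def sample_random_action(rng=random):
    k = rng.choice(list(PARAM_BOUNDS))
    params = [rng.uniform(lo, hi) for lo, hi in PARAM_BOUNDS[k]]
    return ParameterizedAction(k, params)

print(sample_random_action())   # e.g. ParameterizedAction(discrete='dash', params=[37.2])
```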

A Simple and Effective Point-based Network for Event Camera 6-DOFs Pose Relocalization

no code implementations • CVPR 2024 • Hongwei Ren, Jiadong Zhu, Yue Zhou, Haotian Fu, Yulong Huang, Bojun Cheng

These cameras implicitly capture movement and depth information in events, making them appealing sensors for Camera Pose Relocalization (CPR) tasks.

SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition

no code implementations • 11 Oct 2023 • Hongwei Ren, Yue Zhou, Yulong Huang, Haotian Fu, Xiaopeng Lin, Jie Song, Bojun Cheng

Moreover, it achieves SOTA performance across all methods on three datasets, utilizing approximately 0.3% of the parameters and 0.5% of the power consumption employed by artificial neural networks (ANNs).

Action Recognition

TTPOINT: A Tensorized Point Cloud Network for Lightweight Action Recognition with Event Cameras

no code implementations • 19 Aug 2023 • Hongwei Ren, Yue Zhou, Haotian Fu, Yulong Huang, Renjing Xu, Bojun Cheng

In the experiment, TTPOINT emerged as the SOTA method on three datasets while also attaining SOTA among point cloud methods on all five datasets.

Action Recognition

Model-based Lifelong Reinforcement Learning with Bayesian Exploration

2 code implementations • 20 Oct 2022 • Haotian Fu, Shangqun Yu, Michael Littman, George Konidaris

We propose a model-based lifelong reinforcement-learning approach that estimates a hierarchical Bayesian posterior distilling the common structure shared across different tasks.

model · reinforcement-learning · +2

Meta-Learning Parameterized Skills

1 code implementation • 7 Jun 2022 • Haotian Fu, Shangqun Yu, Saket Tiwari, Michael Littman, George Konidaris

We propose a novel parameterized skill-learning algorithm that aims to learn transferable parameterized skills and synthesize them into a new action space that supports efficient learning in long-horizon tasks.

Meta-Learning · Robot Manipulation

Bayesian Exploration for Lifelong Reinforcement Learning

no code implementations • 29 Sep 2021 • Haotian Fu, Shangqun Yu, Michael Littman, George Konidaris

A central question in reinforcement learning (RL) is how to leverage prior knowledge to accelerate learning in new tasks.

Lifelong learning · reinforcement-learning · +2

MGHRL: Meta Goal-generation for Hierarchical Reinforcement Learning

no code implementations • 30 Sep 2019 • Haotian Fu, Hongyao Tang, Jianye Hao, Wulong Liu, Chen Chen

Most meta reinforcement learning (meta-RL) methods learn to adapt to new tasks by directly optimizing the parameters of policies over the primitive action space.

Hierarchical Reinforcement Learning · Meta-Learning · +4

Efficient meta reinforcement learning via meta goal generation

no code implementations • 25 Sep 2019 • Haotian Fu, Hongyao Tang, Jianye Hao

Meta reinforcement learning (meta-RL) is able to accelerate the acquisition of new tasks by learning from past experience.

Meta-Learning · Meta Reinforcement Learning · +3

Deep Multi-Agent Reinforcement Learning with Discrete-Continuous Hybrid Action Spaces

no code implementations • 12 Mar 2019 • Haotian Fu, Hongyao Tang, Jianye Hao, Zihan Lei, Yingfeng Chen, Changjie Fan

Deep Reinforcement Learning (DRL) has been applied to address a variety of cooperative multi-agent problems with either discrete action spaces or continuous action spaces.

Deep Reinforcement Learning · Multi-agent Reinforcement Learning · +3
