Search Results for author: Fangwei Zhong

Found 15 papers, 4 papers with code

Empowering Embodied Visual Tracking with Visual Foundation Models and Offline RL

no code implementations • 15 Apr 2024 • Fangwei Zhong, Kui Wu, Hai Ci, Churan Wang, Hao Chen

We evaluate our tracker on several high-fidelity environments with challenging situations, such as distraction and occlusion.

Offline RL · Q-Learning +2

Fast Peer Adaptation with Context-aware Exploration

no code implementations • 4 Feb 2024 • Long Ma, Yuanfei Wang, Fangwei Zhong, Song-Chun Zhu, Yizhou Wang

To do so, it is crucial for the agent to efficiently probe and identify the peer's strategy, as this is the prerequisite for carrying out the best response in adaptation.

RSPT: Reconstruct Surroundings and Predict Trajectories for Generalizable Active Object Tracking

no code implementations • 7 Apr 2023 • Fangwei Zhong, Xiao Bi, Yudi Zhang, Wei Zhang, Yizhou Wang

However, building a generalizable active tracker that works robustly across different scenarios remains a challenge, especially in unstructured environments with cluttered obstacles and diverse layouts.

Autonomous Driving · Object Tracking

Proactive Multi-Camera Collaboration For 3D Human Pose Estimation

no code implementations • 7 Mar 2023 • Hai Ci, Mickel Liu, Xuehai Pan, Fangwei Zhong, Yizhou Wang

This paper presents a multi-agent reinforcement learning (MARL) scheme for proactive Multi-Camera Collaboration in 3D Human Pose Estimation in dynamic human crowds.

3D Human Pose Estimation · 3D Reconstruction +1

GFPose: Learning 3D Human Pose Prior with Gradient Fields

1 code implementation • CVPR 2023 • Hai Ci, Mingdong Wu, Wentao Zhu, Xiaoxuan Ma, Hao Dong, Fangwei Zhong, Yizhou Wang

During the denoising process, GFPose implicitly incorporates pose priors in gradients and unifies various discriminative and generative tasks in an elegant framework.

Denoising · Monocular 3D Human Pose Estimation +1
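
As a rough illustration of the denoising idea in the GFPose entry above, the sketch below performs coarse-to-fine denoising by following a score (gradient) field toward higher-density poses. The `toy_score` function is a hand-written stand-in for the learned score network, and the joint count, noise schedule, and step sizes are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: coarse-to-fine denoising by following a score field.
# `toy_score` stands in for the learned GFPose score network, using the
# analytic score of a Gaussian pose prior instead.
import numpy as np

rng = np.random.default_rng(0)
MEAN_POSE = np.zeros((17, 3))      # hypothetical 17-joint rest pose
PRIOR_STD = 0.1                    # assumed spread of the pose prior (metres)

def toy_score(pose, sigma):
    """Stand-in for s(pose, sigma) ~ grad log p(noisy pose)."""
    return -(pose - MEAN_POSE) / (PRIOR_STD**2 + sigma**2)

def denoise(pose, sigmas=(0.5, 0.2, 0.05), steps=25):
    """Follow the score at decreasing noise levels (deterministic variant)."""
    for sigma in sigmas:
        step = 0.1 * (PRIOR_STD**2 + sigma**2)   # scale the step to the level
        for _ in range(steps):
            pose = pose + step * toy_score(pose, sigma)
    return pose

noisy = MEAN_POSE + rng.normal(scale=0.5, size=MEAN_POSE.shape)
print("mean joint error before:", np.abs(noisy - MEAN_POSE).mean())
print("mean joint error after: ", np.abs(denoise(noisy) - MEAN_POSE).mean())
```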

TarGF: Learning Target Gradient Field to Rearrange Objects without Explicit Goal Specification

no code implementations • 2 Sep 2022 • Mingdong Wu, Fangwei Zhong, Yulong Xia, Hao Dong

For object rearrangement, the TarGF can be used in two ways: 1) for model-based planning, the target gradient can be cast into a reference control and turned into actions by a distributed path planner; 2) for model-free reinforcement learning, the TarGF not only estimates the likelihood change used as a reward but also provides suggested actions for residual policy learning.

Imitation Learning · Object +2
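
The two uses of the target gradient field listed in the TarGF snippet can be sketched as follows. Here `target_score` is a hand-written stand-in (the score of a Gaussian goal distribution) rather than the learned TarGF network, and the planning and RL machinery is omitted.

```python
# Hedged sketch of two uses of a target gradient field: as a reward signal
# (estimated likelihood change) and as a source of suggested actions.
import numpy as np

GOAL = np.array([1.0, 1.0])   # hypothetical 2-D goal position for one object

def target_score(state):
    """Stand-in for grad log p_target(state)."""
    return -(state - GOAL)

def likelihood_change_reward(state, next_state):
    """First-order estimate of log p(next_state) - log p(state)."""
    return float(target_score(state) @ (next_state - state))

def suggested_action(state, gain=0.1):
    """Gradient step toward higher target likelihood, usable as a residual action."""
    return gain * target_score(state)

state = np.array([0.0, 0.0])
action = suggested_action(state)
next_state = state + action
print("reward:", likelihood_change_reward(state, next_state))
```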

GraspARL: Dynamic Grasping via Adversarial Reinforcement Learning

no code implementations • 4 Mar 2022 • Tianhao Wu, Fangwei Zhong, Yiran Geng, Hongchen Wang, Yongjian Zhu, Yizhou Wang, Hao Dong

We formulate the dynamic grasping problem as a 'move-and-grasp' game, in which the robot must pick up the object on the mover while the adversarial mover tries to find a path to escape it.

Object · reinforcement-learning +1
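
A minimal sketch of the zero-sum 'move-and-grasp' reward structure described above, assuming simple distance-based shaping; the grasp bonus, grasp radius, and state representation are illustrative and not taken from the GraspARL paper.

```python
# Hedged sketch: zero-sum rewards for a 'move-and-grasp' game.
import numpy as np

GRASP_RADIUS = 0.05  # assumed gripper reach (metres)

def step_rewards(gripper, obj, prev_gripper, prev_obj):
    """The robot is rewarded for closing the gap (plus a grasp bonus); the
    adversarial mover receives the negated reward, making the game zero-sum."""
    prev_d = np.linalg.norm(prev_gripper - prev_obj)
    d = np.linalg.norm(gripper - obj)
    robot_r = (prev_d - d) + (10.0 if d < GRASP_RADIUS else 0.0)
    return robot_r, -robot_r

r_robot, r_mover = step_rewards(np.array([0.1, 0.0]), np.array([0.12, 0.0]),
                                np.array([0.2, 0.0]), np.array([0.1, 0.0]))
print(r_robot, r_mover)
```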

ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind

1 code implementation • NeurIPS 2021 • Yuanfei Wang, Fangwei Zhong, Jing Xu, Yizhou Wang

With ToM, each agent is capable of inferring the mental states and intentions of others according to its (local) observation.

Towards Distraction-Robust Active Visual Tracking

no code implementations • 18 Jun 2021 • Fangwei Zhong, Peng Sun, Wenhan Luo, Tingyun Yan, Yizhou Wang

In active visual tracking, it is notoriously difficult when distracting objects appear, as distractors often mislead the tracker by occluding the target or bringing a confusing appearance.

Visual Tracking

Learning Multi-Agent Coordination for Enhancing Target Coverage in Directional Sensor Networks

1 code implementation • NeurIPS 2020 • Jing Xu, Fangwei Zhong, Yizhou Wang

Maximizing target coverage by adjusting the orientations of distributed sensors is an important problem in directional sensor networks (DSNs).
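
To make the coverage objective concrete, the sketch below counts a target as covered when it lies within a sensor's range and angular field of view; the learned coordination policy would choose the sensor orientations that maximize this count. The field-of-view and range values are placeholder assumptions, not parameters from the paper.

```python
# Hedged sketch of a directional-sensor coverage objective.
import numpy as np

def covered(sensor_xy, sensor_angle, target_xy, fov=np.pi / 3, rng=5.0):
    """True if the target is within range and inside the angular field of view."""
    vec = np.asarray(target_xy, float) - np.asarray(sensor_xy, float)
    dist = np.linalg.norm(vec)
    bearing = np.arctan2(vec[1], vec[0])
    dtheta = np.abs((bearing - sensor_angle + np.pi) % (2 * np.pi) - np.pi)
    return dist <= rng and dtheta <= fov / 2

def coverage(sensors, angles, targets):
    """Number of targets seen by at least one sensor."""
    return sum(any(covered(s, a, t) for s, a in zip(sensors, angles))
               for t in targets)

print(coverage([(0, 0), (4, 0)], [0.0, np.pi], [(2, 0), (6, 1), (-3, 0)]))
```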

Pose-Assisted Multi-Camera Collaboration for Active Object Tracking

no code implementations • 15 Jan 2020 • Jing Li, Jing Xu, Fangwei Zhong, Xiangyu Kong, Yu Qiao, Yizhou Wang

In the system, each camera is equipped with two controllers and a switcher: the vision-based controller tracks targets based on observed images.

Object · Object Tracking
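
A toy sketch of the two-controllers-plus-switcher idea from the entry above: when the detector is confident, the vision-based controller is used; otherwise control falls back to a pose-based controller informed by peer cameras. The switching rule, gains, and thresholds are assumptions of this sketch, not the paper's learned policies.

```python
# Hedged sketch: per-camera switcher between a vision-based and a
# pose-based controller.
from dataclasses import dataclass

@dataclass
class Camera:
    pan: float = 0.0  # current pan angle (radians)

def vision_controller(cam, target_bearing_in_image):
    """Turn toward where the target appears in the camera's own image."""
    return 0.5 * target_bearing_in_image

def pose_controller(cam, peer_estimated_bearing):
    """Turn toward the bearing suggested by other cameras' pose estimates."""
    return 0.5 * (peer_estimated_bearing - cam.pan)

def switcher(cam, detection_conf, target_bearing, peer_bearing, thresh=0.5):
    if detection_conf >= thresh:                    # target clearly visible
        return vision_controller(cam, target_bearing)
    return pose_controller(cam, peer_bearing)       # occluded: trust peers

cam = Camera(pan=0.2)
print(switcher(cam, detection_conf=0.9, target_bearing=0.1, peer_bearing=1.0))
print(switcher(cam, detection_conf=0.1, target_bearing=0.0, peer_bearing=1.0))
```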

AD-VAT: An Asymmetric Dueling mechanism for learning Visual Active Tracking

no code implementations • ICLR 2019 • Fangwei Zhong, Peng Sun, Wenhan Luo, Tingyun Yan, Yizhou Wang

In AD-VAT, both the tracker and the target are approximated by end-to-end neural networks and are trained via RL in a dueling/competitive manner: i.e., the tracker tries to lock onto the target, while the target tries to escape from the tracker.
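
The dueling structure can be sketched as two learners with opposite objectives. The stub agents, 1-D world, and distance-keeping reward below are placeholders for the end-to-end networks and the partial zero-sum reward used in AD-VAT; only the training structure is illustrated.

```python
# Hedged sketch: tracker and target as separate learners with opposite rewards.
import random

class StubAgent:
    """Placeholder learner: acts randomly, records rewards, 'updates'."""
    def __init__(self):
        self.rewards = []
    def act(self, obs):
        return random.choice([-1.0, 0.0, 1.0])      # move left / stay / right
    def observe(self, r):
        self.rewards.append(r)
    def update(self):
        self.rewards.clear()                        # a real agent does RL here

def dueling_episode(tracker, target, steps=50, desired=2.0):
    pos_t, pos_g = 0.0, 3.0                         # 1-D toy world
    for _ in range(steps):
        pos_t += tracker.act(pos_g - pos_t)
        pos_g += target.act(pos_g - pos_t)
        r = 1.0 - abs(abs(pos_g - pos_t) - desired) # keep a fixed distance
        tracker.observe(r)                          # tracker maximises r ...
        target.observe(-r)                          # ... target maximises -r
    tracker.update(); target.update()

dueling_episode(StubAgent(), StubAgent())
```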

CRAVES: Controlling Robotic Arm with a Vision-based Economic System

1 code implementation • CVPR 2019 • Yiming Zuo, Weichao Qiu, Lingxi Xie, Fangwei Zhong, Yizhou Wang, Alan L. Yuille

We also construct a vision-based control system for task accomplishment, for which we train a reinforcement learning agent in a virtual environment and apply it to the real world.

3D Pose Estimation · Domain Adaptation

End-to-end Active Object Tracking and Its Real-world Deployment via Reinforcement Learning

no code implementations • 10 Aug 2018 • Wenhan Luo, Peng Sun, Fangwei Zhong, Wei Liu, Tong Zhang, Yizhou Wang

We further propose an environment augmentation technique and a customized reward function, which are crucial for successful training.

Object · Object Tracking +1
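
A hedged sketch of the kind of customized tracking reward the entry above refers to: the target should stay centred and at a preferred distance in the tracker's frame. The constants and exact terms here are assumptions of this sketch, not the paper's reward.

```python
# Hedged sketch: distance- and heading-based tracking reward.
import math

def tracking_reward(rel_x, rel_y, yaw_err, A=1.0, d_star=2.0, lam=0.5):
    """rel_x, rel_y: target position in the tracker frame (metres);
    yaw_err: angle between camera heading and target bearing (radians)."""
    dist_err = math.hypot(rel_x, rel_y - d_star)   # hypothetical shaping term
    return A - dist_err - lam * abs(yaw_err)

print(tracking_reward(0.0, 2.0, 0.0))   # ideal pose -> maximum reward A
print(tracking_reward(1.0, 4.0, 0.3))   # off-centre and too far -> penalised
```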

End-to-end Active Object Tracking via Reinforcement Learning

no code implementations • ICML 2018 • Wenhan Luo, Peng Sun, Fangwei Zhong, Wei Liu, Tong Zhang, Yizhou Wang

We study active object tracking, where a tracker takes as input the visual observation (i.e., frame sequence) and produces the camera control signal (e.g., move forward, turn left, etc.).

Object · Object Tracking +2
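
The end-to-end interface described above (frame sequence in, camera command out) might look like the following ConvNet-LSTM policy sketch in PyTorch. The action set, input resolution, and layer sizes are illustrative assumptions, and the RL training loop is omitted.

```python
# Hedged sketch: a frames-to-camera-command policy network.
import torch
import torch.nn as nn

ACTIONS = ["forward", "backward", "turn_left", "turn_right", "stop"]

class TrackerPolicy(nn.Module):
    def __init__(self, n_actions=len(ACTIONS)):
        super().__init__()
        self.encoder = nn.Sequential(                 # per-frame encoder
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(), nn.Flatten())
        self.rnn = nn.LSTM(32 * 9 * 9, 128, batch_first=True)  # temporal state
        self.head = nn.Linear(128, n_actions)         # action logits

    def forward(self, frames):                        # frames: (B, T, 3, 84, 84)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])                  # logits at the last step

logits = TrackerPolicy()(torch.zeros(1, 4, 3, 84, 84))
print(ACTIONS[int(logits.argmax())])
```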
