1 code implementation • 5 Sep 2023 • Yuxiang Yang, Yingqi Deng, Jiahao Nie, Jing Zhang
3D single object tracking (SOT) in point clouds is still a challenging problem due to appearance variation, distractors, and high sparsity of point clouds.
2 code implementations • 23 Apr 2023 • Jiahao Nie, Zhiwei He, Yuxiang Yang, Zhengyi Bao, Mingyu Gao, Jing Zhang
By integrating the derived classification scores with the center-ness scores, the resulting network can effectively suppress interference proposals and further mitigate task misalignment.
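A minimal sketch of this kind of score fusion (the tensor names, the element-wise product, and the top-k selection are illustrative assumptions, not the paper's exact formulation):

```python
import torch

def rank_proposals(cls_scores: torch.Tensor, centerness: torch.Tensor, k: int = 64):
    """Fuse classification and center-ness scores and keep the top-k proposals.

    cls_scores: (N,) classification confidence per proposal
    centerness: (N,) center-ness score per proposal
    """
    # The product down-weights proposals that are confident but poorly
    # centered on the target (and vice versa), suppressing interference.
    fused = cls_scores * centerness
    topk = torch.topk(fused, k=min(k, fused.numel()))
    return topk.indices, topk.values
```

The product form means a proposal must score well on both cues to survive, which is one simple way to suppress interference proposals.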
no code implementations • 17 Apr 2023 • Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots
Jumping is essential for legged robots to traverse difficult terrain.
1 code implementation • 1 Apr 2023 • Jiahao Nie, Zhiwei He, Yuxiang Yang, Xudong Lv, Mingyu Gao, Jing Zhang
Incorporating this transformer-based voting scheme into a 3D RPN yields a novel Siamese method, dubbed GLT-T, for 3D single object tracking on point clouds.
no code implementations • 8 Feb 2023 • Xinyi Yang, Shiyu Huang, Yiwen Sun, Yuxiang Yang, Chao Yu, Wei-Wei Tu, Huazhong Yang, Yu Wang
Goal-conditioned hierarchical reinforcement learning (HRL) provides a promising direction for tackling this challenge by introducing a hierarchical structure that decomposes the search space: the low-level policy predicts primitive actions under the guidance of goals derived from the high-level policy (see the rollout sketch below).
Hierarchical Reinforcement Learning
Multi-agent Reinforcement Learning
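As referenced above, a minimal rollout sketch for such a goal-conditioned hierarchy, assuming a fixed high-level decision interval `c` and generic `high_policy`/`low_policy` callables (all names are illustrative):

```python
def rollout(env, high_policy, low_policy, c: int = 10, horizon: int = 1000):
    """Goal-conditioned hierarchical rollout (illustrative sketch).

    Every `c` steps the high-level policy emits a goal; in between, the
    low-level policy maps (state, goal) to primitive actions.
    """
    state = env.reset()
    total_reward, goal = 0.0, None
    for t in range(horizon):
        if t % c == 0:                       # high level acts on a coarser timescale
            goal = high_policy(state)
        action = low_policy(state, goal)     # low level acts at every step
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```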
2 code implementations • 20 Nov 2022 • Jiahao Nie, Zhiwei He, Yuxiang Yang, Mingyu Gao, Jing Zhang
Technically, a global-local transformer (GLT) module is employed to integrate object- and patch-aware priors into seed point features, forming a strong feature representation of the seed points' geometric positions and thus providing more robust and accurate cues for offset learning.
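A rough PyTorch sketch of how object- and patch-aware priors might be injected into seed point features with a transformer layer (the module layout, projection layers, and tensor shapes are assumptions for illustration, not the actual GLT module):

```python
import torch
import torch.nn as nn

class PriorFusion(nn.Module):
    """Sketch: inject object- and patch-level prior embeddings into seed
    features with a transformer encoder layer (layer sizes are illustrative)."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.obj_proj = nn.Linear(dim, dim)
        self.patch_proj = nn.Linear(dim, dim)
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)

    def forward(self, seed_feats, obj_prior, patch_prior):
        # seed_feats: (B, N, C); obj_prior: (B, 1, C); patch_prior: (B, N, C)
        x = seed_feats + self.obj_proj(obj_prior) + self.patch_proj(patch_prior)
        return self.encoder(x)  # refined seed features used for offset learning
```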
no code implementations • 27 Jun 2022 • Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots
Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6km without failure at close-to-optimal speed.
4 code implementations • 12 Jun 2022 • Yuxiang Yang, Junjie Yang, Yufei Xu, Jing Zhang, Long Lan, DaCheng Tao
Based on APT-36K, we benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on a single frame under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization test for unseen animals, and (3) animal pose estimation with animal tracking.
no code implementations • 29 Apr 2022 • Jiahao Nie, Han Wu, Zhiwei He, Yuxiang Yang, Mingyu Gao, Zhekang Dong
In this paper, to alleviate this misalignment, we propose a novel tracking paradigm, called SiamLA.
1 code implementation • CVPR 2022 • Mingjin Zhang, Rui Zhang, Yuxiang Yang, Haichen Bai, Jing Zhang, Jie Guo
The TOAA block computes low-level information with an attention mechanism along both row and column directions and fuses it with high-level information, capturing the shape characteristics of targets while suppressing noise.
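A simplified PyTorch sketch of attention computed along row and column directions on a low-level feature map and fused with a high-level map (the 1x1 convolutions, gating form, and additive fusion are illustrative assumptions, not the exact TOAA block):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RowColumnAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # low, high: (B, C, H, W); high is assumed upsampled to low's resolution.
        q, k = self.query(low), self.key(low)
        sim = (q * k).sum(dim=1, keepdim=True)          # (B, 1, H, W) similarity map
        row_attn = F.softmax(sim, dim=-1)               # attend along the row (W) direction
        col_attn = F.softmax(sim, dim=-2)               # attend along the column (H) direction
        attended = low * row_attn + low * col_attn
        return attended + high                          # simple additive fusion with high-level features
```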
1 code implementation • 9 Apr 2021 • Yuxiang Yang, Tingnan Zhang, Erwin Coumans, Jie Tan, Byron Boots
We focus on the problem of developing energy efficient controllers for quadrupedal robots.
1 code implementation • 19 Jan 2021 • Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Qiuyi Zhang, Daiyi Peng, Deepali Jain, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Yuxiang Yang
In this paper, we approach the problem of optimizing blackbox functions over large hybrid search spaces consisting of both combinatorial and continuous parameters.
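For context, a minimal random-search baseline over such a hybrid space can be written as follows (this is a generic reference point, not the optimization algorithm proposed in the paper):

```python
import random

def random_search(objective, categorical_choices, continuous_bounds, n_trials=200, seed=0):
    """Random search over a hybrid space of categorical and continuous parameters.

    categorical_choices: dict name -> list of allowed values
    continuous_bounds:   dict name -> (low, high)
    objective:           blackbox function to minimize, called on a dict of parameters
    """
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(n_trials):
        x = {k: rng.choice(v) for k, v in categorical_choices.items()}
        x.update({k: rng.uniform(lo, hi) for k, (lo, hi) in continuous_bounds.items()})
        y = objective(x)                 # blackbox evaluation
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

Where methods for hybrid spaces differ is in replacing the independent sampling above with an informed update over the combinatorial and continuous parts.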
no code implementations • 14 Sep 2020 • Yuxiang Yang, Masahito Hayashi
Many quantum computational tasks have inherent symmetries, suggesting a path to enhancing their efficiency and performance.
Quantum Physics
no code implementations • 2 Mar 2020 • Xingyou Song, Yuxiang Yang, Krzysztof Choromanski, Ken Caluwaerts, Wenbo Gao, Chelsea Finn, Jie Tan
Learning adaptable policies is crucial for robots to operate autonomously in our complex and quickly changing world.
no code implementations • 14 Oct 2019 • Qiang Sun, Liting Wang, Maohui Li, Longtao Zhang, Yuxiang Yang
In modern content-based image retrieval systems, there is increasing interest in building computationally efficient models that predict the interestingness of images, since such a measure can improve human-centered search satisfaction and user experience across applications.
no code implementations • 25 Sep 2019 • Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Deepali Jain, Yuxiang Yang
We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way.
1 code implementation • ICLR 2020 • Xingyou Song, Wenbo Gao, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, Yunhao Tang
We introduce ES-MAML, a new framework for solving the model agnostic meta learning (MAML) problem based on Evolution Strategies (ES).
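In broad strokes, ES-based meta-learning estimates the meta-gradient by perturbing the meta-parameters, adapting each perturbation on a task, and scoring the adapted returns. A minimal NumPy sketch of that idea (the step sizes, sample counts, and one-step adaptation operator are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def es_grad(f, theta, sigma=0.1, n=32, rng=None):
    """Antithetic ES estimate of the gradient of a Gaussian-smoothed objective."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((n, theta.size))
    scores = np.array([f(theta + sigma * e) - f(theta - sigma * e) for e in eps])
    return eps.T @ scores / (2.0 * sigma * n)

def adapt(task_return, theta, alpha=0.05):
    """One-step adaptation: a single ES gradient step on the task return."""
    return theta + alpha * es_grad(task_return, theta)

def es_maml_step(tasks, theta, meta_lr=0.02, sigma=0.1, n=16):
    """One meta-update: ES on the average return of task-adapted parameters.

    Each task is a callable mapping a flat parameter vector to a scalar return.
    """
    def meta_objective(th):
        return np.mean([task(adapt(task, th)) for task in tasks])
    return theta + meta_lr * es_grad(meta_objective, theta, sigma=sigma, n=n)
```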
no code implementations • 10 Jul 2019 • Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Deepali Jain, Yuxiang Yang
We present a neural architecture search algorithm to construct compact reinforcement learning (RL) policies, by combining ENAS and ES in a highly scalable and intuitive way.
no code implementations • 8 Jul 2019 • Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani
We present a model-based framework for robot locomotion that achieves walking based on only 4.5 minutes (45,000 control steps) of data collected on a quadruped robot.
no code implementations • 14 Apr 2019 • Ge Bai, Yuxiang Yang, Giulio Chiribella
We design quantum compression algorithms for parametric families of tensor network states.
Quantum Physics
no code implementations • 7 Mar 2019 • Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Deepali Jain, Yuxiang Yang, Atil Iscen, Jasmine Hsu, Vikas Sindhwani
Interest in derivative-free optimization (DFO) and "evolutionary strategies" (ES) has recently surged in the Reinforcement Learning (RL) community, with growing evidence that they can match state-of-the-art methods for policy optimization problems in Robotics.
1 code implementation • 4 Mar 2019 • Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn
To this end, we introduce a method that allows for self-adaptation of learned policies: No-Reward Meta Learning (NoRML).
no code implementations • 19 Jan 2019 • Jing Zhang, Jing Tian, Yang Cao, Yuxiang Yang, Xiaobin Xu
Early recognition of abnormal rhythms in ECG signals is crucial for monitoring and diagnosing patients' cardiac conditions and for increasing the success rate of treatment.