Search Results for author: Xiaoyang Tan

Found 18 papers, 6 papers with code

M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection

1 code implementation 15 Sep 2023 Yao Yuan, Pan Gao, Xiaoyang Tan

To overcome these, we propose the M$^3$Net, i.e., the Multilevel, Mixed and Multistage attention network for Salient Object Detection (SOD).

object-detection RGB Salient Object Detection +1

ProxyFormer: Proxy Alignment Assisted Point Cloud Completion with Missing Part Sensitive Transformer

1 code implementation CVPR 2023 Shanshan Li, Pan Gao, Xiaoyang Tan, Mingqiang Wei

Specifically, we fuse information into point proxies via a feature and position extractor, and generate features for the missing point proxies from those of the existing ones.

Point Cloud Completion

Contextual Conservative Q-Learning for Offline Reinforcement Learning

no code implementations 3 Jan 2023 Ke Jiang, Jiayu Yao, Xiaoyang Tan

In this paper, we propose Contextual Conservative Q-Learning (C-CQL) to learn a robustly reliable policy through the contextual information captured via an inverse dynamics model.

Q-Learning reinforcement-learning +1

Robust Action Gap Increasing with Clipped Advantage Learning

no code implementations 20 Mar 2022 Zhe Zhang, Yaozhong Gan, Xiaoyang Tan

Advantage Learning (AL) seeks to increase the action gap between the optimal action and its competitors, so as to improve the robustness to estimation errors.
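The action-gap idea behind Advantage Learning can be illustrated with a toy tabular backup. This is a minimal sketch of the standard AL operator, not the authors' clipped variant; the Q-values, rewards, and the regularization weight `alpha` are made up for illustration.

```python
import numpy as np

def al_backup(q, rewards, next_q, gamma=0.9, alpha=0.5):
    """One Advantage-Learning-style backup on a tabular Q-function.

    Standard Bellman target plus an action-gap penalty
    alpha * (Q(s, a) - max_a Q(s, a)), which is zero for the greedy
    action and negative for all others, so competitors are pushed down.
    """
    bellman = rewards + gamma * next_q.max(axis=1, keepdims=True)
    gap_term = alpha * (q - q.max(axis=1, keepdims=True))
    return bellman + gap_term

# Toy example: one state, two actions with identical reward and next state.
q = np.array([[1.0, 0.8]])
rewards = np.array([[0.0]])
next_q = np.array([[1.0, 0.8]])
new_q = al_backup(q, rewards, next_q)
# The plain Bellman target would assign 0.9 to both actions (gap 0);
# the AL penalty lowers only the runner-up, preserving a 0.1 gap.
```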

Smoothing Advantage Learning

no code implementations 20 Mar 2022 Yaozhong Gan, Zhe Zhang, Xiaoyang Tan

Advantage learning (AL) aims to improve the robustness of value-based reinforcement learning against estimation errors with action-gap-based regularization.

Greedy-Step Off-Policy Reinforcement Learning

no code implementations 23 Feb 2021 Yuhui Wang, Qingyuan Wu, Pengcheng He, Xiaoyang Tan

Most policy evaluation algorithms are based on the Bellman Expectation and Optimality Equations, from which two popular approaches derive: Policy Iteration (PI) and Value Iteration (VI).

Q-Learning reinforcement-learning +1
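Value Iteration, one of the two approaches named above, repeatedly applies the Bellman optimality operator until the value function converges. A minimal tabular sketch on a hypothetical 2-state, 2-action MDP (transition probabilities and rewards are invented):

```python
import numpy as np

# Hypothetical MDP: P[s, a, s'] transition probabilities, R[s, a] rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator V <- max_a [R + gamma * P V]."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * P @ V          # Q[s, a], expectation over s'
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

At the fixed point, V satisfies the Bellman optimality equation, and the greedy policy with respect to it is optimal for this MDP.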

Stabilizing Q Learning Via Soft Mellowmax Operator

no code implementations 17 Dec 2020 Yaozhong Gan, Zhe Zhang, Xiaoyang Tan

Learning complicated value functions in high-dimensional state spaces by function approximation is a challenging task, partly because the max operator used in temporal-difference updates can, in theory, cause instability for most linear or non-linear approximation schemes.

Multi-agent Reinforcement Learning Q-Learning
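The mellowmax operator that this paper builds on is a known softened alternative to the hard max (a log-mean-exp with temperature ω). The sketch below shows the plain operator only, not the paper's Soft Mellowmax variant; the input values and ω are illustrative.

```python
import numpy as np

def mellowmax(x, omega=5.0):
    """Mellowmax: (1/omega) * log(mean(exp(omega * x))).

    A smooth stand-in for the hard max: it approaches the mean as
    omega -> 0 and the max as omega -> inf. A max-shift keeps the
    exponentials numerically stable.
    """
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log(np.mean(np.exp(omega * (x - m)))) / omega

vals = [1.0, 2.0, 3.0]
# mellowmax(vals) lies strictly between the mean (2.0) and the max (3.0),
# and moves toward the max as omega grows.
```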

SMIX($λ$): Enhancing Centralized Value Functions for Cooperative Multi-Agent Reinforcement Learning

1 code implementation 11 Nov 2019 Xinghu Yao, Chao Wen, Yuhui Wang, Xiaoyang Tan

Learning a stable and generalizable centralized value function (CVF) is a crucial but challenging task in multi-agent reinforcement learning (MARL), as it has to deal with the issue that the joint action space increases exponentially with the number of agents in such scenarios.

reinforcement-learning Reinforcement Learning (RL) +2
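The λ in SMIX(λ) refers to λ-returns, which blend multi-step targets for value-function training. A generic backward λ-return computation is sketched below (not the authors' full centralized-value-function method; the trajectory values are invented):

```python
import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.8):
    """Compute lambda-returns G_t backward through a trajectory.

    values[t] is the bootstrap estimate V(s_t); values has one more
    entry than rewards (the value of the final state).
    Recursion: G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}).
    """
    T = len(rewards)
    G = np.zeros(T)
    g = values[-1]                      # bootstrap from the final state
    for t in reversed(range(T)):
        g = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * g)
        G[t] = g
    return G

rewards = [1.0, 0.0, 1.0]
values = [0.5, 0.4, 0.6, 0.0]
G = lambda_returns(rewards, values)
```

Setting λ = 0 recovers one-step TD targets, while λ = 1 gives Monte Carlo returns bootstrapped at the final state, which is the usual bias-variance knob.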

Truly Proximal Policy Optimization

1 code implementation 19 Mar 2019 Yuhui Wang, Hao He, Chao Wen, Xiaoyang Tan

Proximal policy optimization (PPO) is one of the most successful deep reinforcement-learning methods, achieving state-of-the-art performance across a wide range of challenging tasks.
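The standard PPO clipped surrogate objective, which this paper revisits, can be sketched in a few lines. This is the well-known baseline objective, not the Truly PPO modification; the probability ratios and advantages below are made-up samples.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate (to be maximized):
    mean over samples of min(ratio * A, clip(ratio, 1-eps, 1+eps) * A).
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

ratio = np.array([0.8, 1.0, 1.5])       # pi_new(a|s) / pi_old(a|s) per sample
advantage = np.array([1.0, -1.0, 2.0])
obj = ppo_clip_objective(ratio, advantage)
# The third sample's ratio (1.5) is clipped to 1.2, capping its contribution
# and discouraging overly large policy updates.
```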

Robust Reinforcement Learning in POMDPs with Incomplete and Noisy Observations

no code implementations 15 Feb 2019 Yuhui Wang, Hao He, Xiaoyang Tan

In real-world scenarios, the observation data for reinforcement learning with continuous control is commonly noisy and parts of it may be dynamically missing over time, which violates the assumptions of many current methods developed for this setting.

Continuous Control Imputation +2

Trust Region-Guided Proximal Policy Optimization

2 code implementations NeurIPS 2019 Yuhui Wang, Hao He, Xiaoyang Tan, Yaozhong Gan

We formally show that this method not only improves the exploration ability within the trust region but also enjoys a better performance bound than the original PPO.

Reinforcement Learning (RL)

A Unified Gender-Aware Age Estimation

no code implementations 13 Sep 2016 Qing Tian, Songcan Chen, Xiaoyang Tan

Although it improves age estimation performance, such a concatenation not only risks confusing the semantics of gender and age, but also ignores the aging discrepancy between males and females.

Age Estimation

Face Alignment In-the-Wild: A Survey

no code implementations 15 Aug 2016 Xin Jin, Xiaoyang Tan

Over the last two decades, face alignment, i.e., localizing fiducial facial points, has received increasing attention owing to its wide-ranging applications in automatic face analysis.

Face Alignment Robust Face Alignment

Bayesian Neighbourhood Component Analysis

no code implementations 8 Apr 2016 Dong Wang, Xiaoyang Tan

Learning a good distance metric in feature space potentially improves the performance of the KNN classifier and is useful in many real-world applications.

Bayesian Optimization Metric Learning
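The metric-learning idea behind Neighbourhood Component Analysis is to learn a linear transform A and classify with KNN under the distance d(x, y) = ||Ax − Ay||. The sketch below fixes A by hand (rather than learning it, as NCA and this paper do) just to illustrate how the learned metric changes neighbours; the data and the helper name are invented.

```python
import numpy as np

def metric_knn_predict(x, X_train, y_train, A, k=3):
    """KNN prediction for one sample under d(x, y) = ||A x - A y||_2.

    The linear transform A is what NCA-style metric learning would
    optimize; here it is fixed for illustration only.
    """
    diffs = (X_train - x) @ A.T          # differences in the transformed space
    dists = np.linalg.norm(diffs, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[counts.argmax()]

X_train = np.array([[0.0, 0.0], [0.1, 10.0], [1.0, 0.0], [1.1, 10.0]])
y_train = np.array([0, 0, 1, 1])
# A down-weights the noisy second feature, so the first feature decides.
A = np.diag([1.0, 0.01])
pred = metric_knn_predict(np.array([0.2, 5.0]), X_train, y_train, A, k=1)
```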

Tri-Subject Kinship Verification: Understanding the Core of A Family

no code implementations 12 Jan 2015 Xiaoqian Qin, Xiaoyang Tan, Songcan Chen

One major challenge in computer vision is to go beyond the modeling of individual objects and to investigate the bi- (one-versus-one) or tri- (one-versus-two) relationships among multiple visual entities, answering such questions as whether a child in a photo belongs to the given parents.

feature selection

Unsupervised Feature Learning with C-SVDDNet

no code implementations 23 Dec 2014 Dong Wang, Xiaoyang Tan

To address this issue, we propose an SVDD-based feature learning algorithm that describes the density and distribution of each cluster from K-means with an SVDD ball for a more robust feature representation.

Image Classification Object Recognition
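The family of K-means feature encoders that C-SVDDNet improves on maps a sample to its (thresholded) distances from cluster centers. The sketch below uses the classic "triangle" encoding over plain centroids; replacing centroids with SVDD balls, as the paper does, is not reproduced here, and the data is invented.

```python
import numpy as np

def centroid_features(X, centroids):
    """Encode samples by closer-than-average distances to cluster centers.

    'Triangle' encoding: f_k(x) = max(0, mean_dist(x) - d_k(x)), so a
    sample activates only the centers it is closer to than average.
    """
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    mu = d.mean(axis=1, keepdims=True)
    return np.maximum(0.0, mu - d)

X = np.array([[0.0, 0.0], [4.0, 4.0]])
centroids = np.array([[0.0, 0.0], [4.0, 4.0]])
F = centroid_features(X, centroids)
# Each sample activates only its own cluster's feature; the other is zeroed.
```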
