Search Results for author: Kaidong Wang

Found 6 papers, 1 paper with code

MUC: Mixture of Uncalibrated Cameras for Robust 3D Human Body Reconstruction

no code implementations8 Mar 2024 Yitao Zhu, Sheng Wang, Mengjie Xu, Zixu Zhuang, Zhixin Wang, Kaidong Wang, Han Zhang, Qian Wang

Next, instead of simply averaging models across views, we train a network to determine the weights of individual views for their fusion, based on the parameters estimated for the joints and hands of the human body as well as the camera positions.
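The fusion step described above can be sketched as a learned convex combination of per-view estimates. This is a minimal illustration, not the authors' implementation: the logits standing in for the weighting network and the parameter dimension (72) are assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_views(per_view_params, logits):
    """Fuse per-view body-model parameters with learned view weights.

    per_view_params: (n_views, n_params) estimates from each camera
    logits:          (n_views,) raw scores from the weighting network
    """
    w = softmax(logits)
    return w @ per_view_params  # weighted combination, not a plain mean

rng = np.random.default_rng(0)
params = rng.normal(size=(4, 72))          # 72 pose parameters is an assumption
logits = np.array([2.0, 0.5, 0.5, -1.0])   # stand-in for the network's output
fused = fuse_views(params, logits)
```

With all-zero logits the weights are uniform and the fusion reduces to simple averaging, which is exactly the baseline the paper improves on.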

Provable Tensor Completion with Graph Information

no code implementations4 Oct 2023 Kaidong Wang, Yao Wang, Xiuwu Liao, Shaojie Tang, Can Yang, Deyu Meng

For the model, we establish a rigorous mathematical representation of the dynamic graph, based on which we derive a new tensor-oriented graph smoothness regularization.

Tensor Decomposition
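The classical matrix-case graph smoothness regularizer that the paper's tensor-oriented version generalizes can be shown numerically: tr(XᵀLX) equals half the adjacency-weighted sum of squared differences between connected nodes, so it penalizes signals that vary across edges. The toy graph and signal below are illustrative only.

```python
import numpy as np

# Classical graph smoothness: tr(X^T L X) = 1/2 * sum_ij W_ij ||x_i - x_j||^2,
# penalizing signals that differ across connected nodes.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])           # adjacency of a 3-node path graph
L = np.diag(W.sum(axis=1)) - W         # combinatorial graph Laplacian
X = np.array([[1.0], [1.1], [3.0]])    # node signal: nodes 1 and 2 similar

smoothness = float(X.T @ L @ X)
pairwise = 0.5 * sum(W[i, j] * np.sum((X[i] - X[j]) ** 2)
                     for i in range(3) for j in range(3))
```

The large jump between the connected nodes 2 and 3 dominates the penalty, which is what drives completion methods to favor estimates that agree along graph edges.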

Efficient Fraud Detection Using Deep Boosting Decision Trees

1 code implementation12 Feb 2023 Biao Xu, Yao Wang, Xiuwu Liao, Kaidong Wang

In this paper, we propose deep boosting decision trees (DBDT), a novel approach for fraud detection based on gradient boosting and neural networks.

Fraud Detection, Representation Learning
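The gradient-boosting half of the approach can be sketched with generic boosted regression stumps on squared loss. This is a textbook sketch, not DBDT itself, which additionally integrates neural networks into the trees; the toy 1-D target is an assumption.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-threshold split of 1-D feature x fitting residuals r."""
    best_err, best = np.inf, None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pl, pr = left.mean(), right.mean()
        err = ((left - pl) ** 2).sum() + ((right - pr) ** 2).sum()
        if err < best_err:
            best_err, best = err, (t, pl, pr)
    return best

def boost(x, y, rounds=200, lr=0.1):
    """Gradient boosting: each stump fits the current residual."""
    pred = np.zeros_like(y, dtype=float)
    for _ in range(rounds):
        t, pl, pr = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, pl, pr)
    return pred

x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x)
pred = boost(x, y)
mse = float(np.mean((y - pred) ** 2))
```

Each round shrinks the residual, so the additive ensemble steadily approximates the target; DBDT plugs differentiable, neural-network-augmented trees into this same additive scheme.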

Effective Streaming Low-tubal-rank Tensor Approximation via Frequent Directions

no code implementations23 Aug 2021 Qianxin Yi, Chenhao Wang, Kaidong Wang, Yao Wang

Low-tubal-rank tensor approximation has been proposed to analyze large-scale and multi-dimensional data.
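The paper extends the Frequent Directions sketch from matrices to streaming low-tubal-rank tensors. As background, here is a minimal matrix-case Frequent Directions sketch (following Liberty's algorithm), not the tensor variant the paper proposes:

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A into an ell-row sketch B satisfying
    ||A^T A - B^T B||_2 <= 2 * ||A||_F^2 / ell.  Assumes ell <= A.shape[1]."""
    n, d = A.shape
    B = np.zeros((ell, d))
    free = list(range(ell))            # indices of all-zero rows of B
    for row in A:
        if not free:
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[-1] ** 2         # shrink by the smallest singular value
            B = np.sqrt(np.maximum(s ** 2 - delta, 0.0))[:, None] * Vt
            free = [ell - 1]           # last row of B is now zero
        B[free.pop()] = row
    return B

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 30))
B = frequent_directions(A, ell=10)
err = np.linalg.norm(A.T @ A - B.T @ B, 2)
bound = 2 * np.linalg.norm(A, 'fro') ** 2 / 10
```

Only the ell-row sketch is kept in memory, which is what makes the approach attractive for large-scale streaming data.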

Universal Consistency of Deep Convolutional Neural Networks

no code implementations23 Jun 2021 Shao-Bo Lin, Kaidong Wang, Yao Wang, Ding-Xuan Zhou

Compared with the intense practical research activity around deep convolutional neural networks (DCNNs), the study of their theoretical behavior lags far behind.

SPLBoost: An Improved Robust Boosting Algorithm Based on Self-paced Learning

no code implementations20 Jun 2017 Kaidong Wang, Yao Wang, Qian Zhao, Deyu Meng, Zongben Xu

Specifically, the underlying loss minimized by traditional AdaBoost is the exponential loss, which has been shown to be very sensitive to random noise and outliers.
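The sensitivity claim can be seen numerically by comparing losses as a function of the margin m = y·f(x): a confidently scored but mislabeled point (margin −5) incurs an exponential loss that dwarfs its logistic loss, so AdaBoost's reweighting fixates on such outliers.

```python
import math

# Losses as a function of the margin m = y * f(x).
# A flipped label on a confident prediction f(x) = 5 gives margin -5.
margins = [2.0, 0.0, -5.0]
exp_loss = [math.exp(-m) for m in margins]                # AdaBoost's loss
log_loss = [math.log(1 + math.exp(-m)) for m in margins]  # logistic, for scale

# On the outlier, the exponential loss explodes (e^5 ~ 148) while the
# logistic loss grows only about linearly in the margin (~5).
outlier_ratio = exp_loss[2] / log_loss[2]
```

Down-weighting or dropping such high-loss samples is exactly what the self-paced learning scheme in SPLBoost is designed to do.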
