no code implementations • ICML 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • 26 Feb 2023 • Guangsheng Shi, Ruifeng Li, Chao Ma
The performance of point cloud 3D object detection hinges on effectively representing raw points, grid-based voxels, or pillars.
no code implementations • 24 Jan 2023 • Ruibo Tu, Chao Ma, Cheng Zhang
ChatGPT has demonstrated exceptional proficiency in natural language conversation, e.g., it can answer a wide range of questions that no previous large language model could.
no code implementations • 18 Jan 2023 • Zongwei Wu, Guillaume Allibert, Fabrice Meriaudeau, Chao Ma, Cédric Demonceaux
In this paper, from a new perspective, we propose a novel Hierarchical Depth Awareness network (HiDAnet) for RGB-D saliency detection.
no code implementations • 13 Oct 2022 • Shuai Jia, Bangjie Yin, Taiping Yao, Shouhong Ding, Chunhua Shen, Xiaokang Yang, Chao Ma
For face recognition attacks, existing methods typically generate l_p-norm perturbations on pixels; however, this results in low attack transferability and high vulnerability to denoising defense models.
no code implementations • 13 Oct 2022 • Chao Ma, Lexing Ying
The knowledge consists of a set of vectors in the same embedding space as the input sequence, containing the information of the language used to process the input sequence.
no code implementations • 7 Oct 2022 • Daniel Kunin, Atsushi Yamamura, Chao Ma, Surya Ganguli
We introduce the class of quasi-homogeneous models, which is expressive enough to describe nearly all neural networks with homogeneous activations, even those with biases, residual connections, and normalization layers, while structured enough to enable geometric analysis of its gradient dynamics.
no code implementations • 28 Aug 2022 • Yinglong Wang, Chao Ma, Jianzhuang Liu
Inspired by our studies, we propose to remove rain by learning favorable deraining representations from other connected tasks.
no code implementations • 4 Aug 2022 • Ming Cheng, Yiling Xu, Wang Shen, M. Salman Asif, Chao Ma, Jun Sun, Zhan Ma
We utilize a disparity network to transfer spatiotemporal information across views even in large-disparity scenes; based on this, we propose disparity-guided flow-based warping for the LSR-HFR view and complementary warping for the HSR-LFR view.
1 code implementation • 20 Jul 2022 • Shenyuan Gao, Chunluan Zhou, Chao Ma, Xinggang Wang, Junsong Yuan
However, the independent correlation computation in the attention mechanism could result in noisy and ambiguous attention weights, which inhibits further performance improvement.
Ranked #1 on Visual Object Tracking on NeedForSpeed
no code implementations • 8 Jun 2022 • Zongwei Wu, Guillaume Allibert, Christophe Stolz, Chao Ma, Cédric Demonceaux
Recent RGB-D semantic segmentation has motivated research interest thanks to the accessibility of complementary modalities from the input side.
Ranked #22 on Semantic Segmentation on NYU Depth v2
no code implementations • 7 Jun 2022 • Mingze Wang, Chao Ma
Generalization error bounds for deep neural networks trained by stochastic gradient descent (SGD) are derived by combining a dynamical control of an appropriate parameter norm and the Rademacher complexity estimate based on parameter norms.
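For orientation, bounds of this type instantiate the standard Rademacher-complexity template below; the paper's specific contribution lies in controlling the relevant parameter norm (and hence the complexity term) along the SGD trajectory. Symbols here are generic, not the paper's notation: with probability at least $1 - \delta$ over $n$ samples,

```latex
\sup_{f \in \mathcal{F}} \Big( L(f) - \hat{L}_n(f) \Big)
  \;\le\; 2\,\mathfrak{R}_n(\mathcal{F})
  \;+\; c\,\sqrt{\frac{\log(1/\delta)}{n}},
```

where $\mathfrak{R}_n(\mathcal{F})$ is the Rademacher complexity of the function class, itself bounded in terms of the parameter norms.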
1 code implementation • 5 Jun 2022 • Mingze Wang, Chao Ma
The convergence of GD and SGD when training mildly parameterized neural networks starting from random initialization is studied.
1 code implementation • 16 May 2022 • Guangsheng Shi, Ruifeng Li, Chao Ma
Real-time and high-performance 3D object detection is of critical importance for autonomous driving.
no code implementations • 24 Apr 2022 • Chao Ma, Daniel Kunin, Lei Wu, Lexing Ying
Numerically, we observe that neural network loss functions possess a multiscale structure, manifested in two ways: (1) in a neighborhood of minima, the loss mixes a continuum of scales and grows subquadratically, and (2) in a larger region, the loss clearly shows several separate scales.
no code implementations • CVPR 2022 • Shuai Jia, Chao Ma, Taiping Yao, Bangjie Yin, Shouhong Ding, Xiaokang Yang
In addition, the proposed frequency attack enhances the transferability across face forgery detectors as black-box attacks.
1 code implementation • 18 Mar 2022 • Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, Xiaokang Yang, Stefanos Zafeiriou
Learning a dense 3D model with fine-scale details from a single facial image is highly challenging and ill-posed.
no code implementations • ICLR 2022 • Chao Ma, Lexing Ying
In this paper, we study the problem of finding mixed Nash equilibrium for mean-field two-player zero-sum games.
1 code implementation • 9 Feb 2022 • Ignacio Peis, Chao Ma, José Miguel Hernández-Lobato
Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing data imputation and supervised learning with missing features.
1 code implementation • 4 Feb 2022 • Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment and policy making.
no code implementations • CVPR 2022 • Yihan Zeng, Da Zhang, Chunwei Wang, Zhenwei Miao, Ting Liu, Xin Zhan, Dayang Hao, Chao Ma
LiDAR and camera are two common sensors to collect data in time for 3D object detection under the autonomous driving context.
no code implementations • CVPR 2022 • Junyi Cao, Chao Ma, Taiping Yao, Shen Chen, Shouhong Ding, Xiaokang Yang
Reconstruction learning over real images enhances the learned representations to be aware of forgery patterns that are even unknown, while classification learning takes charge of mining the essential discrepancy between real and fake images, facilitating the understanding of forgeries.
no code implementations • NeurIPS 2021 • Zeng Yihan, Chunwei Wang, Yunbo Wang, Hang Xu, Chaoqiang Ye, Zhen Yang, Chao Ma
First, 3D-CoCo is inspired by our observation that bird's-eye-view (BEV) features are more transferable than low-level geometry features.
no code implementations • NeurIPS 2021 • Chao Ma, José Miguel Hernández-Lobato
In this paper, we propose a new solution to this problem called Functional Variational Inference (FVI).
1 code implementation • NeurIPS 2021 • Chao Ma, Cheng Zhang
In this work, we fill in this gap by systematically analyzing the identifiability of generative models under MNAR.
no code implementations • 17 Oct 2021 • Chao Ma, Lexing Ying
Later, the infinite-width limit of the two-layer neural networks with BN is considered, and a mean-field formulation is derived for the training dynamics.
no code implementations • 10 Oct 2021 • Zongwei Wu, Guillaume Allibert, Christophe Stolz, Chao Ma, Cédric Demonceaux
Recent RGBD-based models for saliency detection have attracted research attention.
1 code implementation • ICCV 2021 • Jilai Zheng, Chao Ma, Houwen Peng, Xiaokang Yang
In this paper, we propose to learn an Unsupervised Single Object Tracker (USOT) from scratch.
no code implementations • CVPR 2021 • Chunwei Wang, Chao Ma, Ming Zhu, Xiaokang Yang
On one hand, PointAugmenting decorates point clouds with corresponding point-wise CNN features extracted by pretrained 2D detection models, and then performs 3D object detection over the decorated point clouds.
no code implementations • CVPR 2021 • Yinglong Wang, Chao Ma, Bing Zeng
In this work, we aim to exploit the intrinsic priors of rainy images and develop intrinsic loss functions to facilitate training deraining networks, which decompose a rainy image into a rain-free background layer and a rainy layer containing intact rain streaks.
no code implementations • CVPR 2021 • Yangye Fu, Ming Zhang, Xing Xu, Zuo Cao, Chao Ma, Yanli Ji, Kai Zuo, Huimin Lu
By assuming that the source and target domains share consistent key feature representations and identical label space, existing studies on MSDA typically utilize the entire union set of features from both the source and target domains to obtain the feature map and align the map for each category and domain.
1 code implementation • 5 Jun 2021 • Zeyu Yan, Fei Wen, Rendong Ying, Chao Ma, Peilin Liu
This paper provides nontrivial results theoretically revealing that: (1) the cost of achieving perfect perception quality is exactly a doubling of the lowest achievable MSE distortion; (2) an optimal encoder for the "classic" rate-distortion problem is also optimal for the perceptual compression problem; (3) distortion loss is unnecessary for training a perceptual decoder.
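Stated as a formula, the first result says that at any fixed rate, insisting on perfect perceptual quality costs exactly a factor of two in distortion (schematic notation, not the paper's):

```latex
D_{\text{perfect perception}}(R) \;=\; 2\, D_{\min}(R),
```

where $D_{\min}(R)$ is the classical (unconstrained) MSE distortion–rate function.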
1 code implementation • NeurIPS 2021 • Chao Ma, Lexing Ying
The multiplicative structure of parameters and input data in the first layer of neural networks is explored to build connection between the landscape of the loss function with respect to parameters and the landscape of the model function with respect to input data.
no code implementations • 2 Apr 2021 • Ze Ma, Yifan Yao, Pan Ji, Chao Ma
Estimating 3D human pose and shape from a single image is highly under-constrained.
no code implementations • 30 Mar 2021 • Yuqing Li, Tao Luo, Chao Ma
In an attempt to better understand structural benefits and generalization power of deep neural networks, we first present a novel graph theoretical formulation of neural network models, including fully connected, residual network (ResNet) and densely connected networks (DenseNet).
1 code implementation • CVPR 2021 • Shuai Jia, Yibing Song, Chao Ma, Xiaokang Yang
Recently, adversarial attack has been applied to visual object tracking to evaluate the robustness of deep trackers.
1 code implementation • 19 Feb 2021 • Weixia Zhang, Dingquan Li, Chao Ma, Guangtao Zhai, Xiaokang Yang, Kede Ma
In this paper, we formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets, building on what was learned from previously seen data.
no code implementations • 14 Dec 2020 • Chao Ma, Lexing Ying
A new understanding of adversarial examples and adversarial robustness is proposed by decoupling the data generator and the label generator (which we call the teacher).
no code implementations • 2 Dec 2020 • Weijie He, Xiaohao Mao, Chao Ma, Yu Huang, José Miguel Hernández-Lobato, Ting Chen
To address the challenge, we propose a non-RL Bipartite Scalable framework for Online Disease diAgnosis, called BSODA.
no code implementations • 22 Nov 2020 • Weixia Zhang, Chao Ma, Qi Wu, Xiaokang Yang
We then propose to recursively alternate the learning schemes of imitation and exploration to narrow the discrepancy between training and inference.
1 code implementation • 17 Oct 2020 • Shuai Xie, Zunlei Feng, Ying Chen, Songtao Sun, Chao Ma, Mingli Song
To deal with this problem, we propose a semantic Difficulty-awarE Active Learning (DEAL) network composed of two branches: the common segmentation branch and the semantic difficulty branch.
no code implementations • NeurIPS 2020 • Pan Zhou, Jiashi Feng, Chao Ma, Caiming Xiong, Steven Hoi, Weinan E
The result shows that (1) the escaping time of both SGD and Adam depends on the Radon measure of the basin positively and the heaviness of gradient noise negatively; (2) for the same basin, SGD enjoys smaller escaping time than Adam, mainly because (a) the geometry adaptation in Adam via adaptively scaling each gradient coordinate well diminishes the anisotropic structure in gradient noise and results in larger Radon measure of a basin; (b) the exponential gradient average in Adam smooths its gradient and leads to lighter gradient noise tails than SGD.
1 code implementation • 11 Oct 2020 • Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, George Karypis
To minimize the overheads associated with distributed computations, DistDGL uses a high-quality and light-weight min-cut graph partitioning algorithm along with multiple balancing constraints.
no code implementations • 11 Oct 2020 • Chao Ma, Guohua Gu, Xin Miao, Minjie Wan, Weixian Qian, Kan Ren, Qian Chen
Infrared target tracking plays an important role in both civil and military fields.
no code implementations • 10 Oct 2020 • Ruixue Tang, Chao Ma
There are two main lines of research on visual question answering (VQA): compositional model with explicit multi-hop reasoning, and monolithic network with implicit reasoning in the latent feature space.
no code implementations • 22 Sep 2020 • Weinan E, Chao Ma, Stephan Wojtowytsch, Lei Wu
The purpose of this article is to review the achievements made in the last few years towards the understanding of the reasons behind the success and subtleties of neural network-based machine learning.
no code implementations • 14 Sep 2020 • Zhong Li, Chao Ma, Lei Wu
The approach is motivated by approximating the general activation functions with one-dimensional ReLU networks, which reduces the problem to the complexity controls of ReLU networks.
no code implementations • 14 Sep 2020 • Chao Ma, Lei Wu, Weinan E
The dynamic behavior of RMSprop and Adam algorithms is studied through a combination of careful numerical experiments and theoretical explanations.
no code implementations • 16 Aug 2020 • Ming Zhu, Chao Ma, Pan Ji, Xiaokang Yang
In this paper, we focus on exploring the fusion of images and point clouds for 3D object detection in view of the complementary nature of the two modalities, i.e., images possess more semantic information while point clouds specialize in distance sensing.
no code implementations • 13 Aug 2020 • Chao Ma, Lei Wu, Weinan E
The random feature model exhibits a kind of resonance behavior when the number of parameters is close to the training sample size.
1 code implementation • ECCV 2020 • Yinglong Wang, Yibing Song, Chao Ma, Bing Zeng
Single image deraining regards an input image as a fusion of a background image, a transmission map, rain streaks, and atmosphere light.
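Written as a formula, a common fusion model along these lines is the following (schematic; the paper's exact formulation may differ):

```latex
O \;=\; T \odot (B + R) \;+\; (1 - T) \odot A,
```

where $O$ is the observed rainy image, $B$ the background, $R$ the rain streaks, $T$ the transmission map, and $A$ the atmospheric light.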
1 code implementation • 22 Jul 2020 • Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Wei Liu, Houqiang Li
The advancement of visual tracking has been continuously driven by deep learning models.
1 code implementation • ECCV 2020 • Shuai Jia, Chao Ma, Yibing Song, Xiaokang Yang
On one hand, we add the temporal perturbations into the original video sequences as adversarial examples to greatly degrade the tracking performance.
1 code implementation • ECCV 2020 • Ruixue Tang, Chao Ma, Wei Emma Zhang, Qi Wu, Xiaokang Yang
However, there are few works studying the data augmentation problem for VQA, and none of the existing image-based augmentation schemes (such as rotation and flipping) can be directly applied to VQA due to its semantic structure -- an ⟨image, question, answer⟩ triplet needs to be maintained correctly.
1 code implementation • 25 Jun 2020 • Chao Ma, Lei Wu, Weinan E
A numerical and phenomenological study of the gradient descent (GD) algorithm for training two-layer neural network models is carried out for different parameter regimes when the target function can be accurately approximated by a relatively small number of neurons.
2 code implementations • NeurIPS 2020 • Chao Ma, Sebastian Tschiatschek, José Miguel Hernández-Lobato, Richard Turner, Cheng Zhang
Deep generative models often perform poorly in real-world applications due to the heterogeneity of natural data sets.
1 code implementation • 18 Apr 2020 • Da Zheng, Xiang Song, Chao Ma, Zeyuan Tan, Zihao Ye, Jin Dong, Hao Xiong, Zheng Zhang, George Karypis
Experiments on knowledge graphs consisting of over 86M nodes and 338M edges show that DGL-KE can compute embeddings in 100 minutes on an EC2 instance with 8 GPUs and 30 minutes on an EC2 cluster with 4 machines with 48 cores/machine.
no code implementations • 11 Mar 2020 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying
Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global.
1 code implementation • CVPR 2019 • Xiankai Lu, Wenguan Wang, Chao Ma, Jianbing Shen, Ling Shao, Fatih Porikli
We introduce a novel network, called CO-attention Siamese Network (COSNet), to address the unsupervised video object segmentation task from a holistic view.
Semantic Segmentation • Unsupervised Video Object Segmentation • +2
no code implementations • 30 Dec 2019 • Weinan E, Chao Ma, Lei Wu
We demonstrate that conventional machine learning models and algorithms, such as the random feature model, the two-layer neural network model and the residual neural network model, can all be recovered (in a scaled form) as particular discretizations of different continuous formulations.
no code implementations • 15 Dec 2019 • Weinan E, Chao Ma, Lei Wu
We study the generalization properties of minimum-norm solutions for three over-parametrized machine learning models including the random feature model, the two-layer neural network model and the residual network model.
no code implementations • 25 Nov 2019 • Yinglong Wang, Chao Ma, Bing Zeng
Different rain models and novel network structures have been proposed to remove rain streaks from single rainy images.
no code implementations • NeurIPS 2019 • Lei Wu, Qingcan Wang, Chao Ma
We analyze the global convergence of gradient descent for deep linear residual networks by proposing a new initialization: zero-asymmetric (ZAS) initialization.
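A minimal numpy sketch of the zero-initialization idea for a deep linear residual network: with every residual branch initialized to zero, the network starts as the exact identity map, and plain gradient descent can then decrease the loss from this initialization. This illustrates only the "zero" part; the paper's asymmetric treatment of the output layer, and its global convergence proof, are beyond this sketch. All names and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, depth, lr = 5, 10, 0.02

# Deep linear residual network: x -> (I + W_depth) ... (I + W_1) x,
# with every residual branch initialized to zero (the "zero" part of ZAS).
W = [np.zeros((d, d)) for _ in range(depth)]

def forward(x):
    h = x
    for Wl in W:
        h = h + Wl @ h
    return h

x = rng.standard_normal((d, 20))                    # 20 training inputs
T = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # target linear map
y = T @ x

def loss():
    return 0.5 * np.mean(np.sum((forward(x) - y) ** 2, axis=0))

init_is_identity = bool(np.allclose(forward(x), x))  # exact identity at init

def grads():
    # Manual backprop through the residual chain.
    hs = [x]
    for Wl in W:
        hs.append(hs[-1] + Wl @ hs[-1])
    delta = (hs[-1] - y) / x.shape[1]
    gs = [None] * depth
    for l in reversed(range(depth)):
        gs[l] = delta @ hs[l].T          # gradient w.r.t. W[l]
        delta = delta + W[l].T @ delta   # propagate through the skip connection
    return gs

loss0 = loss()
for _ in range(50):
    g = grads()
    for l in range(depth):
        W[l] -= lr * g[l]
loss_final = loss()
```

Note the design point the initialization exploits: at zero, the product of residual blocks is the identity, so gradients do not vanish or explode with depth.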
no code implementations • Approximate Inference (AABI) Symposium 2019 • Chao Ma, Sebastian Tschiatschek, Yingzhen Li, Richard Turner, José Miguel Hernández-Lobato, Cheng Zhang
In this paper, we focus on improving VAEs for real-valued data that has heterogeneous marginal distributions.
no code implementations • 25 Sep 2019 • Zhongpai Gao, Juyong Zhang, Yudong Guo, Chao Ma, Guangtao Zhai, Xiaokang Yang
Moreover, the identity and expression representations are entangled in these models, which hurdles many facial editing applications.
7 code implementations • 3 Sep 2019 • Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, Zheng Zhang
Advancing research in the emerging field of deep graph learning requires new tools to support tensor computation over graphs.
Ranked #32 on Node Classification on Cora
1 code implementation • 23 Jul 2019 • Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Houqiang Li
In the distillation process, we propose a fidelity loss to enable the student network to maintain the representation capability of the teacher network.
no code implementations • 22 Jun 2019 • Yinglong Wang, Qinfeng Shi, Ehsan Abbasnejad, Chao Ma, Xiaoping Ma, Bing Zeng
Instead of using the estimated atmospheric light directly to learn a network that calculates transmission, we utilize it as ground truth and design a simple but novel triangle-shaped network structure to learn atmospheric light for every rainy image. We then fine-tune the network to obtain a better estimation of atmospheric light during the training of the transmission network.
no code implementations • 18 Jun 2019 • Weinan E, Chao Ma, Lei Wu
We define the Barron space and show that it is the right space for two-layer neural network models in the sense that optimal direct and inverse approximation theorems hold for functions in the Barron space.
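From the authors' related work, the Barron norm can be written roughly as follows (schematic; see the paper for the precise definition). For functions admitting the two-layer representation

```latex
f(x) \;=\; \mathbb{E}_{(a, b, c) \sim \rho}\!\left[ a\, \sigma(b^{\top} x + c) \right],
\qquad
\|f\|_{\mathcal{B}} \;=\; \inf_{\rho}\; \mathbb{E}_{\rho}\!\left[ |a| \,\big( \|b\|_{1} + |c| \big) \right],
```

the direct approximation theorem then states that an $m$-neuron two-layer network $f_m$ achieves $\|f - f_m\|^2 \lesssim \|f\|_{\mathcal{B}}^2 / m$.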
no code implementations • ICLR 2019 • Lei Wu, Chao Ma, Weinan E
These new estimates are a priori in nature in the sense that the bounds depend only on some norms of the underlying functions to be fitted, not the parameters in the model.
no code implementations • 10 Apr 2019 • Weinan E, Chao Ma, Qingcan Wang, Lei Wu
In addition, it is also shown that the GD path is uniformly close to the functions given by the related random feature model.
no code implementations • 8 Apr 2019 • Weinan E, Chao Ma, Lei Wu
In the over-parametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels.
no code implementations • CVPR 2019 • Xin Li, Chao Ma, Baoyuan Wu, Zhenyu He, Ming-Hsuan Yang
Despite demonstrated successes for numerous vision tasks, the contributions of using pre-trained deep features for visual tracking are not as significant as those for object recognition.
1 code implementation • CVPR 2019 • Ning Wang, Yibing Song, Chao Ma, Wengang Zhou, Wei Liu, Houqiang Li
We propose an unsupervised visual tracking method in this paper.
5 code implementations • CVPR 2019 • Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang, Zhiyong Gao, Ming-Hsuan Yang
The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels for synthesizing the output frame.
Ranked #5 on Video Frame Interpolation on Middlebury
no code implementations • 6 Mar 2019 • Weinan E, Chao Ma, Qingcan Wang
An important part of the regularized model is the usage of a new path norm, called the weighted path norm, as the regularization term.
1 code implementation • NeurIPS 2018 • Lei Wu, Chao Ma, Weinan E
The question of which global minima are accessible by a stochastic gradient descent (SGD) algorithm with specific learning rate and batch size is studied from the perspective of dynamical stability.
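The stability mechanism can be illustrated in one dimension, where gradient descent on a quadratic of sharpness a diverges exactly when a > 2/η. This is only the full-batch GD special case; the paper's analysis extends it to SGD, where batch size and gradient noise tighten the condition further. A minimal sketch:

```python
def gd_final(a, eta, steps=100, x0=1.0):
    """Run gradient descent on f(x) = a*x^2/2; iterates follow x <- (1 - eta*a)*x."""
    x = x0
    for _ in range(steps):
        x -= eta * a * x
    return x

eta = 0.1  # learning rate, so the stability threshold on sharpness is 2/eta = 20
stable = abs(gd_final(19.0, eta))    # sharpness below threshold: shrinks toward 0
unstable = abs(gd_final(21.0, eta))  # sharpness above threshold: blows up
```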
no code implementations • ICLR 2019 • Weinan E, Chao Ma, Lei Wu
New estimates for the population risk are established for two-layer neural networks.
1 code implementation • 12 Oct 2018 • Chao Ma, Tamir Bendory, Nicolas Boumal, Fred Sigworth, Amit Singer
In this problem, the goal is to estimate a (typically small) set of target images from a (typically large) collection of observations.
no code implementations • NeurIPS 2018 • Shi Pu, Yibing Song, Chao Ma, Honggang Zhang, Ming-Hsuan Yang
Visual attention, derived from cognitive neuroscience, facilitates human perception on the most pertinent subset of the sensory data.
no code implementations • 8 Oct 2018 • Chen Zhu, HengShu Zhu, Hui Xiong, Chao Ma, Fang Xie, Pengliang Ding, Pan Li
To this end, in this paper, we propose a novel end-to-end data-driven model based on Convolutional Neural Network (CNN), namely Person-Job Fit Neural Network (PJFNN), for matching a talent qualification to the requirements of a job.
1 code implementation • ICLR 2019 • Chao Ma, Sebastian Tschiatschek, Konstantina Palla, José Miguel Hernández-Lobato, Sebastian Nowozin, Cheng Zhang
Many real-life decision-making situations allow further relevant information to be acquired at a specific cost, for example, in assessing the health status of a patient we may decide to take additional measurements such as diagnostic tests or imaging scans before making a final assessment.
1 code implementation • ECCV 2018 • Xiankai Lu, Chao Ma, Bingbing Ni, Xiaokang Yang, Ian Reid, Ming-Hsuan Yang
Regression trackers directly learn a mapping from regularly dense samples of target objects to soft labels, which are usually generated by a Gaussian function, to estimate target positions.
no code implementations • 10 Aug 2018 • Chao Ma, Jianchun Wang, Weinan E
The well-known Mori-Zwanzig theory tells us that model reduction leads to a memory effect.
no code implementations • COLING 2018 • Hamed Shahbazi, Xiaoli Z. Fern, Reza Ghaeini, Chao Ma, Rasha Obeidat, Prasad Tadepalli
In this paper, we present a novel model for entity disambiguation that combines both local contextual information and global evidences through Limited Discrepancy Search (LDS).
1 code implementation • 6 Jun 2018 • Chao Ma, Yingzhen Li, José Miguel Hernández-Lobato
We introduce implicit processes (IPs), stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables.
no code implementations • CVPR 2018 • Yibing Song, Chao Ma, Xiaohe Wu, Lijun Gong, Linchao Bao, WangMeng Zuo, Chunhua Shen, Rynson Lau, Ming-Hsuan Yang
To augment positive samples, we use a generative network to randomly generate masks, which are applied to adaptively dropout input features to capture a variety of appearance changes.
no code implementations • ICCV 2017 • Yibing Song, Chao Ma, Lijun Gong, Jiawei Zhang, Rynson Lau, Ming-Hsuan Yang
Our method integrates feature extraction, response map generation as well as model update into the neural networks for an end-to-end training.
no code implementations • CVPR 2018 • Chao Ma, Chunhua Shen, Anthony Dick, Qi Wu, Peng Wang, Anton Van Den Hengel, Ian Reid
In this paper, we exploit a memory-augmented neural network to predict accurate answers to visual questions, even when those answers occur rarely in the training set.
1 code implementation • 12 Jul 2017 • Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang
Specifically, we learn adaptive correlation filters on the outputs from each convolutional layer to encode the target appearance.
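The correlation filter learned on each layer's output has a standard DCF/MOSSE-style closed form in the Fourier domain. Below is a minimal single-channel numpy sketch of that closed form; real trackers operate on multi-channel CNN features with cosine windowing, which is omitted here, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 32, 1e-2  # patch size and regularization weight

# Training patch: a stand-in for one convolutional feature channel.
x = rng.random((n, n))

# Desired response: a Gaussian peaked at the patch centre.
u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
y = np.exp(-((u - n // 2) ** 2 + (v - n // 2) ** 2) / (2 * 2.0 ** 2))

# Closed-form regularized correlation filter in the Fourier domain:
#   W = (Y . conj(X)) / (X . conj(X) + lam)
X, Y = np.fft.fft2(x), np.fft.fft2(y)
Wf = (Y * np.conj(X)) / (X * np.conj(X) + lam)

def respond(z):
    """Response map of the filter applied to a search patch z."""
    return np.real(np.fft.ifft2(Wf * np.fft.fft2(z)))

# On the training patch itself the response peaks at the centre.
peak = np.unravel_index(np.argmax(respond(x)), (n, n))
```

The FFT-domain division is what makes correlation-filter trackers fast: training and detection are elementwise operations instead of a dense regression.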
1 code implementation • 7 Jul 2017 • Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang
Second, we learn a correlation filter over a feature pyramid centered at the estimated target position for predicting scale changes.
no code implementations • CVPR 2017 • Rui Yang, Bingbing Ni, Chao Ma, Yi Xu, Xiaokang Yang
We introduce a Multiple Granularity Analysis framework for video segmentation in a coarse-to-fine manner.
1 code implementation • 23 Jan 2017 • Yichao Yan, Bingbing Ni, Zhichao Song, Chao Ma, Yan Yan, Xiaokang Yang
We address the person re-identification problem by effectively exploiting a globally discriminative feature representation from a sequence of tracked human regions/patches.
2 code implementations • 18 Dec 2016 • Chao Ma, Chih-Yuan Yang, Xiaokang Yang, Ming-Hsuan Yang
Numerous single-image super-resolution algorithms have been proposed in the literature, but few studies address the problem of performance evaluation based on visual perception.
no code implementations • 24 Mar 2016 • Chao Ma, Tiancheng Hou, Bin Lan, Jinhui Xu, Zhenhua Zhang
Experimental data shows that DEFE is able to train an ensemble of discriminative feature learners that boosts the performance of the final prediction.
no code implementations • ICCV 2015 • Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang
The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations.
no code implementations • CVPR 2015 • Chao Ma, Xiaokang Yang, Chongyang Zhang, Ming-Hsuan Yang
In this paper, we address the problem of long-term visual tracking where the target objects undergo significant appearance variation due to deformation, abrupt motion, heavy occlusion and out-of-the-view.