1 code implementation • 8 Jun 2023 • Tianzhe Chu, Shengbang Tong, Tianjiao Ding, Xili Dai, Benjamin David Haeffele, Rene Vidal, Yi Ma
In this paper, we propose a novel image clustering pipeline that leverages the powerful feature representations of large pre-trained models such as CLIP to cluster images effectively and efficiently at scale.
1 code implementation • 1 Jun 2023 • Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin D. Haeffele, Yi Ma
Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens.
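The alternation described here can be caricatured in a few lines of numpy. The following is an illustrative sketch, not the paper's actual architecture: a single-head softmax-attention step that pulls each token toward a weighted average of similar tokens (compression), followed by elementwise soft-thresholding (sparsification); all sizes and step parameters are made up.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_step(Z, step=0.5):
    # Compression-like step: each token moves toward a softmax-weighted
    # average of the tokens most similar to it (a single-head caricature
    # of self-attention).
    A = softmax(Z.T @ Z, axis=0)          # (n, n) attention weights per token
    return (1 - step) * Z + step * (Z @ A)

def sparsify_step(Z, lam=0.1):
    # Sparsification-like step: elementwise soft-thresholding (one ISTA
    # step), standing in for the MLP's sparsifying role in the block.
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 16))              # d=8 feature dims, n=16 tokens
Z_out = sparsify_step(attention_step(Z))  # one "block" of the sketch
```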
1 code implementation • 2 May 2023 • Michael Psenka, Druv Pai, Vishal Raman, Shankar Sastry, Yi Ma
This work proposes an algorithm for explicitly constructing a pair of neural networks that linearize and reconstruct an embedded submanifold, from finite samples of this manifold.
1 code implementation • 8 Apr 2023 • Shengbang Tong, Yubei Chen, Yi Ma, Yann Lecun
Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representations.
1 code implementation • 9 Mar 2023 • Mitsuhiko Nakamoto, Yuexiang Zhai, Anikait Singh, Max Sobol Mark, Yi Ma, Chelsea Finn, Aviral Kumar, Sergey Levine
Our approach, calibrated Q-learning (Cal-QL) accomplishes this by learning a conservative value function initialization that underestimates the value of the learned policy from offline data, while also being calibrated, in the sense that the learned Q-values are at a reasonable scale.
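The calibration amounts to a one-line clip applied to a CQL-style conservative regularizer; below is a schematic sketch with made-up array values (the real method operates on batched network outputs):

```python
import numpy as np

def conservative_regularizer(q_policy, q_data, v_ref, calibrated=True):
    # CQL-style regularizer: push Q down on policy actions and up on
    # dataset actions.  Cal-QL's modification: clip the pushed-down
    # values from below at a reference value estimate, so the learned
    # Q-function is conservative yet stays at a reasonable scale.
    pushed_down = np.maximum(q_policy, v_ref) if calibrated else q_policy
    return pushed_down.mean() - q_data.mean()

q_policy = np.array([-5.0, -3.0, 0.5])  # Q at actions from the learned policy
q_data   = np.array([1.0, 0.0, 2.0])    # Q at actions from the offline dataset
v_ref    = np.zeros(3)                  # reference value estimates per state

plain = conservative_regularizer(q_policy, q_data, v_ref, calibrated=False)
cal   = conservative_regularizer(q_policy, q_data, v_ref, calibrated=True)
```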
no code implementations • 18 Feb 2023 • Xili Dai, Ke Chen, Shengbang Tong, Jingyuan Zhang, Xingjian Gao, Mingyang Li, Druv Pai, Yuexiang Zhai, Xiaojun Yuan, Heung-Yeung Shum, Lionel M. Ni, Yi Ma
Our method is arguably the first to demonstrate that a concatenation of multiple convolution sparse coding/decoding layers leads to an interpretable and effective autoencoder for modeling the distribution of large-scale natural image datasets.
no code implementations • 26 Jan 2023 • Jinfei Wang, Yi Ma, Na Yi, Rahim Tafazolli
The design of iterative linear precoding is recently challenged by extremely large aperture array (ELAA) systems, where conventional preconditioning techniques could hardly improve the channel condition.
no code implementations • 25 Jan 2023 • Siqi Zhang, Na Yi, Yi Ma
When the number of subgraphs is maximized, the proposed subset selection approach is shown to be optimum in the AWGN channel.
no code implementations • 4 Jan 2023 • Tianjiao Ding, Shengbang Tong, Kwan Ho Ryan Chan, Xili Dai, Yi Ma, Benjamin D. Haeffele
Clustering data lying close to a union of low-dimensional manifolds, with each manifold as a cluster, is a fundamental problem in machine learning.
no code implementations • 24 Dec 2022 • Qiyang Li, Yuexiang Zhai, Yi Ma, Sergey Levine
Under mild regularity conditions on the curriculum, we show that sequentially solving each task in the multi-task RL problem is more computationally efficient than solving the original single-task problem, without any explicit exploration bonuses or other exploration strategies.
no code implementations • 28 Nov 2022 • Chen Chen, Hongyao Tang, Yi Ma, Chao Wang, Qianli Shen, Dong Li, Jianye Hao
The key idea of SA-PP is leveraging discounted stationary state distribution ratios between the learning policy and the offline dataset to modulate the degree of behavior regularization in a state-wise manner, so that pessimism can be implemented in a more appropriate way.
1 code implementation • 30 Oct 2022 • Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann Lecun, Yi Ma
This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes.
1 code implementation • 24 Oct 2022 • Xili Dai, Mingyang Li, Pengyuan Zhai, Shengbang Tong, Xingjian Gao, Shao-Lun Huang, Zhihui Zhu, Chong You, Yi Ma
We show that such models have equally strong empirical performance on CIFAR-10, CIFAR-100, and ImageNet datasets when compared to conventional neural networks.
1 code implementation • 10 Oct 2022 • Haozhi Qi, Ashish Kumar, Roberto Calandra, Yi Ma, Jitendra Malik
Generalized in-hand manipulation has long been an unsolved challenge of robotics.
no code implementations • 30 Sep 2022 • Yubei Chen, Zeyu Yun, Yi Ma, Bruno Olshausen, Yann Lecun
Though there remains a small performance gap between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled and white-box approach to unsupervised learning.
Ranked #1 on Unsupervised MNIST on MNIST
no code implementations • 14 Sep 2022 • Kaidi Wang, Yi Ma, Mahdi Boloursaz Mashhadi, Chuan Heng Foh, Rahim Tafazolli, Zhi Ding
In this paper, federated learning (FL) over wireless networks is investigated.
no code implementations • 4 Aug 2022 • Jinfei Wang, Yi Ma, Na Yi, Rahim Tafazolli, Fei Tong
The basic concept of COP is to apply vector perturbation (VP) in the constellation domain instead of the symbol domain, as is done in conventional techniques.
no code implementations • 4 Aug 2022 • Jinfei Wang, Yi Ma, Na Yi, Rahim Tafazolli
With imperfect CSIT, the proposed approach can still provide remarkable user capacity at limited cost of transmit-power efficiency.
1 code implementation • 13 Jul 2022 • Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, Michael I. Jordan
Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation.
no code implementations • 11 Jul 2022 • Yi Ma, Doris Tsao, Heung-Yeung Shum
Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of Intelligence in general.
1 code implementation • 18 Jun 2022 • Druv Pai, Michael Psenka, Chih-Yuan Chiu, Manxi Wu, Edgar Dobriban, Yi Ma
We consider the problem of learning discriminative representations for data in a high-dimensional space with distribution supported on or around multiple low-dimensional linear subspaces.
no code implementations • 6 Jun 2022 • Yaodong Yu, Stephen Bates, Yi Ma, Michael I. Jordan
Uncertainty quantification is essential for the reliable deployment of machine learning models to high-stakes application domains.
no code implementations • 6 Apr 2022 • Tong Sang, Hongyao Tang, Yi Ma, Jianye Hao, Yan Zheng, Zhaopeng Meng, Boyan Li, Zhen Wang
In the online adaptation phase, with the environment context inferred from a few experiences collected in new environments, the policy is optimized by gradient ascent with respect to the PDVF.
no code implementations • CVPR 2022 • Christina Baek, Ziyang Wu, Kwan Ho Ryan Chan, Tianjiao Ding, Yi Ma, Benjamin D. Haeffele
The principle of Maximal Coding Rate Reduction (MCR$^2$) has recently been proposed as a training objective for learning discriminative low-dimensional structures intrinsic to high-dimensional data to allow for more robust training than standard approaches, such as cross-entropy minimization.
1 code implementation • 11 Feb 2022 • Yaodong Yu, Zitong Yang, Alexander Wei, Yi Ma, Jacob Steinhardt
Projection Norm first uses model predictions to pseudo-label test samples and then trains a new model on the pseudo-labels.
1 code implementation • 11 Feb 2022 • Shengbang Tong, Xili Dai, Ziyang Wu, Mingyang Li, Brent Yi, Yi Ma
Our method is simpler than existing approaches for incremental learning, and more efficient in terms of model size, storage, and computation: it requires only a single, fixed-capacity autoencoding network with a feature space that is used for both discriminative and generative purposes.
no code implementations • 19 Jan 2022 • Jinfei Wang, Yi Ma, Na Yi, Rahim Tafazolli, Fan Wang
Finally, it is shown that the network-ELAA can offer significant coverage extension (50% or more in most cases) when compared with the single-AP scenario.
no code implementations • 4 Jan 2022 • Yi Ma, Yongqi Zhai, Ronggang Wang
In this paper, we propose the first learned fine-grained scalable image compression model (DeepFGS) to overcome the above two shortcomings.
1 code implementation • CVPR 2022 • Xinyu Zhou, Peiqi Duan, Yi Ma, Boxin Shi
This paper proposes to use neuromorphic events for correcting rolling shutter (RS) images as consecutive global shutter (GS) frames.
no code implementations • NeurIPS 2021 • Yi Ma, Xiaotian Hao, Jianye Hao, Jiawen Lu, Xing Liu, Tong Xialiang, Mingxuan Yuan, Zhigang Li, Jie Tang, Zhaopeng Meng
To address this problem, existing methods partition the overall DPDP into fixed-size sub-problems by caching online generated orders and solve each sub-problem, or, on this basis, utilize predicted future orders to further optimize each sub-problem.
1 code implementation • 12 Nov 2021 • Xili Dai, Shengbang Tong, Mingyang Li, Ziyang Wu, Michael Psenka, Kwan Ho Ryan Chan, Pengyuan Zhai, Yaodong Yu, Xiaojun Yuan, Heung Yeung Shum, Yi Ma
In particular, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space that consists of multiple independent multi-dimensional linear subspaces.
1 code implementation • Physiological Measurement 2021 • Jizuo Li, Jiajun Yuan, Hansong Wang, Shijian Liu, Qianyu Guo, Yi Ma, Yongfu Li, Liebin Zhao, Guoxing Wang
We propose a deep learning architecture, LungAttn, which incorporates augmented attention convolution into a ResNet block to improve the classification accuracy of lung sounds.
1 code implementation • 3 Oct 2021 • Yi Ma, Kong Aik Lee, Ville Hautamaki, Haizhou Li
Speech enhancement aims to improve the perceptual quality of the speech signal by suppression of the background noise.
no code implementations • 3 Oct 2021 • Jinfei Wang, Yi Ma, Na Yi, Rahim Tafazolli, Zhibo Pang
In addition, a combinatorial approach of the MF beamforming and grouped space-time block code (G-STBC) is proposed to further mitigate the detrimental impact of the CSIT uncertainty.
no code implementations • 29 Sep 2021 • Yaodong Yu, Heinrich Jiang, Dara Bahri, Hossein Mobahi, Seungyeon Kim, Ankit Singh Rawat, Andreas Veit, Yi Ma
Concretely, we show that larger models and larger datasets need to be simultaneously leveraged to improve OOD performance.
1 code implementation • 8 Jul 2021 • Yuexiang Zhai, Christina Baek, Zhengyuan Zhou, Jiantao Jiao, Yi Ma
In both OWSP and OWMP settings, we demonstrate that adding {\em intermediate rewards} to subgoals is more computationally efficient than only rewarding the agent once it completes the goal of reaching a terminal state.
no code implementations • 30 Jun 2021 • Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, Michael I. Jordan
We study the stochastic bilinear minimax optimization problem, presenting an analysis of the same-sample Stochastic ExtraGradient (SEG) method with constant step size, and presenting variations of the method that yield favorable convergence.
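The paper analyzes the stochastic same-sample variant, but the benefit of extrapolation already shows in the deterministic scalar case. A minimal sketch on min_x max_y xy, where plain gradient descent-ascent spirals outward while ExtraGradient contracts to the saddle at the origin (step size and iteration count are illustrative):

```python
# ExtraGradient on the bilinear saddle f(x, y) = x*y: evaluate the
# gradients at a lookahead point, then update from the original point.
x, y, eta = 1.0, 1.0, 0.1
u, v = 1.0, 1.0                      # plain gradient descent-ascent, for contrast

for _ in range(2000):
    # GDA: spirals outward on this problem
    u, v = u - eta * v, v + eta * u
    # ExtraGradient: extrapolate, then update with lookahead gradients
    x_la, y_la = x - eta * y, y + eta * x
    x, y = x - eta * y_la, y + eta * x_la
```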
no code implementations • CVPR 2021 • Peiqi Duan, Zihao W. Wang, Xinyu Zhou, Yi Ma, Boxin Shi
EventZoom is trained in a noise-to-noise fashion where the two ends of the network are unfiltered noisy events, enforcing noise-free event restoration.
1 code implementation • 18 Jun 2021 • Jiabao Lei, Kui Jia, Yi Ma
More specifically, we identify from the linear regions, partitioned by an MLP based implicit function, the analytic cells and analytic faces that are associated with the function's zero-level isosurface.
2 code implementations • 21 May 2021 • Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma
This work attempts to provide a plausible theoretical framework that aims to interpret modern deep (convolutional) networks from the principles of data compression and discriminative representation.
no code implementations • 4 May 2021 • Songyan Xue, Yi Ma, Na Yi
In this paper, a novel end-to-end learning approach, namely JTRD-Net, is proposed for uplink multiuser single-input multiple-output (MU-SIMO) joint transmitter and non-coherent receiver design (JTRD) in fading channels.
2 code implementations • 22 Apr 2021 • Xili Dai, Haigang Gong, Shuai Wu, Xiaojun Yuan, Yi Ma
We conduct extensive experiments and show that our method achieves a significantly better trade-off between efficiency and accuracy, resulting in a real-time line detector at up to 73 FPS on a single GPU.
Ranked #1 on Line Segment Detection on York Urban Dataset (sAP5 metric)
2 code implementations • CVPR 2021 • Yichao Zhou, Shichen Liu, Yi Ma
Recent advances have shown that symmetry, a structural prior that most objects exhibit, can support a variety of single-view 3D understanding tasks.
1 code implementation • 16 Apr 2021 • Cheng Yang, Jia Zheng, Xili Dai, Rui Tang, Yi Ma, Xiaojun Yuan
Single-image room layout reconstruction aims to reconstruct the enclosed 3D structure of a room from a single image.
1 code implementation • 17 Mar 2021 • Yaodong Yu, Zitong Yang, Edgar Dobriban, Jacob Steinhardt, Yi Ma
To investigate this gap, we decompose the test risk into its bias and variance components and study their behavior as a function of adversarial training perturbation radii ($\varepsilon$).
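The decomposition itself is a short Monte-Carlo computation. Below is an illustrative sketch on a toy ridge estimator (the paper performs the analogous decomposition for adversarially trained networks; the data model and sizes here are made up):

```python
import numpy as np

# Monte-Carlo bias-variance decomposition of test risk for a toy
# estimator: refit ridge regression on independently drawn training
# sets and decompose the prediction error at a fixed test point.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
x_test = np.array([0.5, 1.5])
y_clean = x_test @ w_true               # noiseless target at the test point

def fit_ridge(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

preds = []
for _ in range(2000):                   # 2000 independent training sets
    X = rng.normal(size=(20, 2))
    y = X @ w_true + rng.normal(scale=0.5, size=20)
    preds.append(x_test @ fit_ridge(X, y))
preds = np.array(preds)

bias2 = (preds.mean() - y_clean) ** 2   # squared bias
var = preds.var()                       # variance over training sets
mse = ((preds - y_clean) ** 2).mean()   # test risk w.r.t. the clean target
```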
no code implementations • 1 Jan 2021 • Yuexiang Zhai, Bai Jiang, Yi Ma, Hao Chen
Generative Adversarial Networks (GAN) are popular generative models of images.
no code implementations • NeurIPS 2020 • Chaobing Song, Zhengyuan Zhou, Yichao Zhou, Yong Jiang, Yi Ma
The optimization problems associated with training generative adversarial neural networks can be largely reduced to certain {\em non-monotone} variational inequality problems (VIPs), whereas existing convergence results are mostly based on monotone or strongly monotone assumptions.
no code implementations • CVPR 2021 • Ziyang Wu, Christina Baek, Chong You, Yi Ma
Current deep learning architectures suffer from catastrophic forgetting, a failure to retain knowledge of previously learned classes when incrementally trained on new classes.
3 code implementations • 27 Oct 2020 • Kwan Ho Ryan Chan, Yaodong Yu, Chong You, Haozhi Qi, John Wright, Yi Ma
The layered architectures, linear and nonlinear operators, and even parameters of the network are all explicitly constructed layer-by-layer in a forward propagation fashion by emulating the gradient scheme.
1 code implementation • 28 Sep 2020 • Yifei Huang, Yaodong Yu, Hongyang Zhang, Yi Ma, Yuan YAO
Even replacing only the first layer of a ResNet by such an ODE block can exhibit further improvement in robustness; e.g., under a PGD-20 ($\ell_\infty=0.031$) attack on the CIFAR-10 dataset, it achieves 91.57\% natural accuracy and 62.35\% robust accuracy, while a counterpart ResNet architecture trained with TRADES achieves natural and robust accuracy of 76.29\% and 45.24\%, respectively.
1 code implementation • 7 Aug 2020 • Yichao Zhou, Jingwei Huang, Xili Dai, Shichen Liu, Linjie Luo, Zhili Chen, Yi Ma
We present HoliCity, a city-scale 3D dataset with rich structural information.
1 code implementation • ICLR 2021 • Haozhi Qi, Xiaolong Wang, Deepak Pathak, Yi Ma, Jitendra Malik
Learning long-term dynamics models is the key to understanding physical common sense.
Ranked #1 on Visual Reasoning on PHYRE-1B-Within
no code implementations • Interspeech 2020 • Yi Ma, Xinzi Xu, Yongfu Li
An adventitious lung sound classification model, LungRN+NL, is proposed in this work, which has demonstrated a drastic improvement compared to our previous work and the state-of-the-art models.
Ranked #10 on Audio Classification on ICBHI Respiratory Sound Database
1 code implementation • CVPR 2018 • Kun Huang, Yifan Wang, Zihan Zhou, Tianjiao Ding, Shenghua Gao, Yi Ma
To this end, we have built a very large new dataset of over 5,000 images with wireframes thoroughly labelled by humans.
1 code implementation • ICML 2020 • Haozhi Qi, Chong You, Xiaolong Wang, Yi Ma, Jitendra Malik
Initialization, normalization, and skip connections are believed to be three indispensable techniques for training very deep convolutional neural networks and obtaining state-of-the-art performance.
no code implementations • ICML 2020 • Xiaotian Hao, Zhaoqing Peng, Yi Ma, Guan Wang, Junqi Jin, Jianye Hao, Shan Chen, Rongquan Bai, Mingzhou Xie, Miao Xu, Zhenzhe Zheng, Chuan Yu, Han Li, Jian Xu, Kun Gai
In E-commerce, advertising is essential for merchants to reach their target users.
no code implementations • NeurIPS 2020 • Chaobing Song, Yong Jiang, Yi Ma
Meanwhile, VRADA matches the lower bound of the general convex setting up to a $\log\log n$ factor and matches the lower bounds in both regimes $n\le \Theta(\kappa)$ and $n\gg \kappa$ of the strongly convex setting, where $\kappa$ denotes the condition number.
2 code implementations • 17 Jun 2020 • Yichao Zhou, Shichen Liu, Yi Ma
In this work, we focus on object-level 3D reconstruction and present a geometry-based end-to-end deep learning framework that first detects the mirror plane of reflection symmetry that commonly exists in man-made objects and then predicts depth maps by finding the intra-image pixel-wise correspondence of the symmetry.
1 code implementation • NeurIPS 2020 • Chong You, Zhihui Zhu, Qing Qu, Yi Ma
This paper shows that with a double over-parameterization for both the low-rank matrix and sparse corruption, gradient descent with discrepant learning rates provably recovers the underlying matrix even without prior knowledge of either the rank of the matrix or the sparsity of the corruption.
2 code implementations • NeurIPS 2020 • Yaodong Yu, Kwan Ho Ryan Chan, Chong You, Chaobing Song, Yi Ma
To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction ($\text{MCR}^2$), an information-theoretic measure that maximizes the coding rate difference between the whole dataset and the sum of each individual class.
Ranked #12 on Image Clustering on STL-10
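The coding-rate objective is compact enough to sketch directly. Below is a small numpy illustration of $R(Z,\epsilon)=\frac{1}{2}\log\det(I+\frac{d}{n\epsilon^2}ZZ^\top)$ and the resulting rate reduction on a toy two-class example (the data and sizes are made up for illustration):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    # Lossy coding rate R(Z, eps) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T)
    # for n feature vectors (columns of Z) in R^d.
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * (Z @ Z.T))
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    # MCR^2 objective: rate of the whole set minus the label-weighted sum
    # of per-class rates; large when classes span different subspaces.
    n = Z.shape[1]
    per_class = sum((np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
                    for c in np.unique(labels))
    return coding_rate(Z, eps) - per_class

Z = np.zeros((2, 8))
Z[0, :4] = [1.0, 0.9, 1.1, 1.0]     # class 0 along the first coordinate axis
Z[1, 4:] = [1.0, 1.1, 0.9, 1.0]     # class 1 along the second coordinate axis
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
dr = rate_reduction(Z, labels)      # positive: the classes occupy different subspaces
```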
no code implementations • 9 May 2020 • Xiaotian Hao, Junqi Jin, Jianye Hao, Jin Li, Weixun Wang, Yi Ma, Zhenzhe Zheng, Han Li, Jian Xu, Kun Gai
Bipartite b-matching is fundamental in algorithm design, and has been widely applied to economic markets, labor markets, etc.
1 code implementation • ICLR 2020 • Yuexiang Zhai, Hermish Mehta, Zhengyuan Zhou, Yi Ma
Recently, the $\ell^4$-norm maximization has been proposed to solve the sparse dictionary learning (SDL) problem.
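The fixed-point ("match-stretch-project") iteration at the heart of this line of work fits in a few lines. Below is a sketch on synthetic data, assuming a random orthonormal ground-truth dictionary and Bernoulli-Gaussian sparse codes (the sizes and sparsity level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 2000
D = np.linalg.qr(rng.normal(size=(n, n)))[0]               # orthonormal dictionary
X = rng.normal(size=(n, p)) * (rng.random((n, p)) < 0.3)   # sparse codes
Y = D @ X                                                  # observed signals

# Fixed-point iteration for  max_{A in O(n)} ||A Y||_4^4 :
# "stretch" with the entrywise cube (the gradient is 4*(AY)^{o3} Y^T),
# then "project" back onto the orthogonal group via polar decomposition.
A = np.linalg.qr(rng.normal(size=(n, n)))[0]
for _ in range(100):
    U, _, Vt = np.linalg.svd((A @ Y) ** 3 @ Y.T)
    A = U @ Vt

B = A @ D    # ideally a signed permutation: one entry near +/-1 per row
```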
no code implementations • 14 Apr 2020 • Songyan Xue, Yi Ma, Na Yi, Rahim Tafazolli
Otherwise, it is called a non-systematic waveform, where no artificial design is involved.
no code implementations • 1 Apr 2020 • Songyan Xue, Yi Ma, Na Yi, Terence E. Dodgson
Motivated by this finding, we propose a novel modular neural network based approach, termed MNNet, where the whole network is formed by a set of pre-defined ANN modules.
1 code implementation • ICML 2020 • Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi Ma
We provide a simple explanation for this by measuring the bias and variance of neural networks: while the bias is monotonically decreasing as in the classical theory, the variance is unimodal or bell-shaped: it increases then decreases with the width of the network.
no code implementations • 18 Feb 2020 • Peng Zhang, Jianye Hao, Weixun Wang, Hongyao Tang, Yi Ma, Yihai Duan, Yan Zheng
Our framework consists of a fuzzy rule controller to represent human knowledge and a refine module to fine-tune suboptimal prior knowledge.
1 code implementation • IEEE Biomedical Circuits and Systems (BIOCAS) 2019 • Yi Ma, Xinzi Xu, Qing Yu, Yuhang Zhang, Yongfu Li, Jian Zhao and Guoxing Wang
Improving access to health care services for the medically under-served population is vital to ensure that critical illness can be addressed immediately.
Ranked #11 on Audio Classification on ICBHI Respiratory Sound Database
1 code implementation • NeurIPS 2019 • Yichao Zhou, Haozhi Qi, Jingwei Huang, Yi Ma
We present a simple yet effective end-to-end trainable deep network with geometry-inspired convolutional operators for detecting vanishing points in images.
no code implementations • 21 Jul 2019 • Yi Ma, Jianye Hao, Yaodong Yang, Han Li, Junqi Jin, Guangyong Chen
Our approach can work directly on directed graph data in semi-supervised node classification tasks.
no code implementations • 6 Jun 2019 • Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, Yi Ma
Most existing methods solve the dictionary (and sparse representations) based on heuristic algorithms, usually without theoretical guarantees for either optimality or complexity.
no code implementations • 3 Jun 2019 • Chaobing Song, Yong Jiang, Yi Ma
In this general convex setting, we propose a concise unified acceleration framework (UAF), which reconciles the two different high-order acceleration approaches, one by Nesterov and Baes [29, 3, 33] and one by Monteiro and Svaiter [25].
2 code implementations • ICCV 2019 • Yichao Zhou, Haozhi Qi, Yuexiang Zhai, Qi Sun, Zhili Chen, Li-Yi Wei, Yi Ma
In this paper, we propose a method to obtain a compact and accurate 3D wireframe representation from a single image by effectively exploiting global structural regularities.
1 code implementation • ICCV 2019 • Yichao Zhou, Haozhi Qi, Yi Ma
We conduct extensive experiments and show that our method significantly outperforms the previous state-of-the-art wireframe and line extraction algorithms.
Ranked #3 on Line Segment Detection on wireframe dataset (sAP15 metric)
no code implementations • EMNLP 2018 • Antoine Raux, Yi Ma, Paul Yang, Felicia Wong
This paper describes PizzaPal, a voice-only agent for ordering pizza, as well as the Conversational AI architecture built at b4.ai.
no code implementations • ECCV 2018 • Chen Zhu, Xiao Tan, Feng Zhou, Xiao Liu, Kaiyu Yue, Errui Ding, Yi Ma
Specifically, it first summarizes the video by weight-summing all feature vectors in the feature maps of selected frames with a spatio-temporal soft attention, and then predicts which channels to suppress or enhance according to this summary with a learned non-linear transform.
Ranked #11 on Action Recognition on ActivityNet
1 code implementation • ICCV 2017 • Chen Zhu, Yanpeng Zhao, Shuaiyi Huang, Kewei Tu, Yi Ma
In this paper, we demonstrate the importance of encoding such relations by showing the limited effective receptive field of ResNet on two datasets, and propose to model the visual attention as a multivariate distribution over a grid-structured Conditional Random Field on image regions.
no code implementations • 8 Jul 2016 • Liansheng Zhuang, Zihan Zhou, Jingwen Yin, Shenghua Gao, Zhouchen Lin, Yi Ma, Nenghai Yu
In the literature, most existing graph-based semi-supervised learning (SSL) methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph.
5 code implementations • Conference 2016 • Yingying Zhang, Desen Zhou, Siqin Chen, Shenghua Gao, Yi Ma
To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map.
Ranked #5 on Crowd Counting on Venice
no code implementations • ICCV 2015 • Weisheng Dong, Guangyu Li, Guangming Shi, Xin Li, Yi Ma
Patch-based low-rank models have proven effective in exploiting the spatial redundancy of natural images, especially for the application of image denoising.
no code implementations • 15 Jun 2015 • Qiaosong Wang, Haiting Lin, Yi Ma, Sing Bing Kang, Jingyi Yu
We propose a novel approach that jointly removes reflection or translucent layer from a scene and estimates scene depth.
no code implementations • CVPR 2015 • Xiaojie Guo, Yi Ma
In this paper, we propose a definition of Generalized Tensor Total Variation norm (GTV) that considers both the inhomogeneity and the multi-directionality of responses to derivative-like filters.
no code implementations • 20 Jan 2015 • Yuting Zhang, Kui Jia, Yueming Wang, Gang Pan, Tsung-Han Chan, Yi Ma
By assuming a human face as piece-wise planar surfaces, where each surface corresponds to a facial part, we develop in this paper a Constrained Part-based Alignment (CPA) algorithm for face recognition across pose and/or expression.
no code implementations • 3 Sep 2014 • Liansheng Zhuang, Shenghua Gao, Jinhui Tang, Jingjing Wang, Zhouchen Lin, Yi Ma
This paper aims at constructing a good graph for discovering intrinsic data structures in a semi-supervised learning setting.
no code implementations • CVPR 2014 • Xiaojie Guo, Xiaochun Cao, Yi Ma
When one records a video/image sequence through a transparent medium (e.g., glass), the image is often a superposition of a transmitted layer (scene behind the medium) and a reflected layer.
no code implementations • CVPR 2014 • Tianzhu Zhang, Kui Jia, Changsheng Xu, Yi Ma, Narendra Ahuja
The proposed part matching tracker (PMT) has a number of attractive properties.
2 code implementations • 14 Apr 2014 • Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng, Yi Ma
In this work, we propose a very simple deep learning network for image classification which comprises only the very basic data processing components: cascaded principal component analysis (PCA), binary hashing, and block-wise histograms.
Ranked #46 on Image Classification on MNIST
no code implementations • 31 Mar 2014 • Kui Jia, Tsung-Han Chan, Zinan Zeng, Shenghua Gao, Gang Wang, Tianzhu Zhang, Yi Ma
The task is to identify the inlier features and establish their consistent correspondences across the image set.
no code implementations • 8 Feb 2014 • Liansheng Zhuang, Tsung-Han Chan, Allen Y. Yang, S. Shankar Sastry, Yi Ma
In particular, the single-sample face alignment accuracy is comparable to that of the well-known Deformable SRC algorithm using multiple gallery images per class.
no code implementations • NeurIPS 2013 • Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, Yi Ma
In this context, the state-of-the-art algorithms "RASL" and "TILT" can be viewed as two special cases of our work, and yet each only performs part of the function of our method.
no code implementations • CVPR 2013 • Xiaojie Guo, Xiaochun Cao, Xiaowu Chen, Yi Ma
Given an area of interest in a video sequence, one may want to manipulate or edit the area, e.g., remove occlusions from it or replace it with an advertisement.
no code implementations • CVPR 2013 • Liansheng Zhuang, Allen Y. Yang, Zihan Zhou, S. Shankar Sastry, Yi Ma
To compensate the missing illumination information typically provided by multiple training images, a sparse illumination transfer (SIT) technique is introduced.
no code implementations • CVPR 2013 • Zihan Zhou, Hailin Jin, Yi Ma
Recently, a new image deformation technique called content-preserving warping (CPW) has been successfully employed to produce the state-of-the-art video stabilization results in many challenging cases.
no code implementations • CVPR 2013 • Zinan Zeng, Shijie Xiao, Kui Jia, Tsung-Han Chan, Shenghua Gao, Dong Xu, Yi Ma
Our framework is motivated by the observation that samples from the same class repetitively appear in the collection of ambiguously labeled training images, while they are just ambiguously labeled in each image.
no code implementations • 10 Sep 2012 • Guangcan Liu, Shiyu Chang, Yi Ma
We show that the minimizer of this regularizer is guaranteed to give a good approximation to the blur kernel if the original image is sharp enough.
no code implementations • 11 Apr 2012 • Lei Zhang, Meng Yang, Xiangchu Feng, Yi Ma, David Zhang
It is widely believed that the l1-norm sparsity constraint on coding coefficients plays a key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is rather ignored.
1 code implementation • 21 Feb 2012 • John Wright, Arvind Ganesh, Kerui Min, Yi Ma
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements.
Information Theory
no code implementations • 3 Nov 2011 • John Wright, Arvind Ganesh, Allen Yang, Zihan Zhou, Yi Ma
This report concerns the use of techniques for sparse signal representation and sparse error correction for automatic face recognition.
1 code implementation • 14 Oct 2010 • Guangcan Liu, Zhouchen Lin, Shuicheng Yan, Ju Sun, Yong Yu, Yi Ma
In this work we address the subspace recovery problem.
no code implementations • 26 Sep 2010 • Zhouchen Lin, Minming Chen, Yi Ma
This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix with an unknown fraction of its entries being arbitrarily corrupted.
Optimization and Control, Numerical Analysis, Systems and Control
no code implementations • 21 Jul 2010 • Allen Y. Yang, Zihan Zhou, Arvind Ganesh, S. Shankar Sastry, Yi Ma
L1-minimization refers to finding the minimum L1-norm solution to an underdetermined linear system b=Ax.
1 code implementation • 14 Jan 2010 • Zihan Zhou, XiaoDong Li, John Wright, Emmanuel Candes, Yi Ma
We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors.
Information Theory
3 code implementations • 18 Dec 2009 • Emmanuel J. Candes, Xiao-Dong Li, Yi Ma, John Wright
This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted.
Information Theory
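The convex program behind this result, Principal Component Pursuit, can be solved with a short augmented-Lagrangian loop. Below is a basic sketch with common default parameters (an illustration, not a tuned implementation):

```python
import numpy as np

# Sketch of Principal Component Pursuit:
#   min ||L||_* + lam*||S||_1  s.t.  M = L + S,
# solved with a simple augmented-Lagrangian (ADMM-style) loop.
def shrink(X, tau):                          # soft-thresholding
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):                             # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, iters=1000):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))           # standard weight on the sparse term
    mu = m * n / (4.0 * np.abs(M).sum())     # common default penalty parameter
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

rng = np.random.default_rng(0)
L0 = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))  # rank-3 matrix
S0 = np.zeros((50, 50))
mask = rng.random((50, 50)) < 0.05                        # 5% gross corruptions
S0[mask] = rng.normal(scale=10.0, size=mask.sum())
L_hat, S_hat = pcp(L0 + S0)
rel_err = np.linalg.norm(L_hat - L0) / np.linalg.norm(L0)
```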
no code implementations • NeurIPS 2009 • John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, Yi Ma
Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis.