Search Results for author: Yuxin Chen

Found 74 papers, 5 papers with code

A Contract Theory based Incentive Mechanism for Federated Learning

no code implementations 12 Aug 2021 Mengmeng Tian, Yuxin Chen, Yuan Liu, Zehui Xiong, Cyril Leung, Chunyan Miao

It is challenging to design proper incentives for the FL clients because the model is trained privately by the clients.

Federated Learning

Inference for Heteroskedastic PCA with Missing Data

no code implementations 26 Jul 2021 Yuling Yan, Yuxin Chen, Jianqing Fan

Particularly worth highlighting is the inference procedure built on top of $\textsf{HeteroPCA}$, which is not only valid but also statistically efficient for broader scenarios (e.g., it covers a wider range of missing rates and signal-to-noise ratios).

Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence

no code implementations 24 May 2021 Wenhao Zhan, Shicong Cen, Baihe Huang, Yuxin Chen, Jason D. Lee, Yuejie Chi

Policy optimization, which learns the policy of interest by maximizing the value function via large-scale optimization techniques, lies at the heart of modern reinforcement learning (RL).

Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting

no code implementations 17 May 2021 Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei

The current paper pertains to a scenario with value-based linear representation, which postulates the linear realizability of the optimal Q-function (also called the "linear $Q^{\star}$ problem").

Understanding the Effect of Bias in Deep Anomaly Detection

1 code implementation 16 May 2021 Ziyu Ye, Yuxin Chen, Haitao Zheng

We also provide an extensive empirical study on how a biased training anomaly set affects the anomaly score function and therefore the detection performance on different anomaly classes.

Anomaly Detection

Towards an Interpretable Data-driven Trigger System for High-throughput Physics Facilities

no code implementations 14 Apr 2021 Chinmaya Mahesh, Kristin Dona, David W. Miller, Yuxin Chen

Data-intensive science is increasingly reliant on real-time processing capabilities and machine learning workflows, in order to filter and analyze the extreme volumes of data being collected.

Minimax Estimation of Linear Functions of Eigenvectors in the Face of Small Eigen-Gaps

no code implementations 7 Apr 2021 Gen Li, Changxiao Cai, Yuantao Gu, H. Vincent Poor, Yuxin Chen

Eigenvector perturbation analysis plays a vital role in various statistical data science applications.

Denoising

Softmax Policy Gradient Methods Can Take Exponential Time to Converge

no code implementations 22 Feb 2021 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

The softmax policy gradient (PG) method, which performs gradient ascent under softmax policy parameterization, is arguably one of the de facto implementations of policy optimization in modern reinforcement learning.

Policy Gradient Methods
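The update being analyzed can be sketched exactly on a one-state (bandit) problem. Everything below (rewards, step size, iteration count) is an illustrative assumption, not from the paper; on this tiny instance exact gradient ascent converges easily, whereas the paper constructs MDPs on which it is exponentially slow.

```python
import math

def softmax(theta):
    # numerically stable softmax over parameters theta
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

def softmax_pg(rewards, steps=5000, lr=0.5):
    """Exact gradient ascent on J(theta) = sum_a pi(a) r(a) under the
    softmax parameterization pi(a) = exp(theta_a) / sum_b exp(theta_b).
    The exact gradient is dJ/dtheta_a = pi(a) * (r(a) - J(theta))."""
    theta = [0.0] * len(rewards)
    for _ in range(steps):
        pi = softmax(theta)
        j = sum(p * r for p, r in zip(pi, rewards))
        theta = [t + lr * p * (r - j) for t, p, r in zip(theta, pi, rewards)]
    return softmax(theta)

policy = softmax_pg([0.1, 0.9, 0.5])
# the policy mass concentrates on the highest-reward action (index 1)
```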

Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis

no code implementations 12 Feb 2021 Gen Li, Changxiao Cai, Yuxin Chen, Yuantao Gu, Yuting Wei, Yuejie Chi

Take a $\gamma$-discounted infinite-horizon MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an entrywise $\varepsilon$-accurate estimate of the optimal Q-function, state-of-the-art theory for Q-learning proves that a sample size on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$ is sufficient, which, however, fails to match the existing minimax lower bound.

Q-Learning
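For readers unfamiliar with the quantities in the bound, here is a minimal sketch of tabular Q-learning with generative-model sampling on an invented two-state MDP ($\gamma = 0.9$; the MDP, learning rate, and step count are illustrative assumptions, unrelated to the paper's analysis).

```python
import random

# A tiny deterministic two-state, two-action MDP, invented for illustration:
# state 0: action 0 stays in state 0 (reward 0.1); action 1 moves to state 1 (reward 0).
# state 1: either action stays in state 1 and pays reward 1.
def step(s, a):
    if s == 0:
        return (0, 0.1) if a == 0 else (1, 0.0)
    return (1, 1.0)

def q_learning(gamma=0.9, lr=0.1, n_steps=20000, seed=0):
    """Tabular Q-learning with a generative model: each step draws a
    state-action pair uniformly, samples a transition, and applies the
    usual bootstrapped update."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(n_steps):
        s, a = rng.randrange(2), rng.randrange(2)
        s_next, r = step(s, a)
        target = r + gamma * max(Q[s_next])   # bootstrapped TD target
        Q[s][a] += lr * (target - Q[s][a])
    return Q

Q = q_learning()
# fixed point: Q[1][*] = 10, Q[0][1] = 9, Q[0][0] = 8.2, so moving to
# state 1 is correctly identified as optimal from state 0
```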

Learning to Make Decisions via Submodular Regularization

no code implementations ICLR 2021 Ayya Alieva, Aiden Aceves, Jialin Song, Stephen Mayo, Yisong Yue, Yuxin Chen

In particular, we focus on a class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via submodular surrogates).

Active Learning · Combinatorial Optimization +2

Understanding Bias in Anomaly Detection: A Semi-Supervised View with PAC Guarantees

1 code implementation 1 Jan 2021 Ziyu Ye, Yuxin Chen, Haitao Zheng

Given two different anomaly score functions, we formally define their difference in performance as the relative scoring bias of the anomaly detectors.

Semi-supervised Anomaly Detection · Unsupervised Anomaly Detection

Learning Collision-free Latent Space for Bayesian Optimization

no code implementations 1 Jan 2021 Fengxue Zhang, Yair Altas, Louise Fan, Kaustubh Vinchure, Brian Nord, Yuxin Chen

To address this issue, we propose Collision-Free Latent Space Optimization (CoFLO), which employs a novel regularizer to reduce the collision in the learned latent space and encourage the mapping from the latent space to objective value to be Lipschitz continuous.

Experimental Design

Spectral Methods for Data Science: A Statistical Perspective

no code implementations 15 Dec 2020 Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma

While the studies of spectral methods can be traced back to classical matrix perturbation theory and methods of moments, the past decade has witnessed tremendous theoretical advances in demystifying their efficacy through the lens of statistical modeling, with the aid of non-asymptotic random matrix theory.

The Teaching Dimension of Kernel Perceptron

no code implementations 27 Oct 2020 Akash Kumar, Hanqi Zhang, Adish Singla, Yuxin Chen

As a warm-up, we show that the teaching complexity is $\Theta(d)$ for the exact teaching of linear perceptrons in $\mathbb{R}^d$, and $\Theta(d^k)$ for kernel perceptron with a polynomial kernel of order $k$.

Preference-Based Batch and Sequential Teaching

no code implementations 17 Oct 2020 Farnam Mansouri, Yuxin Chen, Ara Vartanian, Xiaojin Zhu, Adish Singla

We analyze several properties of the teaching complexity parameter $TD(\sigma)$ associated with different families of the preference functions, e.g., comparison to the VC dimension of the hypothesis class and additivity/sub-additivity of $TD(\sigma)$ over disjoint domains.

Learning Mixtures of Low-Rank Models

no code implementations 23 Sep 2020 Yanxi Chen, Cong Ma, H. Vincent Poor, Yuxin Chen

We study the problem of learning mixtures of low-rank models, i.e., reconstructing multiple low-rank matrices from unlabelled linear measurements of each.

Using Ensemble Classifiers to Detect Incipient Anomalies

no code implementations 20 Aug 2020 Baihong Jin, Yingshui Tan, Albert Liu, Xiangyu Yue, Yuxin Chen, Alberto Sangiovanni Vincentelli

Incipient anomalies present milder symptoms compared to severe ones, and are more difficult to detect and diagnose due to their close resemblance to normal operating conditions.

Anomaly Detection · Ensemble Learning

Convex and Nonconvex Optimization Are Both Minimax-Optimal for Noisy Blind Deconvolution under Random Designs

no code implementations 4 Aug 2020 Yuxin Chen, Jianqing Fan, Bingyan Wang, Yuling Yan

We investigate the effectiveness of convex relaxation and nonconvex optimization in solving bilinear systems of equations under two different designs (i.e., a sort of random Fourier design and Gaussian design).

Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization

no code implementations 13 Jul 2020 Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, Yuejie Chi

This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that encourages exploration -- and is closely related to soft policy iteration and trust region policy optimization.

Policy Gradient Methods

Are Ensemble Classifiers Powerful Enough for the Detection and Diagnosis of Intermediate-Severity Faults?

no code implementations 7 Jul 2020 Baihong Jin, Yingshui Tan, Yuxin Chen, Kameshwar Poolla, Alberto Sangiovanni Vincentelli

Intermediate-Severity (IS) faults present milder symptoms compared to severe faults, and are more difficult to detect and diagnose due to their close resemblance to normal operating conditions.

Fault Detection

Average-case Complexity of Teaching Convex Polytopes via Halfspace Queries

no code implementations 25 Jun 2020 Akash Kumar, Adish Singla, Yisong Yue, Yuxin Chen

We investigate the average teaching complexity of the task, i.e., the minimal number of samples (halfspace queries) required by a teacher to help a version-space learner in locating a randomly selected target.

Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality

no code implementations ICML 2020 Changxiao Cai, H. Vincent Poor, Yuxin Chen

Furthermore, our findings unveil the statistical optimality of nonconvex tensor completion: it attains un-improvable $\ell_{2}$ accuracy -- including both the rates and the pre-constants -- when estimating both the unknown tensor and the underlying tensor factors.

Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

no code implementations NeurIPS 2020 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function --- is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2}+ \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted.

Q-Learning

Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model

no code implementations NeurIPS 2020 Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

We investigate the sample efficiency of reinforcement learning in a $\gamma$-discounted infinite-horizon Markov decision process (MDP) with state space $\mathcal{S}$ and action space $\mathcal{A}$, assuming access to a generative model.

Model-based Reinforcement Learning

Understanding the Power and Limitations of Teaching with Imperfect Knowledge

no code implementations 21 Mar 2020 Rati Devidze, Farnam Mansouri, Luis Haug, Yuxin Chen, Adish Singla

Machine teaching studies the interaction between a teacher and a student/learner where the teacher selects training examples for the learner to learn a specific task.

An Online Learning Framework for Energy-Efficient Navigation of Electric Vehicles

no code implementations 3 Mar 2020 Niklas Åkerblom, Yuxin Chen, Morteza Haghir Chehreghani

In order to learn the model parameters, we develop an online learning framework and investigate several exploration strategies such as Thompson Sampling and Upper Confidence Bound.
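As a hedged sketch of one exploration strategy mentioned above, here is a UCB1-style rule over candidate routes with noisy energy costs. The routes, cost values, noise level, and horizon are invented for illustration and are much simpler than the paper's energy model.

```python
import math
import random

def ucb_route_selection(true_costs, horizon=3000, noise=0.05, seed=1):
    """UCB1 adapted to cost minimization: pick the route with the lowest
    empirical mean cost minus an exploration bonus (optimism under
    uncertainty)."""
    rng = random.Random(seed)
    n = len(true_costs)
    counts, means = [0] * n, [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            a = t - 1                      # try every route once
        else:
            a = min(range(n),
                    key=lambda i: means[i] - math.sqrt(2 * math.log(t) / counts[i]))
        cost = true_costs[a] + rng.gauss(0, noise)   # noisy observed energy cost
        counts[a] += 1
        means[a] += (cost - means[a]) / counts[a]    # incremental mean update
    return counts

counts = ucb_route_selection([0.5, 0.3, 0.7])
# the cheapest route (index 1) is chosen most often
```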

A Financial Service Chatbot based on Deep Bidirectional Transformers

no code implementations 17 Feb 2020 Shi Yu, Yuxin Chen, Hussain Zaidi

Our main novel contribution is the discussion about uncertainty measure for BERT, where three different approaches are systematically compared on real problems.

Chatbot · Intent Classification +2

Adaptive Teaching of Temporal Logic Formulas to Learners with Preferences

no code implementations 27 Jan 2020 Zhe Xu, Yuxin Chen, Ufuk Topcu

In the context of teaching temporal logic formulas, an exhaustive search even for a myopic solution takes exponential time (with respect to the time span of the task).

Temporal Logic

Bridging Convex and Nonconvex Optimization in Robust PCA: Noise, Outliers, and Missing Data

no code implementations 15 Jan 2020 Yuxin Chen, Jianqing Fan, Cong Ma, Yuling Yan

This paper delivers improved theoretical guarantees for the convex programming approach in low-rank matrix estimation, in the presence of (1) random noise, (2) gross sparse outliers, and (3) missing data.

Tackling small eigen-gaps: Fine-grained eigenvector estimation and inference under heteroscedastic noise

no code implementations 14 Jan 2020 Chen Cheng, Yuting Wei, Yuxin Chen

This paper aims to address two fundamental challenges arising in eigenvector estimation and inference for a low-rank matrix from noisy observations: (1) how to estimate an unknown eigenvector when the eigen-gap (i.e., the spacing between the associated eigenvalue and the rest of the spectrum) is particularly small; (2) how to perform estimation and inference on linear functionals of an eigenvector -- a sort of "fine-grained" statistical reasoning that goes far beyond the usual $\ell_2$ analysis.

Nonconvex Low-Rank Tensor Completion from Noisy Data

no code implementations NeurIPS 2019 Changxiao Cai, Gen Li, H. Vincent Poor, Yuxin Chen

We study a noisy tensor completion problem of broad practical interest, namely, the reconstruction of a low-rank tensor from highly incomplete and randomly corrupted observations of its entries.

Landmark Ordinal Embedding

no code implementations NeurIPS 2019 Nikhil Ghosh, Yuxin Chen, Yisong Yue

In this paper, we aim to learn a low-dimensional Euclidean representation from a set of constraints of the form "item j is closer to item i than item k".

Preference-Based Batch and Sequential Teaching: Towards a Unified View of Models

no code implementations NeurIPS 2019 Farnam Mansouri, Yuxin Chen, Ara Vartanian, Xiaojin Zhu, Adish Singla

In our framework, each function $\sigma \in \Sigma$ induces a teacher-learner pair with teaching complexity $\mathrm{TD}(\sigma)$.

Subspace Estimation from Unbalanced and Incomplete Data Matrices: $\ell_{2,\infty}$ Statistical Guarantees

no code implementations 9 Oct 2019 Changxiao Cai, Gen Li, Yuejie Chi, H. Vincent Poor, Yuxin Chen

This paper is concerned with estimating the column space of an unknown low-rank matrix $\boldsymbol{A}^{\star}\in\mathbb{R}^{d_{1}\times d_{2}}$, given noisy and partial observations of its entries.

Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction

1 code implementation 12 Sep 2019 Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi

There is growing interest in large-scale machine learning and optimization over decentralized networks, e.g., in the context of multi-agent learning and federated learning.

Distributed Optimization · Federated Learning

An Encoder-Decoder Based Approach for Anomaly Detection with Application in Additive Manufacturing

no code implementations 26 Jul 2019 Baihong Jin, Yingshui Tan, Alexander Nettekoven, Yuxin Chen, Ufuk Topcu, Yisong Yue, Alberto Sangiovanni Vincentelli

We show that the encoder-decoder model is able to identify the injected anomalies in a modern manufacturing process in an unsupervised fashion.

Anomaly Detection

Inference and Uncertainty Quantification for Noisy Matrix Completion

no code implementations 10 Jun 2019 Yuxin Chen, Jianqing Fan, Cong Ma, Yuling Yan

As a byproduct, we obtain a sharp characterization of the estimation accuracy of our de-biased estimators, which, to the best of our knowledge, are the first tractable algorithms that provably achieve full statistical efficiency (including the preconstant).

Matrix Completion

Batched Stochastic Bayesian Optimization via Combinatorial Constraints Design

no code implementations 17 Apr 2019 Kevin K. Yang, Yuxin Chen, Alycia Lee, Yisong Yue

Importantly, we show that our objective function can be efficiently decomposed as a difference of submodular functions (DS), which allows us to employ DS optimization tools to greedily identify sets of constraints that increase the likelihood of finding items with high utility.

Experimental Design
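For context, the standard greedy rule for monotone submodular maximization looks as follows, shown here on a toy max-coverage instance. Note this is background only: the paper's actual contribution works with a difference-of-submodular (DS) decomposition, which this sketch does not implement.

```python
def greedy_max_coverage(candidate_sets, k):
    """Classic greedy for monotone submodular maximization: repeatedly add
    the set with the largest marginal coverage gain. For max-coverage this
    achieves the well-known (1 - 1/e) approximation guarantee."""
    chosen, covered = [], set()
    for _ in range(k):
        gains = [(len(set(s) - covered), idx)
                 for idx, s in enumerate(candidate_sets)]
        gain, idx = max(gains)
        if gain == 0:
            break                       # no remaining set adds anything new
        chosen.append(idx)
        covered |= set(candidate_sets[idx])
    return chosen, covered

chosen, covered = greedy_max_coverage([[1, 2, 3], [3, 4], [4, 5, 6, 7]], k=2)
# greedy picks the 4-element set first (gain 4), then the 3-element set (gain 3)
```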

Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization

no code implementations 20 Feb 2019 Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan

This paper studies noisy low-rank matrix completion: given partial and noisy entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently.

Low-Rank Matrix Completion

A One-Class Support Vector Machine Calibration Method for Time Series Change Point Detection

no code implementations 18 Feb 2019 Baihong Jin, Yuxin Chen, Dan Li, Kameshwar Poolla, Alberto Sangiovanni-Vincentelli

The One-Class Support Vector Machine (OC-SVM) is a popular machine learning model for anomaly detection and hence could be used for identifying change points; however, it is sometimes difficult to obtain a good OC-SVM model that can be used on sensor measurement time series to identify the change points in system health status.

Anomaly Detection · Change Point Detection +1

Trip Prediction by Leveraging Trip Histories from Neighboring Users

no code implementations 25 Dec 2018 Yuxin Chen, Morteza Haghir Chehreghani

We propose a novel approach for trip prediction by analyzing users' trip histories.

Asymmetry Helps: Eigenvalue and Eigenvector Analyses of Asymmetrically Perturbed Low-Rank Matrices

no code implementations 30 Nov 2018 Yuxin Chen, Chen Cheng, Jianqing Fan

The aim is to estimate the leading eigenvalue and eigenvector of $\mathbf{M}^{\star}$.

Optimizing Photonic Nanostructures via Multi-fidelity Gaussian Processes

no code implementations 15 Nov 2018 Jialin Song, Yury S. Tokpanov, Yuxin Chen, Dagny Fleischman, Kate T. Fountaine, Harry A. Atwater, Yisong Yue

We apply numerical methods in combination with finite-difference time-domain (FDTD) simulations to optimize transmission properties of plasmonic mirror color filters using a multi-objective figure of merit over a five-dimensional parameter space, by utilizing a novel multi-fidelity Gaussian processes approach.

Gaussian Processes

A General Framework for Multi-fidelity Bayesian Optimization with Gaussian Processes

no code implementations 2 Nov 2018 Jialin Song, Yuxin Chen, Yisong Yue

How can we efficiently gather information to optimize an unknown function, when presented with multiple, mutually dependent information sources with different costs?

Gaussian Processes · Global Optimization

Et Tu Alexa? When Commodity WiFi Devices Turn into Adversarial Motion Sensors

1 code implementation 23 Oct 2018 Yanzi Zhu, Zhujun Xiao, Yuxin Chen, Zhijing Li, Max Liu, Ben Y. Zhao, Haitao Zheng

Our work demonstrates a new set of silent reconnaissance attacks, which leverages the presence of commodity WiFi devices to track users inside private homes and offices, without compromising any WiFi network, data packets, or devices.

Cryptography and Security

Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview

no code implementations 25 Sep 2018 Yuejie Chi, Yue M. Lu, Yuxin Chen

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization.

Matrix Completion

Addressing Training Bias via Automated Image Annotation

no code implementations 22 Sep 2018 Zhujun Xiao, Yanzi Zhu, Yuxin Chen, Ben Y. Zhao, Junchen Jiang, Haitao Zheng

Building accurate DNN models requires training on large, labeled, context-specific datasets, especially those matching the target scenario.

Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval and Matrix Completion

no code implementations ICML 2018 Cong Ma, Kaizheng Wang, Yuejie Chi, Yuxin Chen

Focusing on two statistical estimation problems, i.e., solving random quadratic systems of equations and low-rank matrix completion, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization.

Low-Rank Matrix Completion
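A minimal sketch of gradient descent on the least-squares loss for a random quadratic system, one of the two estimation problems above. The problem sizes and step size are arbitrary choices, and the iterate is initialized near the truth purely to keep the sketch short; the papers analyze principled initializations and establish guarantees without any explicit regularization, which this toy loop also omits.

```python
import random

def gd_quadratic_system(n=5, m=60, steps=2000, lr=0.005, seed=0):
    """Gradient descent on f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2,
    where y_i = (a_i^T x*)^2 for a hidden signal x*."""
    rng = random.Random(seed)
    x_star = [rng.gauss(0, 1) for _ in range(n)]
    A = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    y = [dot(ai, x_star) ** 2 for ai in A]

    def loss(x):
        return sum((dot(ai, x) ** 2 - yi) ** 2
                   for ai, yi in zip(A, y)) / (4 * m)

    # small perturbation of the truth (illustrative shortcut, see lead-in)
    x = [v + 0.3 * rng.gauss(0, 1) for v in x_star]
    f_start = loss(x)
    for _ in range(steps):
        grad = [0.0] * n
        for ai, yi in zip(A, y):
            c = (dot(ai, x) ** 2 - yi) * dot(ai, x) / m
            for j in range(n):
                grad[j] += c * ai[j]
        x = [v - lr * g for v, g in zip(x, grad)]
    return f_start, loss(x)

f_start, f_end = gd_quadratic_system()
# the loss drops substantially from its starting value
```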

Teaching Multiple Concepts to a Forgetful Learner

no code implementations NeurIPS 2019 Anette Hunziker, Yuxin Chen, Oisin Mac Aodha, Manuel Gomez Rodriguez, Andreas Krause, Pietro Perona, Yisong Yue, Adish Singla

Our framework is both generic, allowing the design of teaching schedules for different memory models, and also interactive, allowing the teacher to adapt the schedule to the underlying forgetting mechanisms of the learner.

Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval

no code implementations 21 Mar 2018 Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma

This paper considers the problem of solving systems of quadratic equations, namely, recovering an object of interest $\mathbf{x}^{\natural}\in\mathbb{R}^{n}$ from $m$ quadratic equations/samples $y_{i}=(\mathbf{a}_{i}^{\top}\mathbf{x}^{\natural})^{2}$, $1\leq i\leq m$.

Nonconvex Matrix Factorization from Rank-One Measurements

no code implementations 17 Feb 2018 Yuanxin Li, Cong Ma, Yuxin Chen, Yuejie Chi

We consider the problem of recovering low-rank matrices from random rank-one measurements, which spans numerous applications including covariance sketching, phase retrieval, quantum state tomography, and learning shallow polynomial neural networks, among others.

Quantum State Tomography

Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners

no code implementations NeurIPS 2018 Yuxin Chen, Adish Singla, Oisin Mac Aodha, Pietro Perona, Yisong Yue

We highlight that adaptivity does not speed up the teaching process when considering existing models of version space learners, such as "worst-case" (the learner picks the next hypothesis randomly from the version space) and "preference-based" (the learner picks hypothesis according to some global preference).

Spectral Method and Regularized MLE Are Both Optimal for Top-$K$ Ranking

no code implementations 31 Jul 2017 Yuxin Chen, Jianqing Fan, Cong Ma, Kaizheng Wang

This paper is concerned with the problem of top-$K$ ranking from pairwise comparisons.

The Likelihood Ratio Test in High-Dimensional Logistic Regression Is Asymptotically a Rescaled Chi-Square

no code implementations 5 Jun 2017 Pragya Sur, Yuxin Chen, Emmanuel J. Candès

When used for the purpose of statistical inference, logistic models produce p-values for the regression coefficients by using an approximation to the distribution of the likelihood-ratio test.

Efficient Online Learning for Optimizing Value of Information: Theory and Application to Interactive Troubleshooting

no code implementations 16 Mar 2017 Yuxin Chen, Jean-Michel Renders, Morteza Haghir Chehreghani, Andreas Krause

We consider the optimal value of information (VoI) problem, where the goal is to sequentially select a set of tests with a minimal cost, so that one can efficiently make the best decision based on the observed outcomes.
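A heavily simplified sketch of the value-of-information idea, assuming noise-free deterministic tests and unit costs (the paper's setting is noisy, correlated, and cost-aware): greedily pick the test that minimizes the expected posterior entropy over hypotheses. All names and numbers below are illustrative.

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_posterior_entropy(prior, outcomes):
    """Expected entropy of the posterior over hypotheses after observing
    the test's (deterministic) outcome; outcomes[h] is the result the
    test yields when hypothesis h is true."""
    groups = {}
    for p, o in zip(prior, outcomes):
        groups.setdefault(o, []).append(p)
    return sum(sum(ps) * entropy([p / sum(ps) for p in ps])
               for ps in groups.values())

def pick_most_informative_test(prior, tests):
    # greedy step: smallest expected posterior entropy = largest info gain
    return min(tests, key=lambda name: expected_posterior_entropy(prior, tests[name]))

# four equally likely hypotheses; each test maps a hypothesis to an outcome
prior = [0.25, 0.25, 0.25, 0.25]
tests = {
    "balanced":   [0, 0, 1, 1],   # splits hypotheses 2/2
    "unbalanced": [0, 1, 1, 1],   # splits 1/3
    "useless":    [0, 0, 0, 0],   # same outcome under every hypothesis
}
best = pick_most_informative_test(prior, tests)
# the balanced split reduces expected entropy the most (2 bits -> 1 bit)
```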

The Projected Power Method: An Efficient Algorithm for Joint Alignment from Pairwise Differences

no code implementations 19 Sep 2016 Yuxin Chen, Emmanuel Candes

We prove that for a broad class of statistical models, the proposed projected power method makes no error---and hence converges to the maximum likelihood estimate---in a suitable regime.
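A noiseless toy instance of the projected power method, specialized to joint alignment over $\mathbb{Z}_2$ (sign synchronization): each iteration multiplies by the pairwise-measurement matrix and projects every entry back onto $\{\pm 1\}$. The data and initialization are illustrative; the paper handles noisy measurements and more general alphabets.

```python
def projected_power_method(M, iters=10):
    """Power iterations interleaved with entrywise projection onto the
    feasible alphabet {+1, -1}."""
    n = len(M)
    x = [1.0] * n                      # simple deterministic initialization
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [1.0 if v >= 0 else -1.0 for v in y]
    return x

# noiseless pairwise products M[i][j] = z_i * z_j of a hidden sign vector z
z = [1, -1, 1, 1, -1]
M = [[zi * zj for zj in z] for zi in z]
x = projected_power_method(M)
# recovery is exact up to a global sign flip
```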

Near-optimal Bayesian Active Learning with Correlated and Noisy Tests

no code implementations 24 May 2016 Yuxin Chen, S. Hamed Hassani, Andreas Krause

We consider the Bayesian active learning and experimental design problem, where the goal is to learn the value of some unknown target variable through a sequence of informative, noisy tests.

Active Learning · Experimental Design

Community Recovery in Graphs with Locality

no code implementations 11 Feb 2016 Yuxin Chen, Govinda Kamath, Changho Suh, David Tse

Motivated by applications in domains such as social networks and computational biology, we study the problem of community recovery in graphs with locality.

Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems

no code implementations NeurIPS 2015 Yuxin Chen, Emmanuel J. Candes

We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size---hence the title of this paper.

Spectral MLE: Top-$K$ Rank Aggregation from Pairwise Comparisons

no code implementations 27 Apr 2015 Yuxin Chen, Changho Suh

To approach this minimax limit, we propose a nearly linear-time ranking scheme, called \emph{Spectral MLE}, that returns the indices of the top-$K$ items in accordance to a careful score estimate.
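The spectral stage can be illustrated with a tiny Rank-Centrality-style computation (a simplification; Spectral MLE follows its spectral estimate with MLE refinements): build a random walk driven by pairwise win rates and read scores off its stationary distribution. The BTL weights below are assumed purely for illustration.

```python
def spectral_scores(wins, iters=2000):
    """wins[i][j]: fraction of comparisons in which item i beat item j.
    The random walk moves from item i toward items that beat i; its
    stationary distribution recovers BTL weights up to normalization."""
    n = len(wins)
    P = [[wins[j][i] / n if j != i else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        P[i][i] = 1.0 - sum(P[i])       # lazy self-loop keeps rows stochastic
    pi = [1.0 / n] * n
    for _ in range(iters):              # power iteration toward stationarity
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# exact BTL win rates for three items with weights w = (1, 2, 4)
w = [1.0, 2.0, 4.0]
wins = [[wi / (wi + wj) for wj in w] for wi in w]
pi = spectral_scores(wins)
# stationary mass orders the items by their BTL weights
```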

Information Recovery from Pairwise Measurements

no code implementations 6 Apr 2015 Yuxin Chen, Changho Suh, Andrea J. Goldsmith

In particular, our results isolate a family of \emph{minimum} \emph{channel divergence measures} to characterize the degree of measurement corruption, which together with the size of the minimum cut of $\mathcal{G}$ dictates the feasibility of exact information recovery.

Stochastic Block Model

Scalable Semidefinite Relaxation for Maximum A Posterior Estimation

no code implementations 19 May 2014 Qixing Huang, Yuxin Chen, Leonidas Guibas

Maximum a posteriori (MAP) inference over discrete Markov random fields is a fundamental task spanning a wide spectrum of real-world applications, which is known to be NP-hard for general graphs.

Near-Optimal Joint Object Matching via Convex Relaxation

no code implementations 6 Feb 2014 Yuxin Chen, Leonidas J. Guibas, Qi-Xing Huang

Joint matching over a collection of objects aims at aggregating information from a large collection of similar instances (e.g., images, graphs, shapes) to improve maps between pairs of them.

Exact and Stable Covariance Estimation from Quadratic Sampling via Convex Programming

no code implementations 2 Oct 2013 Yuxin Chen, Yuejie Chi, Andrea Goldsmith

Our method admits universally accurate covariance estimation in the absence of noise, as soon as the number of measurements exceeds the information theoretic limits.

Robust Spectral Compressed Sensing via Structured Matrix Completion

no code implementations 30 Apr 2013 Yuxin Chen, Yuejie Chi

The paper explores the problem of \emph{spectral compressed sensing}, which aims to recover a spectrally sparse signal from a small random subset of its $n$ time domain samples.

Matrix Completion · Super-Resolution

Spectral Compressed Sensing via Structured Matrix Completion

no code implementations 16 Apr 2013 Yuxin Chen, Yuejie Chi

The paper studies the problem of recovering a spectrally sparse object from a small number of time domain samples.

Matrix Completion · Super-Resolution
