
no code implementations • 12 Aug 2021 • Mengmeng Tian, Yuxin Chen, YuAn Liu, Zehui Xiong, Cyril Leung, Chunyan Miao

Designing proper incentives for the FL clients is challenging because the task is trained privately by the clients.

no code implementations • 26 Jul 2021 • Yuling Yan, Yuxin Chen, Jianqing Fan

Particularly worth highlighting is the inference procedure built on top of $\textsf{HeteroPCA}$, which is not only valid but also statistically efficient for broader scenarios (e.g., it covers a wider range of missing rates and signal-to-noise ratios).

1 code implementation • 26 Jul 2021 • Yuxin Chen, Ziqi Zhang, Chunfeng Yuan, Bing Li, Ying Deng, Weiming Hu

Graph convolutional networks (GCNs) have been widely used and achieved remarkable results in skeleton-based action recognition.
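The propagation rule behind such models can be illustrated with a minimal sketch on a toy 3-joint "skeleton" graph; the normalized-adjacency layer below is the generic GCN rule, not the specific architecture proposed in this paper:

```python
import numpy as np

# One graph-convolution layer: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} X W)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # toy joint adjacency
A_hat = A + np.eye(3)                     # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))           # per-joint input features
W = rng.standard_normal((4, 2))           # learnable weights

H = np.maximum(A_norm @ X @ W, 0.0)       # propagate + ReLU
```

Each joint's output feature is a weighted mix of its own and its neighbors' transformed features, which is what lets the network exploit the skeleton topology.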

Ranked #1 on Skeleton Based Action Recognition on NTU RGB+D 120

no code implementations • 24 May 2021 • Wenhao Zhan, Shicong Cen, Baihe Huang, Yuxin Chen, Jason D. Lee, Yuejie Chi

Policy optimization, which learns the policy of interest by maximizing the value function via large-scale optimization techniques, lies at the heart of modern reinforcement learning (RL).

no code implementations • 17 May 2021 • Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei

The current paper pertains to a scenario with value-based linear representation, which postulates the linear realizability of the optimal Q-function (also called the "linear $Q^{\star}$ problem").

1 code implementation • 16 May 2021 • Ziyu Ye, Yuxin Chen, Haitao Zheng

We also provide an extensive empirical study on how a biased training anomaly set affects the anomaly score function and therefore the detection performance on different anomaly classes.

no code implementations • 14 Apr 2021 • Chinmaya Mahesh, Kristin Dona, David W. Miller, Yuxin Chen

Data-intensive science is increasingly reliant on real-time processing capabilities and machine learning workflows, in order to filter and analyze the extreme volumes of data being collected.

no code implementations • 7 Apr 2021 • Gen Li, Changxiao Cai, Yuantao Gu, H. Vincent Poor, Yuxin Chen

Eigenvector perturbation analysis plays a vital role in various statistical data science applications.

no code implementations • 22 Feb 2021 • Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

The softmax policy gradient (PG) method, which performs gradient ascent under softmax policy parameterization, is arguably one of the de facto implementations of policy optimization in modern reinforcement learning.
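For intuition, here is a minimal sketch of softmax PG on a one-state bandit with known rewards; the toy rewards and stepsize are illustrative assumptions, not the setting analyzed in the paper:

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def pg_step(theta, rewards, lr=0.5):
    pi = softmax(theta)
    v = pi @ rewards               # current expected reward
    grad = pi * (rewards - v)      # exact softmax PG gradient: pi_a (r_a - v)
    return theta + lr * grad       # gradient ascent on the value

rewards = np.array([1.0, 0.5, 0.2])
theta = np.zeros(3)
for _ in range(200):
    theta = pg_step(theta, rewards)
pi = softmax(theta)                # mass concentrates on the best arm
```

The iterates drift toward the reward-maximizing arm, though the gradient $\pi_a(r_a - v)$ shrinks as the policy saturates, hinting at the slow-convergence phenomena such analyses study.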

no code implementations • 12 Feb 2021 • Gen Li, Changxiao Cai, Yuxin Chen, Yuantao Gu, Yuting Wei, Yuejie Chi

Take a $\gamma$-discounted infinite-horizon MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an entrywise $\varepsilon$-accurate estimate of the optimal Q-function, state-of-the-art theory for Q-learning proves that a sample size on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$ is sufficient, which, however, fails to match with the existing minimax lower bound.
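A minimal synchronous Q-learning sketch under a generative model makes the setting concrete; the tiny MDP, horizon, and rescaled-linear stepsize below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
# Hypothetical 2-state, 2-action MDP: P[s, a] is a distribution over
# next states, R[s, a] a deterministic reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.5, 2.0]])

# Ground-truth optimal Q-function via value iteration
Q_star = np.zeros((2, 2))
for _ in range(1000):
    Q_star = R + gamma * P @ Q_star.max(axis=1)

# Synchronous Q-learning: each (s, a) gets one sampled transition
# from the generative model per iteration.
Q = np.zeros((2, 2))
for t in range(1, 20001):
    eta = 1.0 / (1.0 + (1.0 - gamma) * t)  # rescaled linear stepsize
    V = Q.max(axis=1)
    for s in range(2):
        for a in range(2):
            s_next = rng.choice(2, p=P[s, a])
            Q[s, a] += eta * (R[s, a] + gamma * V[s_next] - Q[s, a])

err = np.abs(Q - Q_star).max()             # entrywise estimation error
```

The sample-complexity question above is exactly how many such sampled transitions are needed before `err` drops below a target $\varepsilon$.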

no code implementations • ICLR 2021 • Ayya Alieva, Aiden Aceves, Jialin Song, Stephen Mayo, Yisong Yue, Yuxin Chen

In particular, we focus on a class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via submodular surrogates).
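As a concrete instance of submodular maximization, here is the classic greedy algorithm on a toy max-coverage problem; this is the generic $(1-1/e)$-greedy sketch, not the surrogate construction used in the paper:

```python
# Greedy submodular maximization: repeatedly pick the set with the
# largest marginal coverage gain.
def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
# chosen == [2, 0]: the 4-element set first, then the set gaining 3 more
```

Because coverage is monotone submodular, this greedy choice is provably within a $(1-1/e)$ factor of the optimal $k$-subset.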

1 code implementation • 1 Jan 2021 • Ziyu Ye, Yuxin Chen, Haitao Zheng

Given two different anomaly score functions, we formally define their difference in performance as the relative scoring bias of the anomaly detectors.

Semi-supervised Anomaly Detection • Unsupervised Anomaly Detection

no code implementations • 1 Jan 2021 • Fengxue Zhang, Yair Altas, Louise Fan, Kaustubh Vinchure, Brian Nord, Yuxin Chen

To address this issue, we propose Collision-Free Latent Space Optimization (CoFLO), which employs a novel regularizer to reduce the collision in the learned latent space and encourage the mapping from the latent space to objective value to be Lipschitz continuous.

no code implementations • 15 Dec 2020 • Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma

While the studies of spectral methods can be traced back to classical matrix perturbation theory and methods of moments, the past decade has witnessed tremendous theoretical advances in demystifying their efficacy through the lens of statistical modeling, with the aid of non-asymptotic random matrix theory.
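The prototypical spectral method is power iteration on a "signal + noise" matrix; a minimal sketch under a rank-1 spiked-model assumption (signal strength and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                    # planted signal direction
W = rng.standard_normal((n, n))
M = 10.0 * np.outer(u, u) + (W + W.T) / (2 * np.sqrt(n))  # spiked matrix

x = rng.standard_normal(n)
for _ in range(100):
    x = M @ x                             # power step
    x /= np.linalg.norm(x)                # renormalize
corr = abs(x @ u)                         # alignment with the signal
```

When the signal eigenvalue dominates the noise operator norm, the iterate aligns almost perfectly with the planted direction, which is the phenomenon the non-asymptotic theory quantifies.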

no code implementations • 27 Oct 2020 • Akash Kumar, Hanqi Zhang, Adish Singla, Yuxin Chen

As a warm-up, we show that the teaching complexity is $\Theta(d)$ for the exact teaching of linear perceptrons in $\mathbb{R}^d$, and $\Theta(d^k)$ for kernel perceptron with a polynomial kernel of order $k$.

no code implementations • 17 Oct 2020 • Farnam Mansouri, Yuxin Chen, Ara Vartanian, Xiaojin Zhu, Adish Singla

We analyze several properties of the teaching complexity parameter $TD(\sigma)$ associated with different families of the preference functions, e.g., comparison to the VC dimension of the hypothesis class and additivity/sub-additivity of $TD(\sigma)$ over disjoint domains.

no code implementations • 23 Sep 2020 • Yanxi Chen, Cong Ma, H. Vincent Poor, Yuxin Chen

We study the problem of learning mixtures of low-rank models, i.e., reconstructing multiple low-rank matrices from unlabelled linear measurements of each.

no code implementations • 20 Aug 2020 • Baihong Jin, Yingshui Tan, Albert Liu, Xiangyu Yue, Yuxin Chen, Alberto Sangiovanni Vincentelli

Incipient anomalies present milder symptoms compared to severe ones, and are more difficult to detect and diagnose due to their close resemblance to normal operating conditions.

no code implementations • 4 Aug 2020 • Yuxin Chen, Jianqing Fan, Bingyan Wang, Yuling Yan

We investigate the effectiveness of convex relaxation and nonconvex optimization in solving bilinear systems of equations under two different designs (i.e., a sort of random Fourier design and a Gaussian design).

no code implementations • 13 Jul 2020 • Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, Yuejie Chi

This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that encourages exploration -- and is closely related to soft policy iteration and trust region policy optimization.

no code implementations • 12 Jul 2020 • Yingshui Tan, Baihong Jin, Xiangyu Yue, Yuxin Chen, Alberto Sangiovanni Vincentelli

Ensemble learning is widely applied in Machine Learning (ML) to improve model performance and to mitigate decision risks.

no code implementations • 7 Jul 2020 • Baihong Jin, Yingshui Tan, Yuxin Chen, Kameshwar Poolla, Alberto Sangiovanni Vincentelli

Intermediate-Severity (IS) faults present milder symptoms compared to severe faults, and are more difficult to detect and diagnose due to their close resemblance to normal operating conditions.

no code implementations • 25 Jun 2020 • Akash Kumar, Adish Singla, Yisong Yue, Yuxin Chen

We investigate the average teaching complexity of the task, i.e., the minimal number of samples (halfspace queries) required by a teacher to help a version-space learner in locating a randomly selected target.

no code implementations • ICML 2020 • Changxiao Cai, H. Vincent Poor, Yuxin Chen

Furthermore, our findings unveil the statistical optimality of nonconvex tensor completion: it attains un-improvable $\ell_{2}$ accuracy -- including both the rates and the pre-constants -- when estimating both the unknown tensor and the underlying tensor factors.

no code implementations • NeurIPS 2020 • Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function --- is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2}+ \frac{t_{mix}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted.

no code implementations • NeurIPS 2020 • Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen

We investigate the sample efficiency of reinforcement learning in a $\gamma$-discounted infinite-horizon Markov decision process (MDP) with state space $\mathcal{S}$ and action space $\mathcal{A}$, assuming access to a generative model.

no code implementations • 21 Mar 2020 • Rati Devidze, Farnam Mansouri, Luis Haug, Yuxin Chen, Adish Singla

Machine teaching studies the interaction between a teacher and a student/learner where the teacher selects training examples for the learner to learn a specific task.

no code implementations • 3 Mar 2020 • Niklas Åkerblom, Yuxin Chen, Morteza Haghir Chehreghani

In order to learn the model parameters, we develop an online learning framework and investigate several exploration strategies such as Thompson Sampling and Upper Confidence Bound.
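A minimal UCB1 sketch on a toy Bernoulli bandit illustrates the exploration strategies mentioned; the arm means and horizon are illustrative assumptions, unrelated to the paper's navigation task:

```python
import math, random

def ucb_run(means, horizon, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(means)
    totals = [0.0] * len(means)
    for t in range(1, horizon + 1):
        if t <= len(means):               # play each arm once first
            arm = t - 1
        else:                             # mean estimate + exploration bonus
            arm = max(range(len(means)), key=lambda a:
                      totals[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

counts = ucb_run([0.9, 0.5, 0.1], horizon=2000)
```

The bonus term shrinks for well-sampled arms, so pulls of suboptimal arms grow only logarithmically in the horizon.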

no code implementations • 17 Feb 2020 • Shi Yu, Yuxin Chen, Hussain Zaidi

Our main novel contribution is the discussion of uncertainty measures for BERT, where three different approaches are systematically compared on real problems.

no code implementations • 27 Jan 2020 • Zhe Xu, Yuxin Chen, Ufuk Topcu

In the context of teaching temporal logic formulas, an exhaustive search even for a myopic solution takes exponential time (with respect to the time span of the task).

no code implementations • 15 Jan 2020 • Yuxin Chen, Jianqing Fan, Cong Ma, Yuling Yan

This paper delivers improved theoretical guarantees for the convex programming approach in low-rank matrix estimation, in the presence of (1) random noise, (2) gross sparse outliers, and (3) missing data.

no code implementations • 14 Jan 2020 • Chen Cheng, Yuting Wei, Yuxin Chen

This paper aims to address two fundamental challenges arising in eigenvector estimation and inference for a low-rank matrix from noisy observations: (1) how to estimate an unknown eigenvector when the eigen-gap (i.e., the spacing between the associated eigenvalue and the rest of the spectrum) is particularly small; (2) how to perform estimation and inference on linear functionals of an eigenvector -- a sort of "fine-grained" statistical reasoning that goes far beyond the usual $\ell_2$ analysis.

no code implementations • NeurIPS 2019 • Changxiao Cai, Gen Li, H. Vincent Poor, Yuxin Chen

We study a noisy tensor completion problem of broad practical interest, namely, the reconstruction of a low-rank tensor from highly incomplete and randomly corrupted observations of its entries.

no code implementations • NeurIPS 2019 • Nikhil Ghosh, Yuxin Chen, Yisong Yue

In this paper, we aim to learn a low-dimensional Euclidean representation from a set of constraints of the form "item j is closer to item i than item k".

no code implementations • NeurIPS 2019 • Farnam Mansouri, Yuxin Chen, Ara Vartanian, Xiaojin Zhu, Adish Singla

In our framework, each function $\sigma \in \Sigma$ induces a teacher-learner pair with teaching complexity $TD(\sigma)$.

no code implementations • 9 Oct 2019 • Changxiao Cai, Gen Li, Yuejie Chi, H. Vincent Poor, Yuxin Chen

This paper is concerned with estimating the column space of an unknown low-rank matrix $\boldsymbol{A}^{\star}\in\mathbb{R}^{d_{1}\times d_{2}}$, given noisy and partial observations of its entries.

1 code implementation • 12 Sep 2019 • Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi

There is growing interest in large-scale machine learning and optimization over decentralized networks, e.g., in the context of multi-agent learning and federated learning.

no code implementations • 10 Sep 2019 • Baihong Jin, Yingshui Tan, Yuxin Chen, Alberto Sangiovanni-Vincentelli

The Monte Carlo dropout method has proved to be a scalable and easy-to-use approach for estimating the uncertainty of deep neural network predictions.
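The idea can be sketched in a few lines: keep dropout stochastic at prediction time and read uncertainty off the spread of repeated forward passes (toy random-weight network; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 1))

def mc_forward(x, p=0.5):
    h = np.maximum(x @ W1, 0.0)           # hidden layer + ReLU
    mask = rng.random(16) < p             # sample a fresh dropout mask
    return float((h * mask / p) @ W2)     # inverted-dropout scaling

x = rng.standard_normal(8)
samples = np.array([mc_forward(x) for _ in range(500)])
mean, std = samples.mean(), samples.std() # predictive mean / uncertainty
```

Inputs whose predictions vary strongly across masks (large `std`) are the ones the model is least certain about, which is what makes the method useful for flagging risky predictions.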

no code implementations • 26 Jul 2019 • Baihong Jin, Yingshui Tan, Alexander Nettekoven, Yuxin Chen, Ufuk Topcu, Yisong Yue, Alberto Sangiovanni Vincentelli

We show that the encoder-decoder model is able to identify the injected anomalies in a modern manufacturing process in an unsupervised fashion.

no code implementations • 10 Jun 2019 • Yuxin Chen, Jianqing Fan, Cong Ma, Yuling Yan

As a byproduct, we obtain a sharp characterization of the estimation accuracy of our de-biased estimators, which, to the best of our knowledge, are the first tractable algorithms that provably achieve full statistical efficiency (including the preconstant).

no code implementations • 17 Apr 2019 • Kevin K. Yang, Yuxin Chen, Alycia Lee, Yisong Yue

Importantly, we show that our objective function can be efficiently decomposed as a difference of submodular functions (DS), which allows us to employ DS optimization tools to greedily identify sets of constraints that increase the likelihood of finding items with high utility.

no code implementations • 28 Mar 2019 • Tian Wang, Zichen Miao, Yuxin Chen, Yi Zhou, Guangcun Shan, Hichem Snoussi

Anomaly detection in crowded scenes has remained challenging for quite a long time.

no code implementations • 20 Feb 2019 • Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan

This paper studies noisy low-rank matrix completion: given partial and noisy entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently.

no code implementations • 18 Feb 2019 • Baihong Jin, Yuxin Chen, Dan Li, Kameshwar Poolla, Alberto Sangiovanni-Vincentelli

The One-Class Support Vector Machine (OC-SVM) is a popular machine learning model for anomaly detection and hence could be used for identifying change points; however, it is sometimes difficult to obtain a good OC-SVM model that can be used on sensor measurement time series to identify the change points in system health status.

no code implementations • 25 Dec 2018 • Yuxin Chen, Morteza Haghir Chehreghani

We propose a novel approach for trip prediction by analyzing users' trip histories.

no code implementations • 30 Nov 2018 • Yuxin Chen, Chen Cheng, Jianqing Fan

The aim is to estimate the leading eigenvalue and eigenvector of $\mathbf{M}^{\star}$.

no code implementations • 15 Nov 2018 • Jialin Song, Yury S. Tokpanov, Yuxin Chen, Dagny Fleischman, Kate T. Fountaine, Harry A. Atwater, Yisong Yue

We apply numerical methods in combination with finite-difference time-domain (FDTD) simulations to optimize transmission properties of plasmonic mirror color filters, using a multi-objective figure of merit over a five-dimensional parameter space and a novel multi-fidelity Gaussian process approach.

no code implementations • 2 Nov 2018 • Jialin Song, Yuxin Chen, Yisong Yue

How can we efficiently gather information to optimize an unknown function, when presented with multiple, mutually dependent information sources with different costs?

no code implementations • 1 Nov 2018 • Shuangting Liu, Jia-Qi Zhang, Yuxin Chen, Yifan Liu, Zengchang Qin, Tao Wan

Semantic segmentation is one of the basic topics in computer vision; it aims to assign semantic labels to every pixel of an image.

1 code implementation • 23 Oct 2018 • Yanzi Zhu, Zhujun Xiao, Yuxin Chen, Zhijing Li, Max Liu, Ben Y. Zhao, Haitao Zheng

Our work demonstrates a new set of silent reconnaissance attacks, which leverages the presence of commodity WiFi devices to track users inside private homes and offices, without compromising any WiFi network, data packets, or devices.

Cryptography and Security

no code implementations • 25 Sep 2018 • Yuejie Chi, Yue M. Lu, Yuxin Chen

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization.

no code implementations • 22 Sep 2018 • Zhujun Xiao, Yanzi Zhu, Yuxin Chen, Ben Y. Zhao, Junchen Jiang, Hai-Tao Zheng

Building accurate DNN models requires training on large labeled, context-specific datasets, especially those matching the target scenario.

no code implementations • ICML 2018 • Cong Ma, Kaizheng Wang, Yuejie Chi, Yuxin Chen

Focusing on two statistical estimation problems, i.e., solving random quadratic systems of equations and low-rank matrix completion, we establish that gradient descent achieves near-optimal statistical and computational guarantees without explicit regularization.

no code implementations • NeurIPS 2019 • Anette Hunziker, Yuxin Chen, Oisin Mac Aodha, Manuel Gomez Rodriguez, Andreas Krause, Pietro Perona, Yisong Yue, Adish Singla

Our framework is both generic, allowing the design of teaching schedules for different memory models, and also interactive, allowing the teacher to adapt the schedule to the underlying forgetting mechanisms of the learner.

no code implementations • 21 Mar 2018 • Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma

This paper considers the problem of solving systems of quadratic equations, namely, recovering an object of interest $\mathbf{x}^{\natural}\in\mathbb{R}^{n}$ from $m$ quadratic equations/samples $y_{i}=(\mathbf{a}_{i}^{\top}\mathbf{x}^{\natural})^{2}$, $1\leq i\leq m$.
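A minimal sketch of this setting: spectral initialization followed by plain gradient descent on the quartic least-squares loss (a Wirtinger-flow-style illustration; the stepsize and sample size below are assumptions, not the constants analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 200
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)          # unit-norm ground truth
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2                     # quadratic measurements

# Spectral init: leading eigenvector of (1/m) sum_i y_i a_i a_i^T
Y = (A.T * y) @ A / m
w, V = np.linalg.eigh(Y)                  # eigenvalues in ascending order
x = V[:, -1] * np.sqrt(y.mean())

for _ in range(500):
    r = (A @ x) ** 2 - y                  # residuals
    grad = (A.T @ (r * (A @ x))) / m      # gradient of (1/4m)||r||^2
    x -= 0.1 * grad

# x is determined only up to global sign
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
```

With a sample size well above the information limit (here $m = 20n$), the nonconvex iterates converge to the true signal up to its inherent sign ambiguity.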

no code implementations • CVPR 2018 • Oisin Mac Aodha, Shih-An Su, Yuxin Chen, Pietro Perona, Yisong Yue

We study the problem of computer-assisted teaching with explanations.

no code implementations • 17 Feb 2018 • Yuanxin Li, Cong Ma, Yuxin Chen, Yuejie Chi

We consider the problem of recovering low-rank matrices from random rank-one measurements, which spans numerous applications including covariance sketching, phase retrieval, quantum state tomography, and learning shallow polynomial neural networks, among others.

no code implementations • NeurIPS 2018 • Yuxin Chen, Adish Singla, Oisin Mac Aodha, Pietro Perona, Yisong Yue

We highlight that adaptivity does not speed up the teaching process when considering existing models of version space learners, such as "worst-case" (the learner picks the next hypothesis randomly from the version space) and "preference-based" (the learner picks hypothesis according to some global preference).

no code implementations • ICML 2018 • Cong Ma, Kaizheng Wang, Yuejie Chi, Yuxin Chen

Recent years have seen a flurry of activities in designing provably efficient nonconvex procedures for solving statistical estimation problems.

no code implementations • 31 Jul 2017 • Yuxin Chen, Jianqing Fan, Cong Ma, Kaizheng Wang

This paper is concerned with the problem of top-$K$ ranking from pairwise comparisons.

no code implementations • 5 Jun 2017 • Pragya Sur, Yuxin Chen, Emmanuel J. Candès

When used for the purpose of statistical inference, logistic models produce p-values for the regression coefficients by using an approximation to the distribution of the likelihood-ratio test.

no code implementations • 16 Mar 2017 • Yuxin Chen, Jean-Michel Renders, Morteza Haghir Chehreghani, Andreas Krause

We consider the optimal value of information (VoI) problem, where the goal is to sequentially select a set of tests with a minimal cost, so that one can efficiently make the best decision based on the observed outcomes.

no code implementations • 19 Sep 2016 • Yuxin Chen, Emmanuel Candes

We prove that for a broad class of statistical models, the proposed projected power method makes no error---and hence converges to the maximum likelihood estimate---in a suitable regime.
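The flavor of the method can be seen on a toy $\mathbb{Z}_2$ synchronization instance: each iteration is a power step followed by entrywise projection onto $\{\pm 1\}$ (illustrative noise level and initialization; not the paper's general model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x_star = rng.choice([-1.0, 1.0], size=n)  # unknown sign vector
G = rng.standard_normal((n, n))
Y = np.outer(x_star, x_star) + (G + G.T) / np.sqrt(2)  # noisy pairwise products

x = np.sign(np.linalg.eigh(Y)[1][:, -1])  # spectral initialization
for _ in range(20):
    x = np.sign(Y @ x)                    # power step + entrywise projection

# recovery is only defined up to a global sign flip
acc = max(np.mean(x == x_star), np.mean(x == -x_star))
```

At this noise level the projected iterates lock onto the planted signs exactly, matching the "no error" regime described above.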

no code implementations • 24 May 2016 • Yuxin Chen, S. Hamed Hassani, Andreas Krause

We consider the Bayesian active learning and experimental design problem, where the goal is to learn the value of some unknown target variable through a sequence of informative, noisy tests.

no code implementations • 11 Feb 2016 • Yuxin Chen, Govinda Kamath, Changho Suh, David Tse

Motivated by applications in domains such as social networks and computational biology, we study the problem of community recovery in graphs with locality.

no code implementations • NeurIPS 2015 • Yuxin Chen, Emmanuel J. Candes

We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size---hence the title of this paper.

no code implementations • 27 Apr 2015 • Yuxin Chen, Changho Suh

To approach this minimax limit, we propose a nearly linear-time ranking scheme, called \emph{Spectral MLE}, that returns the indices of the top-$K$ items in accordance to a careful score estimate.

no code implementations • 6 Apr 2015 • Yuxin Chen, Changho Suh, Andrea J. Goldsmith

In particular, our results isolate a family of \emph{minimum} \emph{channel divergence measures} to characterize the degree of measurement corruption, which together with the size of the minimum cut of $\mathcal{G}$ dictates the feasibility of exact information recovery.

no code implementations • 19 May 2014 • Qixing Huang, Yuxin Chen, Leonidas Guibas

Maximum a posteriori (MAP) inference over discrete Markov random fields is a fundamental task spanning a wide spectrum of real-world applications, which is known to be NP-hard for general graphs.

no code implementations • 24 Feb 2014 • Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, J. Andrew Bagnell, Siddhartha Srinivasa

Instead of minimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses.

no code implementations • 6 Feb 2014 • Yuxin Chen, Leonidas J. Guibas, Qi-Xing Huang

Joint matching over a collection of objects aims at aggregating information from a large collection of similar instances (e.g., images, graphs, shapes) to improve maps between pairs of them.

no code implementations • 2 Oct 2013 • Yuxin Chen, Yuejie Chi, Andrea Goldsmith

Our method admits universally accurate covariance estimation in the absence of noise, as soon as the number of measurements exceeds the information theoretic limits.

no code implementations • 30 Apr 2013 • Yuxin Chen, Yuejie Chi

The paper explores the problem of \emph{spectral compressed sensing}, which aims to recover a spectrally sparse signal from a small random subset of its $n$ time domain samples.

no code implementations • 16 Apr 2013 • Yuxin Chen, Yuejie Chi

The paper studies the problem of recovering a spectrally sparse object from a small number of time domain samples.
