Search Results for author: Yu-Xiang Wang

Found 86 papers, 23 papers with code

Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy

no code implementations31 Dec 2022 Rachel Redberg, Yuqing Zhu, Yu-Xiang Wang

The "Propose-Test-Release" (PTR) framework is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e., those that add less noise when the input dataset is nice.

regression

Near-Optimal Differentially Private Reinforcement Learning

no code implementations9 Dec 2022 Dan Qiao, Yu-Xiang Wang

We close this gap for the JDP case by designing an $\epsilon$-JDP algorithm with a regret of $\widetilde{O}(\sqrt{SAH^2T}+S^2AH^3/\epsilon)$ which matches the information-theoretic lower bound of non-private learning for all choices of $\epsilon > S^{1.5}A^{0.5}H^2/\sqrt{T}$.

reinforcement-learning Reinforcement Learning

Offline Reinforcement Learning with Closed-Form Policy Improvement Operators

no code implementations29 Nov 2022 Jiachen Li, Edwin Zhang, Ming Yin, Qinxun Bai, Yu-Xiang Wang, William Yang Wang

Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning.

D4RL Offline RL +2

Global Optimization with Parametric Function Approximation

no code implementations16 Nov 2022 Chong Liu, Yu-Xiang Wang

We consider the problem of global optimization with noisy zeroth order oracles -- a well-motivated problem useful for various applications ranging from hyper-parameter tuning for deep learning to new material design.

Gaussian Processes

Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation

no code implementations3 Oct 2022 Dan Qiao, Yu-Xiang Wang

We study the problem of deployment efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting.

reinforcement-learning Reinforcement Learning

Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient

no code implementations3 Oct 2022 Ming Yin, Mengdi Wang, Yu-Xiang Wang

Offline reinforcement learning, which aims at optimizing sequential decision-making strategies with historical data, has been extensively applied in real-life applications.

Decision Making Offline RL +3

Differentially Private Bias-Term only Fine-tuning of Foundation Models

1 code implementation30 Sep 2022 Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

We study the problem of differentially private (DP) fine-tuning of large pre-trained models -- a recent privacy-preserving approach suitable for solving downstream tasks with sensitive data.

Privacy Preserving

Differentially Private Optimization on Large Model at Small Cost

1 code implementation30 Sep 2022 Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

Our implementation achieves state-of-the-art (SOTA) accuracy with very small extra cost: on GPT2 and at the same memory cost, BK has 1.0$\times$ the time complexity of the standard training (0.75$\times$ training speed in practice), and 0.6$\times$ the time complexity of the most efficient DP implementation (1.24$\times$ training speed in practice).

Privacy Preserving

Doubly Fair Dynamic Pricing

no code implementations23 Sep 2022 Jianyu Xu, Dan Qiao, Yu-Xiang Wang

We show that a doubly fair policy must be random to have higher revenue than the best trivial policy that assigns the same price to different groups.

Fairness

Optimal Dynamic Regret in LQR Control

no code implementations18 Jun 2022 Dheeraj Baby, Yu-Xiang Wang

We consider the problem of nonstochastic control with a sequence of quadratic losses, i.e., LQR control.

Automatic Clipping: Differentially Private Deep Learning Made Easier and Stronger

1 code implementation14 Jun 2022 Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

Per-example gradient clipping is a key algorithmic step that enables practical differential private (DP) training for deep learning models.
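
The per-example clipping step described in the abstract can be sketched as follows. This is an illustrative numpy sketch, not the paper's implementation: the function name, the `gamma` stability constant, and the two-mode interface are my assumptions; "automatic" here denotes a normalization-style re-parameterization of clipping, with the standard min-threshold clipping shown for contrast.

```python
import numpy as np

def clip_per_example(grads, mode="abadi", C=1.0, gamma=0.01):
    """Clip each row of `grads` (shape (n, d), one gradient per example).

    'abadi':     standard clipping, g * min(1, C / ||g||).
    'automatic': normalization-style clipping, g / (||g|| + gamma)
                 (a sketch of a re-parameterized clipping rule).
    """
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    if mode == "abadi":
        factors = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    else:
        factors = 1.0 / (norms + gamma)
    return grads * factors

# In full DP-SGD the clipped gradients are averaged and Gaussian noise is
# added before the optimizer step (omitted here).
g = np.array([[3.0, 4.0], [0.3, 0.4]])
clipped = clip_per_example(g, mode="abadi", C=1.0)
```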

Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks

no code implementations13 Jun 2022 Kaiqi Zhang, Ming Yin, Yu-Xiang Wang

We propose a quasi neural network to approximate the distribution propagation, which is a neural network with continuous parameters and smooth activation function.

Quantization

Offline Stochastic Shortest Path: Learning, Evaluation and Towards Optimality

no code implementations10 Jun 2022 Ming Yin, Wenjing Chen, Mengdi Wang, Yu-Xiang Wang

Goal-oriented Reinforcement Learning, where the agent needs to reach the goal state while simultaneously minimizing the cost, has received significant attention in real-world applications.

Offline Reinforcement Learning with Differential Privacy

no code implementations2 Jun 2022 Dan Qiao, Yu-Xiang Wang

The offline reinforcement learning (RL) problem is often motivated by the need to learn data-driven decision policies in financial, legal and healthcare applications.

Offline RL reinforcement-learning +1

Second Order Path Variationals in Non-Stationary Online Learning

no code implementations4 May 2022 Dheeraj Baby, Yu-Xiang Wang

We consider the problem of universal dynamic regret minimization under exp-concave and smooth losses.

Provably Confidential Language Modelling

1 code implementation NAACL 2022 Xuandong Zhao, Lei LI, Yu-Xiang Wang

Large language models are shown to memorize privacy information such as social security numbers in training data.

Language Modelling Memorization +1

Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?

no code implementations20 Apr 2022 Kaiqi Zhang, Yu-Xiang Wang

We consider a "Parallel NN" variant of deep ReLU networks and show that the standard weight decay is equivalent to promoting the $\ell_p$-sparsity ($0<p<1$) of the coefficient vector of an end-to-end learned function bases, i. e., a dictionary.

regression

Towards Differential Relational Privacy and its use in Question Answering

no code implementations30 Mar 2022 Simone Bombari, Alessandro Achille, Zijian Wang, Yu-Xiang Wang, Yusheng Xie, Kunwar Yashraj Singh, Srikar Appalaraju, Vijay Mahadevan, Stefano Soatto

While bounding general memorization can have detrimental effects on the performance of a trained model, bounding RM does not prevent effective learning.

Memorization Question Answering

Adaptive Private-K-Selection with Adaptive K and Application to Multi-label PATE

no code implementations30 Mar 2022 Yuqing Zhu, Yu-Xiang Wang

We provide an end-to-end Renyi DP based-framework for differentially private top-$k$ selection.

Multi-Label Classification

Mixed Differential Privacy in Computer Vision

no code implementations CVPR 2022 Aditya Golatkar, Alessandro Achille, Yu-Xiang Wang, Aaron Roth, Michael Kearns, Stefano Soatto

AdaMix incorporates few-shot training, or cross-modal zero-shot learning, on public data prior to private fine-tuning, to improve the trade-off.

Zero-Shot Learning

Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism

no code implementations11 Mar 2022 Ming Yin, Yaqi Duan, Mengdi Wang, Yu-Xiang Wang

However, a precise understanding of the statistical limits with function representations, remains elusive, even when such a representation is linear.

Decision Making reinforcement-learning +1

Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost

no code implementations13 Feb 2022 Dan Qiao, Ming Yin, Ming Min, Yu-Xiang Wang

In this paper, we propose a new algorithm based on stage-wise exploration and adaptive policy elimination that achieves a regret of $\widetilde{O}(\sqrt{H^4S^2AT})$ while requiring a switching cost of $O(HSA \log\log T)$.

reinforcement-learning Reinforcement Learning

Towards Agnostic Feature-based Dynamic Pricing: Linear Policies vs Linear Valuation with Unknown Noise

no code implementations27 Jan 2022 Jianyu Xu, Yu-Xiang Wang

In feature-based dynamic pricing, a seller sets appropriate prices for a sequence of products (described by feature vectors) on the fly by learning from the binary outcomes of previous sales sessions ("Sold" if valuation $\geq$ price, and "Not Sold" otherwise).

Optimal Dynamic Regret in Proper Online Learning with Strongly Convex Losses and Beyond

no code implementations21 Jan 2022 Dheeraj Baby, Yu-Xiang Wang

We study the framework of universal dynamic regret minimization with strongly convex losses.

Multivariate Trend Filtering for Lattice Data

no code implementations29 Dec 2021 Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Addison J. Hu, Ryan J. Tibshirani

We study a multivariate version of trend filtering, called Kronecker trend filtering or KTF, for the case in which the design points form a lattice in $d$ dimensions.

Privately Publishable Per-instance Privacy

no code implementations NeurIPS 2021 Rachel Redberg, Yu-Xiang Wang

We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).

Towards Instance-Optimal Offline Reinforcement Learning with Pessimism

no code implementations NeurIPS 2021 Ming Yin, Yu-Xiang Wang

We study the offline reinforcement learning (offline RL) problem, where the goal is to learn a reward-maximizing policy in an unknown Markov Decision Process (MDP) using the data coming from a policy $\mu$.

Offline RL reinforcement-learning +1

SeqPATE: Differentially Private Text Generation via Knowledge Distillation

no code implementations29 Sep 2021 Zhiliang Tian, Yingxiu Zhao, Ziyue Huang, Yu-Xiang Wang, Nevin Zhang, He He

Differentially private (DP) learning algorithms provide guarantees on identifying the existence of a training sample from model outputs.

Knowledge Distillation Sentence Completion +1

Smoothed Differential Privacy

no code implementations4 Jul 2021 Ao Liu, Yu-Xiang Wang, Lirong Xia

Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.

Optimal Accounting of Differential Privacy via Characteristic Function

1 code implementation16 Jun 2021 Yuqing Zhu, Jinshuo Dong, Yu-Xiang Wang

Characterizing the privacy degradation over compositions, i.e., privacy accounting, is a fundamental topic in differential privacy (DP) with many applications to differentially private machine learning and federated learning.

Federated Learning

Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings

no code implementations NeurIPS 2021 Ming Yin, Yu-Xiang Wang

This work studies the statistical limits of uniform convergence for offline policy evaluation (OPE) problems with model-based methods (for episodic MDP) and provides a unified framework towards optimal learning for several well-motivated offline tasks.

Offline RL

Optimal Dynamic Regret in Exp-Concave Online Learning

no code implementations23 Apr 2021 Dheeraj Baby, Yu-Xiang Wang

We consider the problem of the Zinkevich (2003)-style dynamic regret minimization in online learning with exp-concave losses.

Logarithmic Regret in Feature-based Dynamic Pricing

no code implementations NeurIPS 2021 Jianyu Xu, Yu-Xiang Wang

Feature-based dynamic pricing is an increasingly popular model of setting prices for highly differentiated products with applications in digital marketing, online sales, real estate and so on.

Marketing

Non-stationary Online Learning with Memory and Non-stochastic Control

no code implementations7 Feb 2021 Peng Zhao, Yu-Hu Yan, Yu-Xiang Wang, Zhi-Hua Zhou

We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions and thus captures temporal effects of learning problems.

Near-Optimal Offline Reinforcement Learning via Double Variance Reduction

no code implementations NeurIPS 2021 Ming Yin, Yu Bai, Yu-Xiang Wang

Our main result shows that OPDVR provably identifies an $\epsilon$-optimal policy with $\widetilde{O}(H^2/d_m\epsilon^2)$ episodes of offline data in the finite-horizon stationary transition setting, where $H$ is the horizon length and $d_m$ is the minimal marginal state-action distribution induced by the behavior policy.

Offline RL reinforcement-learning +1

An Optimal Reduction of TV-Denoising to Adaptive Online Learning

no code implementations23 Jan 2021 Dheeraj Baby, Xuandong Zhao, Yu-Xiang Wang

We consider the problem of estimating a function from $n$ noisy samples whose discrete Total Variation (TV) is bounded by $C_n$.

Denoising Time Series

Improving Sparse Vector Technique with Renyi Differential Privacy

no code implementations NeurIPS 2020 Yuqing Zhu, Yu-Xiang Wang

The Sparse Vector Technique (SVT) is one of the most fundamental algorithmic tools in differential privacy (DP).
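
For context, the classic AboveThreshold routine at the core of SVT can be sketched as follows. This is the textbook variant, not the Renyi-DP improvement the paper proposes; the function name and seeded generator are my choices.

```python
import numpy as np

def above_threshold(queries, threshold, eps, rng=None):
    """Classic AboveThreshold (the core of SVT), a sketch.

    Scans a stream of sensitivity-1 queries under eps-DP and returns the
    index of the first query whose noisy value exceeds the noisy threshold
    (None if no query does).
    """
    rng = rng or np.random.default_rng(0)
    t_hat = threshold + rng.laplace(scale=2.0 / eps)
    for i, q in enumerate(queries):
        if q + rng.laplace(scale=4.0 / eps) >= t_hat:
            return i
    return None
```

The privacy cost is paid only for the single "above" answer, which is what makes SVT so useful as a subroutine.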

Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning

no code implementations6 Nov 2020 Chong Liu, Yuqing Zhu, Kamalika Chaudhuri, Yu-Xiang Wang

The Private Aggregation of Teacher Ensembles (PATE) framework is one of the most promising recent approaches in differentially private learning.

Active Learning Majority Voting Classifier

Inter-Series Attention Model for COVID-19 Forecasting

1 code implementation25 Oct 2020 Xiaoyong Jin, Yu-Xiang Wang, Xifeng Yan

COVID-19 pandemic has an unprecedented impact all over the world since early 2020.

Time Series

Adaptive Online Estimation of Piecewise Polynomial Trends

no code implementations NeurIPS 2020 Dheeraj Baby, Yu-Xiang Wang

We consider the framework of non-stationary stochastic optimization [Besbes et al, 2015] with squared error losses and noisy gradient feedback where the dynamic regret of an online learner against a time varying comparator sequence is studied.

regression Stochastic Optimization

Near-Optimal Provable Uniform Convergence in Offline Policy Evaluation for Reinforcement Learning

no code implementations7 Jul 2020 Ming Yin, Yu Bai, Yu-Xiang Wang

The problem of Offline Policy Evaluation (OPE) in Reinforcement Learning (RL) is a critical step towards applying RL in real-life applications.

Offline RL reinforcement-learning +1

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability

1 code implementation1 May 2020 Hojjat Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna

Our attack, Bullseye Polytope, improves the attack success rate of the current state-of-the-art by 26.75% in end-to-end transfer learning, while increasing attack speed by a factor of 12.

Transfer Learning

Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning

no code implementations29 Jan 2020 Ming Yin, Yu-Xiang Wang

We consider the problem of off-policy evaluation for reinforcement learning, where the goal is to estimate the expected reward of a target policy $\pi$ using offline data collected by running a logging policy $\mu$.

Off-policy evaluation reinforcement-learning

Semantic Guided and Response Times Bounded Top-k Similarity Search over Knowledge Graphs

2 code implementations15 Oct 2019 Yu-Xiang Wang, Arijit Khan, Tianxing Wu, Jiahui Jin, Haijiang Yan

We face two challenges on graph query over a knowledge graph: (1) the structural gap between $G_Q$ and the predefined schema in $G$ causes mismatch with query graph, (2) users cannot view the answers until the graph query terminates, leading to a longer system response time (SRT).

Databases

Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting

2 code implementations NeurIPS 2019 Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, Xifeng Yan

Time series forecasting is an important problem across many domains, including predictions of solar plant energy output, electricity consumption, and traffic jam situation.

Time Series Forecasting

Online Forecasting of Total-Variation-bounded Sequences

1 code implementation NeurIPS 2019 Dheeraj Baby, Yu-Xiang Wang

We design an $O(n\log n)$-time algorithm that achieves a cumulative square error of $\tilde{O}(n^{1/3}C_n^{2/3}\sigma^{4/3} + C_n^2)$ with high probability. We also prove a lower bound that matches the upper bound in all parameters (up to a $\log(n)$ factor).

Stochastic Optimization

Doubly Robust Crowdsourcing

no code implementations8 Jun 2019 Chong Liu, Yu-Xiang Wang

Large-scale labeled dataset is the indispensable fuel that ignites the AI revolution as we see today.

Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling

no code implementations NeurIPS 2019 Tengyang Xie, Yifei Ma, Yu-Xiang Wang

To solve this problem, we consider a marginalized importance sampling (MIS) estimator that recursively estimates the state marginal distribution for the target policy at every step.

Off-policy evaluation reinforcement-learning
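
The recursive state-marginal estimate mentioned in the abstract can be sketched as follows. This is a minimal illustration under my own interface assumptions: `P_pi` is an estimated state-to-state transition matrix already marginalized over the target policy's actions, and `d0` is the initial state distribution.

```python
import numpy as np

def state_marginals(P_pi, d0, H):
    """Recursively propagate the target policy's state marginals (sketch).

    d_t = P_pi^T d_{t-1}, for t = 1, ..., H-1; returns an (H, S) array of
    the per-step marginals d_0, ..., d_{H-1}.
    """
    d = [np.asarray(d0, dtype=float)]
    for _ in range(1, H):
        d.append(P_pi.T @ d[-1])
    return np.stack(d)
```

The MIS estimator then importance-weights rewards by the ratio of these marginals to the logging policy's, avoiding the exponential variance of full-trajectory importance sampling.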

Provably Efficient Q-Learning with Low Switching Cost

no code implementations NeurIPS 2019 Yu Bai, Tengyang Xie, Nan Jiang, Yu-Xiang Wang

We take initial steps in studying PAC-MDP algorithms with limited adaptivity, that is, algorithms that change its exploration policy as infrequently as possible during regret minimization.

Q-Learning

A Higher-Order Kolmogorov-Smirnov Test

no code implementations24 Mar 2019 Veeranjaneyulu Sadhanala, Yu-Xiang Wang, Aaditya Ramdas, Ryan J. Tibshirani

We present an extension of the Kolmogorov-Smirnov (KS) two-sample test, which can be more sensitive to differences in the tails.

Imitation-Regularized Offline Learning

no code implementations15 Jan 2019 Yifei Ma, Yu-Xiang Wang, Balakrishnan Narayanaswamy

To solve both problems, we show how one can use policy improvement (PIL) objectives, regularized by policy imitation (IML).

Multi-Armed Bandits

ProxQuant: Quantized Neural Networks via Proximal Operators

1 code implementation ICLR 2019 Yu Bai, Yu-Xiang Wang, Edo Liberty

To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights.

Quantization

Subsampled Rényi Differential Privacy and Analytical Moments Accountant

1 code implementation31 Jul 2018 Yu-Xiang Wang, Borja Balle, Shiva Kasiviswanathan

We study the problem of subsampling in differential privacy (DP), a question that is the centerpiece behind many successful differentially private machine learning algorithms.

BIG-bench Machine Learning

Patch-Based Image Hallucination for Super Resolution with Detail Reconstruction from Similar Sample Images

no code implementations3 Jun 2018 Chieh-Chi Kao, Yu-Xiang Wang, Jonathan Waltman, Pradeep Sen

Image hallucination and super-resolution have been studied for decades, and many approaches have been proposed to upsample low-resolution images using information from the images themselves, multiple example images, or large image databases.

Super-Resolution

An end-to-end Differentially Private Latent Dirichlet Allocation Using a Spectral Algorithm

no code implementations ICML 2020 Christopher DeCarolis, Mukul Ram, Seyed A. Esmaeili, Yu-Xiang Wang, Furong Huang

Overall, by combining the sensitivity and utility characterization, we obtain an end-to-end differentially private spectral algorithm for LDA and identify the corresponding configuration that outperforms others in any specific regime.

Variational Inference

Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising

1 code implementation ICML 2018 Borja Balle, Yu-Xiang Wang

The Gaussian mechanism is an essential building block used in multitude of differentially private data analysis algorithms.

Denoising
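
For context, the standard Gaussian mechanism with its classical calibration can be sketched as follows. The classical formula shown is the baseline the paper improves on (it is valid only for $\epsilon < 1$); the paper's analytic mechanism instead computes the exact, smaller noise scale numerically and covers all $\epsilon$. Function name and interface are my assumptions.

```python
import math
import random

def gaussian_mechanism(value, sensitivity, eps, delta, rng=None):
    """Release `value` with Gaussian noise (sketch, classical calibration).

    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps, valid for eps < 1.
    """
    rng = rng or random.Random(0)
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / eps
    return value + rng.gauss(0.0, sigma), sigma
```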

signSGD: Compressed Optimisation for Non-Convex Problems

3 code implementations ICML 2018 Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, Anima Anandkumar

Using a theorem by Gauss we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD.
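
The sign-and-majority-vote aggregation described above can be sketched in a few lines. This is an illustrative sketch of the communication pattern, not the paper's code; names and the learning-rate convention are mine.

```python
import numpy as np

def signsgd_step(worker_grads, lr):
    """One signSGD-with-majority-vote step (sketch).

    Each worker transmits only the sign of its gradient (1 bit per
    coordinate); the server takes an elementwise majority vote over the
    workers' signs and applies the voted direction as the update.
    """
    signs = np.sign(worker_grads)       # (workers, d) entries in {-1, 0, +1}
    vote = np.sign(signs.sum(axis=0))   # elementwise majority vote
    return -lr * vote                   # parameter update
```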

Detecting and Correcting for Label Shift with Black Box Predictors

1 code implementation ICML 2018 Zachary C. Lipton, Yu-Xiang Wang, Alex Smola

Faced with distribution shift between training and test set, we wish to detect and quantify the shift, and to correct our classifiers without test set labels.

Medical Diagnosis
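
The core of the correction described above (often called Black Box Shift Estimation) can be sketched as a single linear solve. This is a minimal sketch under my own naming; it assumes the black-box classifier's confusion matrix on source data is invertible.

```python
import numpy as np

def bbse_weights(confusion, target_pred_dist):
    """Estimate label-shift importance weights (sketch).

    confusion[i, j]      = P_source(classifier predicts i, true label j)
    target_pred_dist[i]  = P_target(classifier predicts i)
    Solves C w = mu for w[j] = q(y=j) / p(y=j); these weights can then
    re-weight the source loss to correct the classifier, without any
    test-set labels.
    """
    return np.linalg.solve(confusion, target_pred_dist)
```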

Higher-Order Total Variation Classes on Grids: Minimax Theory and Trend Filtering Methods

no code implementations NeurIPS 2017 Veeranjaneyulu Sadhanala, Yu-Xiang Wang, James L. Sharpnack, Ryan J. Tibshirani

To move past this, we define two new higher-order TV classes, based on two ways of compiling the discrete derivatives of a parameter across the nodes.

Adaptive Measurement Network for CS Image Reconstruction

1 code implementation23 Sep 2017 Xuemei Xie, Yu-Xiang Wang, Guangming Shi, Chenye Wang, Jiang Du, Zhifu Zhao

In this paper, we propose an adaptive measurement network in which measurement is obtained by learning.

Compressive Sensing Image Reconstruction

Non-stationary Stochastic Optimization under $L_{p,q}$-Variation Measures

no code implementations9 Aug 2017 Xi Chen, Yining Wang, Yu-Xiang Wang

We consider a non-stationary sequential stochastic optimization problem, in which the underlying cost functions change over time under a variation budget constraint.

Stochastic Optimization

Per-instance Differential Privacy

no code implementations24 Jul 2017 Yu-Xiang Wang

We consider a refinement of differential privacy, per-instance differential privacy (pDP), which captures the privacy of a specific individual with respect to a fixed data set.

Optimal and Adaptive Off-policy Evaluation in Contextual Bandits

2 code implementations ICML 2017 Yu-Xiang Wang, Alekh Agarwal, Miroslav Dudik

We study the off-policy evaluation problem---estimating the value of a target policy using data collected by another policy---under the contextual bandit model.

Multi-Armed Bandits Off-policy evaluation
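
The basic inverse propensity scoring (IPS) estimator that this line of work builds on can be sketched as follows. This is the textbook baseline, not the paper's adaptive estimator; names are mine.

```python
import numpy as np

def ips_value(rewards, target_probs, logging_probs):
    """IPS estimate of a target policy's value (sketch).

    Each logged round contributes r * pi(a|x) / mu(a|x), where mu is the
    logging policy and pi the target policy; the mean is unbiased whenever
    mu puts positive probability on every action pi can take.
    """
    weights = np.asarray(target_probs) / np.asarray(logging_probs)
    return float(np.mean(np.asarray(rewards) * weights))
```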

Attributing Hacks

1 code implementation7 Nov 2016 Ziqi Liu, Alexander J. Smola, Kyle Soska, Yu-Xiang Wang, Qinghua Zheng, Jun Zhou

That is, given properties of sites and the temporal occurrence of attacks, we are able to attribute individual attacks to joint causes and vulnerabilities, as well as estimating the evolution of these vulnerabilities over time.

A Theoretical Analysis of Noisy Sparse Subspace Clustering on Dimensionality-Reduced Data

no code implementations24 Oct 2016 Yining Wang, Yu-Xiang Wang, Aarti Singh

Subspace clustering is the problem of partitioning unlabeled data points into a number of clusters so that data points within one cluster lie approximately on a low-dimensional linear subspace.

Dimensionality Reduction

On-Average KL-Privacy and its equivalence to Generalization for Max-Entropy Mechanisms

no code implementations8 May 2016 Yu-Xiang Wang, Jing Lei, Stephen E. Fienberg

We define On-Average KL-Privacy and present its properties and connections to differential privacy, generalization and information-theoretic quantities including max-information and mutual information.

A Minimax Theory for Adaptive Data Analysis

no code implementations13 Feb 2016 Yu-Xiang Wang, Jing Lei, Stephen E. Fienberg

In this paper, we propose a minimax framework for adaptive data analysis.

Differentially private subspace clustering

no code implementations NeurIPS 2015 Yining Wang, Yu-Xiang Wang, Aarti Singh

Subspace clustering is an unsupervised learning problem that aims at grouping data points into multiple "clusters" so that data points in a single cluster lie approximately on a low-dimensional linear subspace.

Motion Segmentation

Fast Differentially Private Matrix Factorization

no code implementations6 May 2015 Ziqi Liu, Yu-Xiang Wang, Alexander J. Smola

Differentially private collaborative filtering is a challenging task, both in terms of accuracy and speed.

Collaborative Filtering

Graph Connectivity in Noisy Sparse Subspace Clustering

no code implementations4 Apr 2015 Yining Wang, Yu-Xiang Wang, Aarti Singh

A line of recent work (4, 19, 24, 20) provided strong theoretical guarantee for sparse subspace clustering (4), the state-of-the-art algorithm for subspace clustering, on both noiseless and noisy data sets.

Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo

no code implementations26 Feb 2015 Yu-Xiang Wang, Stephen E. Fienberg, Alex Smola

We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility.

Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle

no code implementations23 Feb 2015 Yu-Xiang Wang, Jing Lei, Stephen E. Fienberg

Lastly, we extend some of the results to the more practical $(\epsilon,\delta)$-differential privacy and establish the existence of a phase-transition on the class of problems that are approximately privately learnable with respect to how small $\delta$ needs to be.

Trend Filtering on Graphs

no code implementations28 Oct 2014 Yu-Xiang Wang, James Sharpnack, Alex Smola, Ryan J. Tibshirani

We introduce a family of adaptive estimators on graphs, based on penalizing the $\ell_1$ norm of discrete graph differences.

regression
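
The penalty described above (the $\ell_1$ norm of discrete graph differences) can be made concrete with a small sketch. This shows only the zeroth-order case, where the graph difference operator is the edge incidence matrix, and only evaluates the objective; actually minimizing it requires a fused-lasso-style solver. Names are mine.

```python
import numpy as np

def incidence_matrix(edges, n):
    """Edge incidence matrix D with one row per edge (i, j):
    ||D beta||_1 = sum over edges of |beta_i - beta_j|."""
    D = np.zeros((len(edges), n))
    for r, (i, j) in enumerate(edges):
        D[r, i], D[r, j] = 1.0, -1.0
    return D

def gtf_objective(beta, y, D, lam):
    """Graph trend filtering objective: least squares + l1 graph penalty."""
    return 0.5 * np.sum((y - beta) ** 2) + lam * np.sum(np.abs(D @ beta))
```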

Parallel and Distributed Block-Coordinate Frank-Wolfe Algorithms

no code implementations22 Sep 2014 Yu-Xiang Wang, Veeranjaneyulu Sadhanala, Wei Dai, Willie Neiswanger, Suvrit Sra, Eric P. Xing

We develop parallel and distributed Frank-Wolfe algorithms; the former on shared memory machines with mini-batching, and the latter in a delayed update framework.

The Falling Factorial Basis and Its Statistical Applications

no code implementations3 May 2014 Yu-Xiang Wang, Alex Smola, Ryan J. Tibshirani

We study a novel spline-like basis, which we name the "falling factorial basis", bearing many similarities to the classic truncated power basis.

Provable Subspace Clustering: When LRR meets SSC

no code implementations NeurIPS 2013 Yu-Xiang Wang, Huan Xu, Chenlei Leng

Sparse Subspace Clustering (SSC) and Low-Rank Representation (LRR) are both considered as the state-of-the-art methods for {\em subspace clustering}.

Noisy Sparse Subspace Clustering

no code implementations5 Sep 2013 Yu-Xiang Wang, Huan Xu

This paper considers the problem of subspace clustering under noise.
