Search Results for author: Dana Yang

Found 7 papers, 0 papers with code

The planted matching problem: Sharp threshold and infinite-order phase transition

no code implementations · 17 Mar 2021 · Jian Ding, Yihong Wu, Jiaming Xu, Dana Yang

Conversely, if $\sqrt{d} B(\mathcal{P},\mathcal{Q}) \ge 1+\epsilon$ for an arbitrarily small constant $\epsilon>0$, the reconstruction error for any estimator is shown to be bounded away from $0$ under both the sparse and dense models, resolving the conjecture in [Moharrami et al. 2019, Semerjian et al. 2020].
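The threshold in this snippet is stated in terms of the Bhattacharyya coefficient $B(\mathcal{P},\mathcal{Q})$ between the two edge-weight laws. A minimal sketch of that quantity for discrete distributions, with the threshold check written as in the snippet (the distributions below are illustrative, not from the paper):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient B(P, Q) = sum_i sqrt(p_i * q_i)
    for two discrete distributions given as probability vectors."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(np.sqrt(p * q)))

def above_threshold(d, p, q, eps=0.0):
    """Check the impossibility condition sqrt(d) * B(P, Q) >= 1 + eps
    from the snippet (d plays the role of the degree parameter)."""
    return np.sqrt(d) * bhattacharyya(p, q) >= 1 + eps

# Identical distributions give B = 1, the maximal (hardest) case:
print(bhattacharyya([0.5, 0.5], [0.5, 0.5]))  # 1.0
```

$B$ ranges from $0$ (disjoint supports, easy reconstruction) to $1$ (identical laws, impossible), which is why the threshold couples it with the degree $d$.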

Learner-Private Convex Optimization

no code implementations · 23 Feb 2021 · Jiaming Xu, Kuang Xu, Dana Yang

Convex optimization with feedback is a framework where a learner relies on iterative queries and feedback to arrive at the minimizer of a convex function.
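The query/feedback framework described above can be illustrated in its simplest one-dimensional form, where each "query" is a function evaluation and the feedback drives the next query. This toy ternary search is only a sketch of the framework, not the paper's learner-private scheme:

```python
def ternary_search_min(f, lo, hi, n_queries=100):
    """Locate the minimizer of a convex function f on [lo, hi] using
    only iterative value queries: each round queries two points and
    uses the comparison feedback to shrink the search interval."""
    for _ in range(n_queries):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2  # minimizer cannot lie in (m2, hi]
        else:
            lo = m1  # minimizer cannot lie in [lo, m1)
    return (lo + hi) / 2

x_star = ternary_search_min(lambda x: (x - 2.0) ** 2, 0.0, 10.0)
print(round(x_star, 3))  # 2.0
```

The privacy question the paper studies is what an eavesdropper who observes such a query sequence can infer about the minimizer, and at what cost in extra queries that inference can be prevented.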

Consistent recovery threshold of hidden nearest neighbor graphs

no code implementations · 18 Nov 2019 · Jian Ding, Yihong Wu, Jiaming Xu, Dana Yang

Motivated by applications such as discovering strong ties in social networks and assembling genome subsequences in biology, we study the problem of recovering a hidden $2k$-nearest neighbor (NN) graph in an $n$-vertex complete graph, whose edge weights are independent and distributed according to $P_n$ for edges in the hidden $2k$-NN graph and $Q_n$ otherwise.
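The observation model in this snippet is easy to simulate. A minimal sketch, assuming the hidden $2k$-NN graph joins each vertex to its $k$ neighbors on either side of a ring ordering, with illustrative Gaussian choices for $P_n$ and $Q_n$ (both assumptions for the demo, not taken from the paper):

```python
import numpy as np

def sample_hidden_nn_weights(n, k, p_sampler, q_sampler, rng=None):
    """Sample the weighted complete graph on n vertices: edges of the
    hidden 2k-NN graph (ring distance <= k) draw weights from P_n,
    all other edges draw weights from Q_n."""
    rng = rng or np.random.default_rng()
    W = np.zeros((n, n))
    hidden = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            ring_dist = min(j - i, n - (j - i))
            is_hidden = ring_dist <= k
            w = p_sampler(rng) if is_hidden else q_sampler(rng)
            W[i, j] = W[j, i] = w
            hidden[i, j] = hidden[j, i] = is_hidden
    return W, hidden

# Illustrative laws: P = N(1, 1) on hidden edges, Q = N(0, 1) elsewhere.
W, H = sample_hidden_nn_weights(
    10, 2,
    p_sampler=lambda r: r.normal(1.0, 1.0),
    q_sampler=lambda r: r.normal(0.0, 1.0),
    rng=np.random.default_rng(0),
)
print(H.sum() // 2)  # n * k = 20 hidden edges
```

The recovery task is then to identify the support of `H` from `W` alone, which is where the consistency threshold in the title comes in.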

Optimal query complexity for private sequential learning against eavesdropping

no code implementations · 21 Sep 2019 · Jiaming Xu, Kuang Xu, Dana Yang

We study the query complexity of a learner-private sequential learning problem, motivated by the privacy and security concerns due to eavesdropping that arise in practical applications such as pricing and Federated Learning.


Fair quantile regression

no code implementations · 19 Jul 2019 · Dana Yang, John Lafferty, David Pollard

Quantile regression is a tool for learning conditional distributions.
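The sense in which quantile regression "learns conditional distributions" is that minimizing the pinball (quantile) loss at level $\tau$ recovers the conditional $\tau$-quantile. A minimal sketch of that loss and the fact that the empirical quantile minimizes it (the data here are illustrative):

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Pinball (quantile) loss at level tau: tau * r if the residual
    r = y - pred is positive, (tau - 1) * r otherwise, averaged."""
    r = np.asarray(y, dtype=float) - pred
    return float(np.mean(np.maximum(tau * r, (tau - 1) * r)))

# The empirical tau-quantile minimizes the average pinball loss:
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
tau = 0.5
q = np.quantile(y, tau)
grid = np.linspace(0, 6, 601)
best = grid[np.argmin([pinball_loss(y, c, tau) for c in grid])]
print(q, best)  # both ≈ 3.0 (the median)
```

Sweeping $\tau$ over $(0, 1)$ traces out the whole conditional distribution, one quantile at a time; the fairness question the paper raises is how to constrain those fitted quantiles across protected groups.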


The cost-free nature of optimally tuning Tikhonov regularizers and other ordered smoothers

no code implementations · ICML 2020 · Pierre C. Bellec, Dana Yang

Our theory reveals that if the Tikhonov regularizers share the same penalty matrix with different tuning parameters, a convex procedure based on $Q$-aggregation achieves the mean square error of the best estimator, up to a small error term no larger than $C\sigma^2$, where $\sigma^2$ is the noise level and $C>0$ is an absolute constant.
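The family of estimators in this snippet is the set of Tikhonov (ridge-type) regressors sharing one penalty matrix and differing only in the tuning parameter. A minimal sketch that builds such a family over a grid of tuning parameters (the design, noise level, and grid below are illustrative; the paper's $Q$-aggregation step that selects among them is not implemented here):

```python
import numpy as np

def tikhonov_family(X, y, penalty, lambdas):
    """Tikhonov estimators sharing a single penalty matrix, indexed by
    the tuning parameter: beta(lmb) = (X'X + lmb * penalty)^{-1} X'y."""
    XtX, Xty = X.T @ X, X.T @ y
    return [np.linalg.solve(XtX + lmb * penalty, Xty) for lmb in lambdas]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
beta = np.ones(5)
y = X @ beta + 0.1 * rng.normal(size=50)   # sigma = 0.1 noise
fits = tikhonov_family(X, y, np.eye(5), [0.01, 0.1, 1.0, 10.0])
print(len(fits), fits[0].shape)  # 4 (5,)
```

The result quoted above says a convex aggregate of `fits` can match the mean squared error of the best single member up to $C\sigma^2$, i.e. optimal tuning is essentially cost-free.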

Estimating the Coefficients of a Mixture of Two Linear Regressions by Expectation Maximization

no code implementations · 26 Apr 2017 · Jason M. Klusowski, Dana Yang, W. D. Brinda

We also show that the population EM operator for mixtures of two regressions is anti-contractive from the target parameter vector if the cosine angle between the input vector and the target parameter vector is too small, thereby establishing the necessity of our conic condition.
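The EM iteration behind this snippet can be sketched for the symmetric two-component model $y = R\,\langle\beta, x\rangle + \text{noise}$ with a hidden sign $R = \pm 1$. This is a sample-level sketch of the standard EM update for that model, not the paper's population-operator analysis; the data-generating parameters below are illustrative:

```python
import numpy as np

def em_two_regressions(X, y, beta0, sigma=1.0, n_iter=50):
    """EM for the symmetric mixture y = R * <beta, x> + noise, R = ±1
    with prob 1/2. E-step: posterior sign weight per sample;
    M-step: sign-reweighted least squares."""
    beta = beta0.copy()
    XtX = X.T @ X
    for _ in range(n_iter):
        # E-step: s_i = 2 * P(R_i = +1 | data) - 1 under current beta
        s = np.tanh(y * (X @ beta) / sigma**2)
        # M-step: solve the normal equations (X'X) beta = X' (s * y)
        beta = np.linalg.solve(XtX, X.T @ (s * y))
    return beta

rng = np.random.default_rng(1)
n, d = 2000, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, -2.0, 0.5])
R = rng.choice([-1.0, 1.0], size=n)
y = R * (X @ beta_true) + 0.1 * rng.normal(size=n)
beta_hat = em_two_regressions(X, y, beta0=beta_true + 0.3)
```

Since $\beta$ and $-\beta$ generate the same data, convergence is to the target only up to sign; the snippet's anti-contraction result says the iteration can instead move away from the target when the initializer's cosine angle with it is too small.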
