no code implementations • 26 Apr 2017 • Jason M. Klusowski, Dana Yang, W. D. Brinda
We also show that the population EM operator for mixtures of two regressions is anti-contractive from the target parameter vector if the cosine angle between the input vector and the target parameter vector is too small, thereby establishing the necessity of our conic condition.
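To make the operator in question concrete, here is a minimal NumPy sketch of the (sample) EM iteration for a symmetric mixture of two linear regressions, y = r·⟨x, θ*⟩ + ε with a hidden sign r = ±1. The Gaussian design, unit noise level, and the particular initialization are illustrative assumptions, not the paper's exact setting; the tanh weight is the standard E-step posterior mean of the sign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic symmetric mixture of two linear regressions:
# y = r * <x, theta_star> + noise, with hidden sign r = +/-1.
n, d, sigma = 2000, 5, 1.0
theta_star = np.ones(d)
X = rng.standard_normal((n, d))
r = rng.choice([-1.0, 1.0], size=n)
y = r * (X @ theta_star) + sigma * rng.standard_normal(n)

def em_step(theta):
    """One sample EM update: E-step soft sign weights, M-step weighted LS."""
    w = np.tanh(y * (X @ theta) / sigma**2)        # E[r | x, y, theta]
    return np.linalg.solve(X.T @ X, X.T @ (w * y))

# Iterate from an initialization positively correlated with theta_star,
# in the spirit of the cone-type condition discussed above.
theta = theta_star + 0.3 * rng.standard_normal(d)
for _ in range(50):
    theta = em_step(theta)
```

With a well-correlated initialization the iterates settle near ±θ* (the target is identifiable only up to sign); a nearly orthogonal initialization is exactly the regime where the anti-contractivity result above bites.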
no code implementations • ICML 2020 • Pierre C. Bellec, Dana Yang
Our theory reveals that if the Tikhonov regularizers share the same penalty matrix with different tuning parameters, a convex procedure based on $Q$-aggregation achieves the mean square error of the best estimator, up to a small error term no larger than $C\sigma^2$, where $\sigma^2$ is the noise level and $C>0$ is an absolute constant.
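A small sketch of the estimator family being aggregated: Tikhonov (generalized ridge) estimators that share one penalty matrix M and differ only in the tuning parameter. The Q-aggregation step itself is replaced here by a simple validation-based selection, purely as a hypothetical stand-in; the dimensions, noise level, and grid of tuning parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tikhonov family sharing one penalty matrix M, indexed by lam:
#   beta_lam = (X'X + lam * M)^{-1} X'y
n, d, sigma = 200, 10, 0.5
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + sigma * rng.standard_normal(n)

M = np.eye(d)                   # shared penalty matrix (plain ridge case)
lams = np.logspace(-3, 3, 13)   # candidate tuning parameters

def tikhonov(lam):
    return np.linalg.solve(X.T @ X + lam * M, X.T @ y)

candidates = [tikhonov(lam) for lam in lams]

# Hypothetical stand-in for Q-aggregation: select the candidate that
# minimizes squared error on a fresh validation sample.
Xv = rng.standard_normal((n, d))
yv = Xv @ beta_true + sigma * rng.standard_normal(n)
best = min(candidates, key=lambda b: np.sum((yv - Xv @ b) ** 2))
```

The point of the result above is that a convex Q-aggregation procedure matches the best candidate's mean square error up to an additive Cσ² term, without needing fresh data.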
no code implementations • 19 Jul 2019 • Dana Yang, John Lafferty, David Pollard
Quantile regression is a tool for learning conditional distributions.
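The loss underlying quantile regression is the pinball (check) loss, whose population minimizer is the τ-th quantile; quantile regression minimizes the same loss over conditional models. A minimal sketch, recovering the 0.9-quantile of a sample by subgradient descent (step sizes and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def pinball(u, tau):
    """Check (pinball) loss: tau*u if u >= 0, else (tau - 1)*u."""
    return np.maximum(tau * u, (tau - 1.0) * u)

# The tau-th quantile minimizes the expected pinball loss; recover the
# 0.9-quantile of a Gaussian sample by subgradient descent on that loss.
tau = 0.9
y = rng.standard_normal(20_000)
m = 0.0
for t in range(5_000):
    grad = np.mean(np.where(y - m >= 0.0, tau, tau - 1.0))  # = tau - F_n(m)
    m += 10.0 / (t + 1.0) * grad

loss = pinball(y - m, tau).mean()
```

Replacing the scalar m with a model m(x) (e.g. a linear function of covariates) and minimizing the same empirical loss gives conditional quantile estimates, which is the sense in which quantile regression learns conditional distributions.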
no code implementations • 21 Sep 2019 • Jiaming Xu, Kuang Xu, Dana Yang
We study the query complexity of a learner-private sequential learning problem, motivated by the privacy and security concerns that eavesdropping raises in practical applications such as pricing and Federated Learning.
no code implementations • 18 Nov 2019 • Jian Ding, Yihong Wu, Jiaming Xu, Dana Yang
Motivated by applications such as discovering strong ties in social networks and assembling genome subsequences in biology, we study the problem of recovering a hidden $2k$-nearest neighbor (NN) graph in an $n$-vertex complete graph, whose edge weights are independent and distributed according to $P_n$ for edges in the hidden $2k$-NN graph and $Q_n$ otherwise.
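A small simulation of the planted model just described, assuming (purely for illustration) Gaussian edge-weight laws P = N(μ, 1) and Q = N(0, 1): vertices sit in a hidden circular order, each pair within ring distance k is an edge of the hidden 2k-NN graph and draws its weight from P, and every other pair draws from Q.

```python
import numpy as np

rng = np.random.default_rng(3)

# Planted 2k-NN model on n vertices: edges of the hidden 2k-NN graph get
# weights from P, all other pairs from Q (Gaussian choice is illustrative).
n, k, mu = 12, 2, 1.5
W = np.zeros((n, n))
perm = rng.permutation(n)                # hidden circular ordering
for a in range(n):
    for b in range(a + 1, n):
        i, j = perm[a], perm[b]
        ring_dist = min(b - a, n - (b - a))
        planted = ring_dist <= k         # edge of the hidden 2k-NN graph
        W[i, j] = W[j, i] = rng.normal(mu if planted else 0.0, 1.0)
```

The recovery problem is to infer the hidden ordering (equivalently, the planted edge set) from the symmetric weight matrix W alone; the O(n²) loop here is fine for a toy n.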
no code implementations • 23 Feb 2021 • Jiaming Xu, Kuang Xu, Dana Yang
Convex optimization with feedback is a framework where a learner relies on iterative queries and feedback to arrive at the minimizer of a convex function.
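The query/feedback loop can be sketched in its simplest form: locating the minimizer of a one-dimensional convex function by bisection on noiseless gradient-sign feedback. This shows only the plain, non-private learner; the learner-private versions studied in this line of work additionally obfuscate the query pattern against an eavesdropper, which this sketch does not attempt.

```python
# Bisection on gradient-sign feedback for a 1-D convex function.
def minimize_convex(feedback, lo, hi, eps=1e-6):
    """feedback(x) returns the sign of f'(x); shrink the bracket [lo, hi]."""
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if feedback(mid) > 0:   # gradient positive: minimizer lies left
            hi = mid
        else:
            lo = mid
        queries += 1
    return (lo + hi) / 2.0, queries

# Example: f(x) = (x - 0.3)^2 on [0, 1], so f'(x) = 2(x - 0.3).
xhat, q = minimize_convex(lambda x: 2.0 * (x - 0.3), 0.0, 1.0)
```

Each query halves the bracket, so ~log2(1/eps) queries suffice; query-complexity results in this framework quantify how much extra querying privacy constraints force on top of this baseline.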
no code implementations • 17 Mar 2021 • Jian Ding, Yihong Wu, Jiaming Xu, Dana Yang
Conversely, if $\sqrt{d} B(\mathcal{P},\mathcal{Q}) \ge 1+\epsilon$ for an arbitrarily small constant $\epsilon>0$, the reconstruction error for any estimator is shown to be bounded away from $0$ under both the sparse and dense model, resolving the conjecture in [Moharrami et al. 2019, Semerjian et al. 2020].
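Here B(·,·) is the Bhattacharyya coefficient, B(P, Q) = ∫ √(p q), between the two edge-weight laws. A quick numeric sketch for the illustrative Gaussian case P = N(0, 1), Q = N(μ, 1), where the closed form is B = exp(−μ²/8) (the paper's P and Q are general):

```python
import numpy as np

def bhattacharyya_gauss(mu, lo=-20.0, hi=20.0, m=400_001):
    """B(P, Q) = integral of sqrt(p*q) for P = N(0,1), Q = N(mu,1),
    by Riemann sum on a fine grid (tails beyond [lo, hi] are negligible)."""
    x = np.linspace(lo, hi, m)
    dx = x[1] - x[0]
    p = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
    q = np.exp(-(x - mu)**2 / 2.0) / np.sqrt(2.0 * np.pi)
    return np.sqrt(p * q).sum() * dx

b = bhattacharyya_gauss(2.0)   # closed form: exp(-mu^2 / 8) = exp(-0.5)
```

B ranges over (0, 1], equals 1 when P = Q, and shrinks as the two laws separate, so the condition √d · B(P, Q) versus 1 is a measure of whether the planted and null edge weights are distinguishable enough for reconstruction.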
no code implementations • 21 Dec 2022 • Cynthia Rush, Fiona Skerman, Alexander S. Wein, Dana Yang
In particular, we consider certain hypothesis testing problems between models with different community structures, and we show (in the low-degree polynomial framework) that testing between two options is as hard as finding the communities.