2 code implementations • 13 Feb 2020 • Yang Liu, Tiexing Wang, Yuexin Jiang, Biao Chen
In presence detection, the way training data with human presence is collected can have a significant impact on performance.
1 code implementation • 29 Jan 2020 • Yiqiang Chen, Xiaodong Yang, Xin Qin, Han Yu, Biao Chen, Zhiqi Shen
It maintains a small set of benchmark samples on the FL server and quantifies the credibility of each client's local data, without directly observing it, by computing the mutual cross-entropy between the performance of the FL model on the local datasets and that of the client's local FL model on the benchmark dataset.
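To make the credibility scoring concrete, here is a minimal NumPy sketch of the mutual cross-entropy idea described above. The function names, the additive combination of the two cross-entropies, and the softmax weighting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy of predicted class probabilities (n, classes)
    against integer labels (n,)."""
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def mutual_cross_entropy(global_probs_on_local, local_labels,
                         local_probs_on_benchmark, benchmark_labels):
    """Sum of the two cross-entropies described in the abstract: the
    global FL model evaluated on a client's data, plus that client's
    local model evaluated on the server's benchmark set."""
    return (cross_entropy(global_probs_on_local, local_labels)
            + cross_entropy(local_probs_on_benchmark, benchmark_labels))

def credibility_weights(scores, temperature=1.0):
    """A lower mutual cross-entropy suggests labels more consistent with
    the benchmark, i.e., higher credibility. Mapping scores to weights
    via a softmax over negated scores is an assumed choice."""
    s = -np.asarray(scores) / temperature
    e = np.exp(s - s.max())
    return e / e.sum()
```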
no code implementations • 27 Aug 2019 • Shengyu Zhu, Biao Chen, Zhitang Chen, Pengfei Yang
Using Sanov's theorem, we derive a sufficient condition for one-sample tests to achieve the optimal error exponent in the universal setting, i.e., for any distribution defining the alternative hypothesis.
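For context, this is the standard statement of Sanov's theorem that underlies such error-exponent analyses (a textbook form, not a result specific to this paper):

```latex
% Sanov's theorem (standard form): the probability that the empirical
% distribution \hat{P}_n of n i.i.d. samples from P falls in a
% suitably regular set \Gamma decays exponentially, with rate equal
% to the minimum KL divergence from \Gamma to P.
\[
  \lim_{n \to \infty} -\frac{1}{n}\,\log \Pr\!\bigl(\hat{P}_n \in \Gamma\bigr)
  \;=\; \inf_{Q \in \Gamma} D(Q \,\|\, P),
  \qquad
  D(Q \,\|\, P) = \int \log\frac{\mathrm{d}Q}{\mathrm{d}P}\,\mathrm{d}Q .
\]
```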
no code implementations • 31 Jul 2018 • Tiexing Wang, Qunwei Li, Donald J. Bucci, Yingbin Liang, Biao Chen, Pramod K. Varshney
In particular, the error exponent is characterized when either the Kolmogorov-Smirnov distance or the maximum mean discrepancy is used as the distance metric.
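As a concrete reference for the two distance metrics, here is a minimal NumPy sketch of the two-sample Kolmogorov-Smirnov statistic and a biased squared-MMD estimate in 1-D; the Gaussian kernel and its bandwidth are assumed choices, not necessarily those analyzed in the paper.

```python
import numpy as np

def ks_distance(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, evaluated at all pooled sample points."""
    pts = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), pts, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), pts, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

def mmd_sq(x, y, sigma=1.0):
    """Biased squared-MMD estimate for 1-D samples with a Gaussian
    kernel (bandwidth sigma is an assumed choice)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-d**2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```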
no code implementations • 23 Feb 2018 • Shengyu Zhu, Biao Chen, Zhitang Chen
Given two sets of independent samples from unknown distributions $P$ and $Q$, a two-sample test decides whether to reject the null hypothesis that $P=Q$.
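A generic way to turn any such distance into a test is a permutation test; the sketch below is a standard construction, not the specific procedure studied in the paper.

```python
import numpy as np

def permutation_two_sample_test(x, y, statistic, n_perm=1000,
                                alpha=0.05, seed=None):
    """Reject H0: P = Q if the observed statistic is extreme relative
    to its permutation null distribution (one-sided, large = reject)."""
    rng = np.random.default_rng(seed)
    observed = statistic(x, y)
    pooled = np.concatenate([x, y])
    n = len(x)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = statistic(perm[:n], perm[n:])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return p_value < alpha, p_value
```

Any statistic, e.g. the `mmd_sq` estimate sketched earlier, can be plugged in as `statistic`.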
no code implementations • 21 Feb 2018 • Shengyu Zhu, Biao Chen, Pengfei Yang, Zhitang Chen
We show that two classes of Maximum Mean Discrepancy (MMD)-based tests attain this optimality on $\mathbb R^d$, while the quadratic-time Kernel Stein Discrepancy (KSD)-based tests achieve the maximum exponential decay rate under a relaxed level constraint.
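For illustration, a quadratic-time KSD estimate in 1-D with a Gaussian kernel might look like the following sketch; the V-statistic form, the kernel, and the bandwidth are assumptions made for readability rather than the paper's exact test statistic.

```python
import numpy as np

def ksd_sq(x, score, sigma=1.0):
    """Quadratic-time (V-statistic) kernel Stein discrepancy estimate
    in 1-D with a Gaussian kernel; `score` is the derivative of
    log p(x) for the model under test."""
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * sigma**2))
    dkx = -d / sigma**2 * k                        # d/dx k(x, y)
    dky = d / sigma**2 * k                         # d/dy k(x, y)
    dkxy = (1 / sigma**2 - d**2 / sigma**4) * k    # d^2/dxdy k(x, y)
    s = score(x)
    # Stein kernel: s(x)s(y)k + s(x) d_y k + s(y) d_x k + d_x d_y k
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * dky + s[None, :] * dkx + dkxy)
    return u.mean()
```

For a standard normal model, `score` is `lambda x: -x`, since the derivative of log p(x) is -x.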