no code implementations • 19 Jun 2024 • Lili Zheng, Andersen Chang, Genevera I. Allen
Patchwork learning arises as a new and challenging data collection paradigm where both samples and features are observed in fragmented subsets.
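The observation pattern is easy to picture with a small simulation. Below is a minimal sketch (illustrative only, not the paper's methodology) in which two hypothetical data sources each reveal only a block of samples and features from a common data matrix:

```python
import numpy as np

# Illustrative "patchwork" observation pattern: each source reveals only a
# block of samples (rows) x features (columns) of a latent data matrix.
rng = np.random.default_rng(0)
n, p = 100, 40
X = rng.normal(size=(n, p))                   # full latent data matrix

# Hypothetical sources; the row/column index sets are made up for illustration.
patches = [
    (np.arange(0, 60), np.arange(0, 25)),     # source 1
    (np.arange(40, 100), np.arange(15, 40)),  # source 2
]
observed = np.zeros((n, p), dtype=bool)
for rows, cols in patches:
    observed[np.ix_(rows, cols)] = True

X_obs = np.where(observed, X, np.nan)         # the fragmented view an analyst sees
print(f"fraction of entries observed: {observed.mean():.2f}")
```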
no code implementations • 2 Aug 2023 • Genevera I. Allen, Luqin Gan, Lili Zheng
In this paper, we discuss and review the field of interpretable machine learning, focusing especially on the techniques as they are often employed to generate new knowledge or make discoveries from large data sets.
no code implementations • 22 May 2023 • Andersen Chang, Lili Zheng, Gautam Dasarthy, Genevera I. Allen
Probabilistic graphical models have become an important unsupervised learning tool for detecting network structures for a variety of problems, including the estimation of functional neuronal connectivity from two-photon calcium imaging data.
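For reference, here is a minimal sketch of graph estimation with the standard Gaussian graphical lasso, a common baseline for this task (not necessarily the estimator developed in the paper; the data and regularization level below are placeholders):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Baseline sketch: estimate a sparse precision matrix from activity traces;
# nonzero off-diagonal entries are read as functional edges between neurons.
rng = np.random.default_rng(1)
activity = rng.normal(size=(500, 20))  # stand-in for neuron activity traces

model = GraphicalLasso(alpha=0.1).fit(activity)
precision = model.precision_
edges = np.argwhere(np.triu(np.abs(precision) > 1e-4, k=1))
print(f"estimated edges: {len(edges)}")
```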
no code implementations • 18 Nov 2022 • Huawei Hou, Suzhi Bi, Lili Zheng, Xiaohui Lin, Yuan Wu, Zhi Quan
In this paper, we propose a Domain-Agnostic and Sample-Efficient wireless indoor crowd Counting (DASECount) framework that attains robust cross-domain detection accuracy given very limited data samples in new domains.
no code implementations • 17 Sep 2022 • Andersen Chang, Lili Zheng, Genevera I. Allen
This leads to the Graph Quilting problem, as first introduced by Vinci et al.
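The core difficulty is that some covariance entries are never directly observable. A small sketch of the missingness pattern follows (the session layout is hypothetical):

```python
import numpy as np

# Graph Quilting setting, schematically: nodes are recorded in overlapping
# sessions, so a sample covariance entry exists only for node pairs that
# were co-observed in at least one session.
p = 8
sessions = [np.arange(0, 5), np.arange(3, 8)]  # hypothetical node subsets

co_observed = np.zeros((p, p), dtype=bool)
for s in sessions:
    co_observed[np.ix_(s, s)] = True

# Covariance entries with no direct estimate:
missing = np.argwhere(~co_observed & ~np.eye(p, dtype=bool))
print(f"unobservable covariance pairs: {len(missing) // 2}")
```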
no code implementations • 5 Jun 2022 • Luqin Gan, Lili Zheng, Genevera I. Allen
Our approach is fast because it avoids model refitting by leveraging a form of random observation and feature subsampling called minipatch ensembles; it also improves statistical power by avoiding data splitting.
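A minimal sketch of the minipatch idea: repeatedly fit a cheap model on a random subset of observations and features, then score each feature by how much the out-of-patch error drops when the feature happens to be included. (The learner, patch sizes, and scoring rule here are simplified assumptions, not the paper's exact inference procedure.)

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=n)

n_patches, n_rows, n_cols = 200, 60, 4
in_loss, in_cnt = np.zeros(p), np.zeros(p)    # patches containing feature j
out_loss, out_cnt = np.zeros(p), np.zeros(p)  # patches excluding feature j

for _ in range(n_patches):
    rows = rng.choice(n, n_rows, replace=False)   # random observation subset
    cols = rng.choice(p, n_cols, replace=False)   # random feature subset
    oob = np.setdiff1d(np.arange(n), rows)        # out-of-patch samples
    model = Ridge().fit(X[np.ix_(rows, cols)], y[rows])
    err = np.mean((model.predict(X[np.ix_(oob, cols)]) - y[oob]) ** 2)
    mask = np.isin(np.arange(p), cols)
    in_loss[mask] += err
    in_cnt[mask] += 1
    out_loss[~mask] += err
    out_cnt[~mask] += 1

# Average error of patches excluding j minus patches including j:
# larger values indicate more important features.
importance = out_loss / out_cnt - in_loss / in_cnt
print(np.round(importance, 3))
```

Note that no model is ever refit on the full data: each minipatch fit is cheap, and held-out error comes for free from the out-of-patch samples.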
no code implementations • 19 Nov 2021 • Hao Chen, Lili Zheng, Raed Al Kontar, Garvesh Raskutti
Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for large-scale machine learning problems with independent samples due to their generalization performance and intrinsic computational advantage.
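For context, the basic algorithm in the i.i.d. setting looks like the sketch below (plain minibatch SGD for least squares; the step size and batch size are arbitrary illustrative choices):

```python
import numpy as np

# Bare-bones minibatch SGD for least squares on independent samples.
rng = np.random.default_rng(3)
n, p = 1000, 5
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(p)
lr, batch = 0.05, 32
for _ in range(2000):
    idx = rng.choice(n, batch, replace=False)        # i.i.d. minibatch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch  # stochastic gradient
    w -= lr * grad

print(f"estimation error: {np.linalg.norm(w - w_true):.4f}")
```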
1 code implementation • 6 Oct 2020 • Yuchen Zhou, Anru R. Zhang, Lili Zheng, Yazhen Wang
This paper studies a general framework for high-order tensor SVD.
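One standard construction in this family is the higher-order SVD (HOSVD), sketched below (a textbook version, not necessarily the paper's framework; the tensor dimensions and ranks are arbitrary):

```python
import numpy as np

# Truncated HOSVD: factor U_k spans the top left singular vectors of the
# mode-k unfolding; the core is the tensor contracted with each U_k^T.
rng = np.random.default_rng(4)
T = rng.normal(size=(6, 7, 8))
ranks = (3, 3, 3)

def unfold(A, mode):
    """Matricize tensor A along the given mode."""
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)

factors = []
for k, r in enumerate(ranks):
    U, _, _ = np.linalg.svd(unfold(T, k), full_matrices=False)
    factors.append(U[:, :r])

core = T
for k, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, k, 0), axes=1), 0, k)

print("core shape:", core.shape)  # (3, 3, 3)
```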
no code implementations • 16 Mar 2020 • Lili Zheng, Garvesh Raskutti, Rebecca Willett, Benjamin Mark
High-dimensional autoregressive point processes model how current events trigger or inhibit future events; for example, activity by one member of a social network can affect the future activity of their neighbors.
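The excitation/inhibition mechanism can be illustrated with a toy discrete-time simulation (a simplified stand-in for the models studied; the network matrix, baseline, and exponential link below are illustrative assumptions):

```python
import numpy as np

# Discrete-time autoregressive point process: counts at time t-1 modulate
# the event intensity at time t through the network matrix A (positive
# entries excite, negative entries inhibit).
rng = np.random.default_rng(5)
M, T = 4, 200                          # nodes (e.g., users), time steps
A = np.array([[0.0, 0.8, 0.0, 0.0],    # node 1 excites node 0
              [0.0, 0.0, 0.6, 0.0],    # node 2 excites node 1
              [0.0, 0.0, 0.0, -0.5],   # node 3 inhibits node 2
              [0.3, 0.0, 0.0, 0.0]])   # node 0 excites node 3
nu = -1.0 * np.ones(M)                 # baseline log-intensity

counts = np.zeros((T, M), dtype=int)
for t in range(1, T):
    rate = np.exp(nu + A @ counts[t - 1])  # past events shift the intensity
    counts[t] = rng.poisson(rate)

print("total events per node:", counts.sum(axis=0))
```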