1 code implementation • 2 Apr 2024 • Sai Li, Linjun Zhang
Machine learning methods often assume that the test data have the same distribution as the training data.
1 code implementation • 18 Sep 2023 • Sai Li, Linjun Zhang
Conventional statistical and machine learning methods typically assume that the test data are distributed identically to the training data.
no code implementations • 9 Feb 2022 • Elynn Y. Chen, Michael I. Jordan, Sai Li
We consider $Q$-learning with knowledge transfer, using samples from a target reinforcement learning (RL) task as well as source samples from different but related RL tasks.
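As a rough illustration of the general idea (not the paper's algorithm), one simple way to transfer knowledge in tabular Q-learning is to shrink each temporal-difference update toward a Q-table learned on a related source task; the function name, the penalty weight `lam`, and the toy chain MDP below are all hypothetical.

```python
import numpy as np

def transfer_q_learning(q_source, transitions, rewards,
                        episodes=500, alpha=0.2, gamma=0.9,
                        eps=0.2, lam=0.3, seed=0):
    """Tabular Q-learning with updates pulled toward a source-task
    Q-table (a hypothetical transfer scheme for illustration only)."""
    rng = np.random.default_rng(seed)
    n_s, n_a = q_source.shape
    q = np.zeros((n_s, n_a))
    for _ in range(episodes):
        s = 0
        for _ in range(20):  # bounded episode length
            a = rng.integers(n_a) if rng.random() < eps else int(np.argmax(q[s]))
            s2, r = transitions[s][a], rewards[s][a]
            td = r + gamma * q[s2].max() - q[s, a]
            # blend the usual TD step with a pull toward the source values
            q[s, a] += alpha * (td + lam * (q_source[s, a] - q[s, a]))
            s = s2
    return q

# toy 3-state chain: action 1 moves right (reward 1 from state 1), action 0 stays
transitions = [[0, 1], [1, 2], [2, 2]]
rewards = [[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
q_src = np.array([[0.0, 0.8], [0.0, 1.0], [0.0, 0.0]])  # from a related task
q = transfer_q_learning(q_src, transitions, rewards)
```

When the source task is close to the target, the extra shrinkage term acts as a warm prior and can speed convergence; when it is far off, `lam` controls how much the bias hurts.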
2 code implementations • 2 Jan 2022 • Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, Chelsea Finn
Machine learning algorithms typically assume that training and test examples are drawn from the same distribution.
no code implementations • 27 Aug 2021 • Sai Li, Tianxi Cai, Rui Duan
With only a small number of communication rounds across participating sites, the proposed method achieves performance comparable to that of a pooled analysis in which individual-level data from all sites are combined directly.
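The flavor of such communication-efficient federated estimation can be sketched with a minimal one-shot averaging scheme: each site fits a local estimate and only the fitted coefficients (never individual-level data) are shared and averaged. This generic divide-and-conquer sketch is an assumption for illustration, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0, 0.5])  # true coefficients, shared across sites

def local_ols(X, y):
    """Each site fits ordinary least squares on its own data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# five sites, each holding its own private sample
estimates = []
for k in range(5):
    X = rng.normal(size=(200, 3))
    y = X @ beta + rng.normal(scale=0.1, size=200)
    estimates.append(local_ols(X, y))

# single communication round: send coefficients, average them centrally
beta_avg = np.mean(estimates, axis=0)
```

With one round of communication the averaged estimator is already close to the pooled-data fit here; the paper's setting involves more refined updates over a few rounds.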
no code implementations • 4 Apr 2021 • Ngan Nguyen, Ciril Bohak, Dominik Engel, Peter Mindek, Ondřej Strnad, Peter Wonka, Sai Li, Timo Ropinski, Ivan Viola
Our technique demonstrates high impact in the target sciences, enabling visual analysis of very noisy volumes that cannot be visualized with existing techniques.
no code implementations • 16 Dec 2020 • Sai Li, Zheng-Yuan Xue
The key for realizing fault-tolerant quantum computation lies in maintaining the coherence of all qubits so that high-fidelity and robust quantum manipulations on them can be achieved.
Quantum Physics
1 code implementation • 21 Oct 2020 • Sai Li, T. Tony Cai, Hongzhe Li
We study transfer learning for high-dimensional Gaussian graphical models (GGMs), with the goal of estimating the target GGM by utilizing data from similar, related auxiliary studies.
1 code implementation • 18 Jun 2020 • Sai Li, T. Tony Cai, Hongzhe Li
This paper considers the estimation and prediction of a high-dimensional linear regression in the setting of transfer learning, using samples from the target model as well as auxiliary samples from different but possibly related regression models.
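A common two-step recipe in this transfer-learning setting is: (1) fit a regression on the pooled target and auxiliary samples, then (2) correct the bias by regressing the target residuals on the target design. The sketch below illustrates that idea with ridge regression standing in for the sparse (lasso-type) estimators used in high dimensions; the data-generating setup and penalty values are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator (stand-in for a lasso-type fit)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
p = 20
beta = np.zeros(p); beta[:3] = [1.0, -1.0, 0.5]   # target coefficients
delta = np.zeros(p); delta[0] = 0.2               # small target/source contrast
beta_src = beta - delta

# small target sample, larger auxiliary sample from a related model
X_t = rng.normal(size=(50, p));  y_t = X_t @ beta + 0.1 * rng.normal(size=50)
X_s = rng.normal(size=(500, p)); y_s = X_s @ beta_src + 0.1 * rng.normal(size=500)

# step 1: fit on the pooled target + auxiliary samples
X_pool = np.vstack([X_t, X_s]); y_pool = np.concatenate([y_t, y_s])
w = ridge(X_pool, y_pool, lam=1.0)

# step 2: debias using target residuals only
d = ridge(X_t, y_t - X_t @ w, lam=1.0)
beta_hat = w + d

err_transfer = np.linalg.norm(beta_hat - beta)
```

The pooled fit borrows strength from the auxiliary data; the residual step removes the bias introduced when the auxiliary model differs from the target, provided the contrast is small.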