1 code implementation • 21 Aug 2019 • Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith
Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.
4 code implementations • 8 Dec 2020 • Tian Li, Shengyuan Hu, Ahmad Beirami, Virginia Smith
Fairness and robustness are two important concerns for federated learning systems.
7 code implementations • 3 Dec 2018 • Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, Ameet Talwalkar
Modern federated networks, such as those composed of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
2 code implementations • 18 Jun 2022 • Shanshan Wu, Tian Li, Zachary Charles, Yu Xiao, Ziyu Liu, Zheng Xu, Virginia Smith
To better answer these questions, we propose Motley, a benchmark for personalized federated learning.
19 code implementations • 14 Dec 2018 • Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).
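The variable-work idea above can be illustrated with a minimal sketch of a proximal local update, where a term μ/2·‖w − w_global‖² keeps each device's (possibly truncated) local solve close to the global model; the function names, the toy quadratic loss, and the hyperparameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def proximal_local_steps(w_global, grad_fn, mu=1.0, lr=0.1, steps=50):
    """Run a variable number of local SGD steps on the proximal objective
    F_k(w) + (mu/2) * ||w - w_global||^2, which tolerates heterogeneous
    amounts of local work by anchoring updates to the global model."""
    w = w_global.copy()
    for _ in range(steps):
        g = grad_fn(w) + mu * (w - w_global)  # gradient incl. proximal term
        w -= lr * g
    return w

# Toy local loss F_k(w) = 0.5 * ||w - target||^2, so grad = w - target
target = np.array([1.0, -2.0])
w_global = np.zeros(2)
w_local = proximal_local_steps(w_global, lambda w: w - target)
```

With μ = 1 the local solve converges to the midpoint between the local optimum and the global model, showing how the proximal term damps client drift.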
2 code implementations • ICLR 2020 • Tian Li, Maziar Sanjabi, Ahmad Beirami, Virginia Smith
Federated learning involves training statistical models in massive, heterogeneous networks.
2 code implementations • ICLR 2021 • Tian Li, Ahmad Beirami, Maziar Sanjabi, Virginia Smith
Empirical risk minimization (ERM) is typically designed to perform well on the average loss, which can result in estimators that are sensitive to outliers, generalize poorly, or treat subgroups unfairly.
1 code implementation • 13 Sep 2021 • Tian Li, Ahmad Beirami, Maziar Sanjabi, Virginia Smith
Finally, we demonstrate that TERM can be used for a multitude of applications in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance.
2 code implementations • 7 Jan 2020 • Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
Federated learning aims to jointly learn statistical models over massively distributed remote devices.
3 code implementations • 12 Jun 2020 • Zhen Dong, Dequan Wang, Qijing Huang, Yizhao Gao, Yaohui Cai, Tian Li, Bichen Wu, Kurt Keutzer, John Wawrzynek
Deploying deep learning models on embedded systems has been challenging due to limited computing resources.
1 code implementation • 21 Apr 2022 • Chenfeng Xu, Tian Li, Chen Tang, Lingfeng Sun, Kurt Keutzer, Masayoshi Tomizuka, Alireza Fathi, Wei Zhan
It is hard to replicate these approaches in trajectory forecasting due to the lack of adequate trajectory data (e.g., 34K samples in the nuScenes dataset).
1 code implementation • 30 Oct 2020 • Tian Li, Xiang Chen, Shanghang Zhang, Zhen Dong, Kurt Keutzer
Due to the scarcity of labels in the target domain, we introduce mutual information maximization (MIM) in addition to CL to exploit the features that best support the final prediction.
1 code implementation • 5 Dec 2020 • Tian Li, Xiang Chen, Shanghang Zhang, Zhen Dong, Kurt Keutzer
In this paper, we propose a contrastive learning framework for cross-domain sentiment classification.
2 code implementations • 1 Mar 2021 • Don Kurian Dennis, Tian Li, Virginia Smith
In this work, we explore the unique challenges -- and opportunities -- of unsupervised federated learning (FL).
1 code implementation • 12 Feb 2022 • Tian Li, Manzil Zaheer, Sashank J. Reddi, Virginia Smith
Adaptive optimization methods have become the default solvers for many machine learning tasks.
1 code implementation • 1 Dec 2022 • Tian Li, Manzil Zaheer, Ken Ziyu Liu, Sashank J. Reddi, H. Brendan McMahan, Virginia Smith
Privacy noise may negate the benefits of using adaptive optimizers in differentially private model training.
1 code implementation • 23 Dec 2021 • Wentao Ning, Reynold Cheng, Jiajun Shen, Nur Al Hasan Haldar, Ben Kao, Xiao Yan, Nan Huo, Wai Kit Lam, Tian Li, Bo Tang
Specifically, we define a vector encoding for meta-paths and design a policy network to extend meta-paths.
1 code implementation • 20 Jun 2022 • Tian Li, Xiang Chen, Zhen Dong, Weijiang Yu, Yijun Yan, Kurt Keutzer, Shanghang Zhang
Then during training, DASK injects pivot-related knowledge graph information into source domain texts.
no code implementations • 24 Aug 2017 • Tian Li, Jie Zhong, Ji Liu, Wentao Wu, Ce Zhang
We ask: as a "service provider" managing a shared cluster of machines for users running machine learning workloads, what resource allocation strategy maximizes the global satisfaction of all users?
no code implementations • LREC 2012 • Xin Zuo, Tian Li, Pascale Fung
In this paper, we describe an ongoing effort in collecting and annotating a multilingual speech database of natural stress emotion from university students.
no code implementations • 3 Nov 2019 • Tian Li, Zaoxing Liu, Vyas Sekar, Virginia Smith
Many existing works treat these concerns separately.
no code implementations • 5 Nov 2019 • Zaoxing Liu, Tian Li, Virginia Smith, Vyas Sekar
Federated learning methods run training tasks directly on user devices and do not share the raw user data with third parties.
no code implementations • 22 Dec 2020 • Fu Li, Tian Li, Girish S. Agarwal
Such a correlation is the most important characteristic of a two-mode squeezed state.
no code implementations • NeurIPS 2021 • Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, Ameet Talwalkar
Tuning hyperparameters is a crucial but arduous part of the machine learning pipeline.
no code implementations • ICLR 2022 • Ravikumar Balakrishnan, Tian Li, Tianyi Zhou, Nageen Himayat, Virginia Smith, Jeff Bilmes
In every communication round of federated learning, a random subset of clients communicate their model updates back to the server, which then aggregates them.
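The server-side aggregation described here is, in its standard FedAvg form, a weighted average of the sampled clients' updates; a minimal sketch (weighting by local example counts is the usual convention, and the function name is mine):

```python
import numpy as np

def server_aggregate(client_updates, client_weights):
    """FedAvg-style aggregation: average the model updates from the
    sampled subset of clients, weighted e.g. by each client's number
    of local training examples."""
    weights = np.asarray(client_weights, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
agg = server_aggregate(updates, client_weights=[3, 1])  # 3:1 example counts
```

Which clients end up in the sampled subset (here, diverse rather than uniformly random selection) is exactly the lever this work studies.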
no code implementations • 30 May 2022 • Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li, Virginia Smith, Gauri Joshi
We provide convergence guarantees for MaxFL and show that MaxFL achieves a 22-40% and 18-50% test accuracy improvement for the training clients and unseen clients respectively, compared to a wide range of FL modeling approaches, including those that tackle data heterogeneity, aim to incentivize clients, and learn personalized or fair models.
no code implementations • 21 Nov 2022 • Shaohua Zhi, Yinghui Wang, Haonan Xiao, Ti Bai, Hong Ge, Bing Li, Chenyang Liu, Wen Li, Tian Li, Jing Cai
Four-dimensional magnetic resonance imaging (4D-MRI) is an emerging technique for tumor motion management in image-guided radiation therapy (IGRT).
no code implementations • 22 Apr 2023 • Tian Li, Lu Li, Wei Wang, Zhangchi Feng
Our method simulates the physical imaging process of hazy images using an atmospheric scattering model, and jointly learns the atmospheric scattering model and a clean NeRF model for both image dehazing and novel view synthesis.
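The atmospheric scattering model mentioned here has the classic closed form I = J·t + A·(1 − t) with transmission t = exp(−β·depth), and inverting it recovers the clean radiance; a toy sketch (parameter values are illustrative, and the paper learns these quantities jointly with a NeRF rather than assuming them known):

```python
import numpy as np

def hazy_image(J, depth, beta=0.8, A=1.0):
    """Atmospheric scattering model: I = J * t + A * (1 - t),
    with transmission t = exp(-beta * depth). J is the clean
    radiance and A the global atmospheric light."""
    t = np.exp(-beta * depth)
    return J * t + A * (1.0 - t)

def dehaze(I, depth, beta=0.8, A=1.0):
    """Invert the scattering model to recover the clean image J."""
    t = np.exp(-beta * depth)
    return (I - A * (1.0 - t)) / t

J = np.array([0.2, 0.5, 0.9])       # clean pixel radiances
depth = np.array([1.0, 2.0, 3.0])   # scene depth per pixel
J_rec = dehaze(hazy_image(J, depth), depth)  # round-trips to J
```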
no code implementations • 15 Nov 2023 • Peng Tang, Pengkai Zhu, Tian Li, Srikar Appalaraju, Vijay Mahadevan, R. Manmatha
Based on the multi-exit model, we perform step-level dynamic early exit during inference, where the model may decide to use fewer decoder layers based on its confidence of the current layer at each individual decoding step.
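The step-level early-exit idea above can be sketched as a confidence check after each decoder layer: stop as soon as the top-token softmax probability clears a threshold, so easy decoding steps use fewer layers. This is an illustrative simplification of the described mechanism, not the paper's code:

```python
import numpy as np

def decode_step_with_early_exit(layer_logits, threshold=0.9):
    """After each decoder layer produces logits for the current step,
    exit as soon as the softmax confidence of the top token exceeds
    the threshold. Returns (token id, number of layers used)."""
    for depth, logits in enumerate(layer_logits, start=1):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        if probs.max() >= threshold:
            return int(probs.argmax()), depth
    return int(probs.argmax()), depth  # no early exit: all layers used

# Toy step where the model becomes confident at layer 2 of 3
layer_logits = [np.array([1.0, 1.1, 0.9]),
                np.array([0.1, 5.0, 0.2]),
                np.array([0.0, 8.0, 0.1])]
token, layers_used = decode_step_with_early_exit(layer_logits)
```

Because the check runs per decoding step, different positions in the same output sequence can exit at different depths.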
no code implementations • 6 Mar 2024 • Ziyue Li, Tian Li, Virginia Smith, Jeff Bilmes, Tianyi Zhou
Optimizing the performance of many objectives (instantiated by tasks or clients) jointly with a few Pareto stationary solutions (models) is critical in machine learning.