no code implementations • 18 Oct 2023 • Caelin G. Kaplan, Chuan Xu, Othmane Marfoq, Giovanni Neglia, Anderson Santana de Oliveira
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility.
no code implementations • 23 Mar 2023 • Hanyao Huang, Ou Zheng, Dongdong Wang, Jiayi Yin, Zijin Wang, Shengxuan Ding, Heng Yin, Chuan Xu, Renjie Yang, Qian Zheng, Bing Shi
Overall, LLMs have the potential to revolutionize dental diagnosis and treatment, pointing to a promising avenue for clinical application and research in dentistry.
no code implementations • 21 Nov 2022 • Lana X. Garmire, Yijun Li, Qianhui Huang, Chuan Xu, Sarah Teichmann, Naftali Kaminski, Matteo Pellegrini, Quan Nguyen, Andrew E. Teschendorff
Deciphering cell type heterogeneity is crucial for systematically understanding tissue homeostasis and its dysregulation in diseases.
no code implementations • 28 Oct 2022 • Ilias Driouich, Chuan Xu, Giovanni Neglia, Frederic Giroire, Eoin Thomas
Additionally, we propose a novel model-based attribute inference attack in federated learning that leverages the local model reconstruction attack.
1 code implementation • 31 Oct 2021 • Oualid Zari, Chuan Xu, Giovanni Neglia
In the cross-device federated learning (FL) setting, clients such as mobile devices cooperate with the server to train a global machine learning model while keeping their data local.
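To make that setting concrete, here is a minimal sketch of one client's role in such a protocol, assuming a simple least-squares model trained with plain gradient descent; the function name and data layout are illustrative and not taken from the paper's code.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    """One client's local training pass in cross-device FL (illustrative).

    The client receives the current global model `w_global`, runs a few
    gradient-descent epochs on its *local* data (X, y), and returns only
    the updated weights; the raw data never leaves the device.
    """
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
        w = w - lr * grad
    return w
```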
no code implementations • 28 Jun 2021 • David Osumi-Sutherland, Chuan Xu, Maria Keays, Peter V. Kharchenko, Aviv Regev, Ed Lein, Sarah A. Teichmann
Massive single-cell profiling efforts have accelerated our discovery of the cellular composition of the human body, while at the same time raising the need to formalise this new knowledge.
1 code implementation • NeurIPS 2020 • Othmane Marfoq, Chuan Xu, Giovanni Neglia, Richard Vidal
Federated learning usually employs a client-server architecture in which an orchestrator iteratively aggregates model updates from remote clients and pushes a refined model back to them.
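Complementing the client-side sketch above, here is a minimal sketch of the orchestrator's side of this client-server architecture, using FedAvg-style weighted averaging as an illustrative aggregation rule; the abstract describes this usual pattern as the baseline, not as the paper's contribution.

```python
def aggregate(client_weights, client_sizes):
    """Orchestrator step: combine client updates into a refined global model.

    FedAvg-style weighted averaging, shown purely to illustrate the
    client-server pattern the abstract describes.
    """
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# One communication round under this sketch:
#   1. the orchestrator broadcasts the current model to the clients,
#   2. each client runs a local update on its own data (see local_update above),
#   3. the orchestrator aggregates the results and pushes the refined model back.
```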
no code implementations • 30 Apr 2020 • Chuan Xu, Giovanni Neglia, Nicola Sebastianelli
This paradigm consists of $n$ workers, which iteratively compute updates of the model parameters, and a stateful parameter server (PS), which waits for and aggregates all the updates to generate a new estimate of the model parameters, then sends it back to the workers for the next iteration.
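A minimal sketch of this synchronous PS loop, with each worker modeled as a callable that returns a stochastic gradient from its own data shard; the barrier in the loop body is the wait the abstract refers to, and all names are illustrative.

```python
import numpy as np

def synchronous_ps(workers, w, lr=0.1, iterations=100):
    """Synchronous parameter-server loop (illustrative sketch).

    `workers` is a list of n callables, each returning a stochastic
    gradient at w computed from its own data shard. The stateful PS
    waits for *all* n updates before producing the next estimate.
    """
    for _ in range(iterations):
        grads = [worker(w) for worker in workers]  # block until all n arrive
        w = w - lr * np.mean(grads, axis=0)        # aggregate, then update
    return w
```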
no code implementations • 28 Feb 2020 • Giovanni Neglia, Chuan Xu, Don Towsley, Gianmarco Calbi
Consensus-based distributed optimization methods have recently been advocated as alternatives to the parameter server and ring all-reduce paradigms for large-scale training of machine learning models.
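In consensus-based methods there is no central server: each node averages its parameters with its neighbors' via a doubly stochastic mixing matrix and then takes a local gradient step. The sketch below shows one such step as a generic illustration of the scheme, not the paper's specific algorithm.

```python
import numpy as np

def consensus_step(W, params, grads, lr=0.1):
    """One step of consensus-based distributed optimization (illustrative).

    Node i first averages its parameters with its neighbors' using the
    doubly stochastic mixing matrix W (W[i, j] > 0 only if j is a
    neighbor of i), then takes a local gradient step.
    """
    mixed = W @ np.asarray(params)          # consensus (gossip) averaging
    return mixed - lr * np.asarray(grads)   # local gradient step, per node
```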