Search Results for author: Xun Qian

Found 10 papers, 0 papers with code

Acceleration for Compressed Gradient Descent in Distributed Optimization

no code implementations • ICML 2020 • Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik

Due to the high communication cost in distributed and federated learning problems, methods relying on sparsification or quantization of communicated messages are becoming increasingly popular.

Distributed Optimization Federated Learning +1
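
The snippet above names sparsification and quantization but does not specify an operator. As a rough illustration only, the sketch below shows Top-K sparsification, one standard way to sparsify a gradient before communication; the function name and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def top_k_compress(grad: np.ndarray, k: int) -> np.ndarray:
    """Top-K sparsification: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(grad)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of the k largest |entries|
    out[idx] = grad[idx]
    return out

# Toy usage: a worker compresses its local gradient before sending it to the server.
rng = np.random.default_rng(0)
g = rng.normal(size=10)
print(top_k_compress(g, k=3))
```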

Communication-Efficient Distributed Learning with Local Immediate Error Compensation

no code implementations • 19 Feb 2024 • Yifei Cheng, Li Shen, Linli Xu, Xun Qian, Shiwei Wu, Yiming Zhou, Tie Zhang, Dacheng Tao, Enhong Chen

However, existing compression methods either perform only unidirectional compression per iteration, which incurs higher communication cost, or perform bidirectional compression at the price of a slower convergence rate.
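
The paper's own compensation scheme is not detailed in this snippet. The sketch below shows generic error feedback, a common way to compensate compression error locally, with Top-K as an assumed placeholder compressor; the class and parameter names are illustrative, not the paper's method.

```python
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class ErrorFeedbackWorker:
    """Generic error feedback: compress (gradient + residual), keep what was dropped."""
    def __init__(self, dim: int, k: int):
        self.residual = np.zeros(dim)
        self.k = k

    def compress(self, grad: np.ndarray) -> np.ndarray:
        corrected = grad + self.residual   # re-inject previously dropped mass
        msg = top_k(corrected, self.k)     # compressed message sent upstream
        self.residual = corrected - msg    # store the compression error locally
        return msg

# Toy usage: the residual carries over the part of the gradient that was not sent.
worker = ErrorFeedbackWorker(dim=10, k=3)
g = np.random.default_rng(0).normal(size=10)
print(worker.compress(g))
print(worker.residual)
```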

InstructPipe: Building Visual Programming Pipelines with Human Instructions

no code implementations • 15 Dec 2023 • Zhongyi Zhou, Jing Jin, Vrushank Phadnis, Xiuxiu Yuan, Jun Jiang, Xun Qian, Jingtao Zhou, Yiyi Huang, Zheng Xu, Yinda Zhang, Kristen Wright, Jason Mayes, Mark Sherwood, Johnny Lee, Alex Olwal, David Kim, Ram Iyengar, Na Li, Ruofei Du

Our user study (N=16) showed that InstructPipe empowers novice users to streamline their workflow in creating desired ML pipelines, reduce their learning curve, and spark innovative ideas with open-ended commands.

XAIR: A Framework of Explainable AI in Augmented Reality

no code implementations • 28 Mar 2023 • Xuhai Xu, Mengjie Yu, Tanya R. Jonker, Kashyap Todi, Feiyu Lu, Xun Qian, João Marcelo Evangelista Belo, Tianyi Wang, Michelle Li, Aran Mun, Te-Yen Wu, Junxiao Shen, Ting Zhang, Narine Kokhlikyan, Fulton Wang, Paul Sorenson, Sophie Kahyun Kim, Hrvoje Benko

The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts to collect their insights on XAI design in AR.

Explainable Artificial Intelligence (XAI)

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation

no code implementations • 7 Jun 2022 • Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik

Despite their high computation and communication costs, Newton-type methods remain an appealing option for distributed training due to their robustness against ill-conditioned convex problems.

Federated Learning

Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning

no code implementations • 2 Nov 2021 • Xun Qian, Rustem Islamov, Mher Safaryan, Peter Richtárik

Recent advances in distributed optimization have shown that Newton-type methods with proper communication compression mechanisms can guarantee fast local rates and low communication cost compared to first order methods.

Distributed Optimization Federated Learning +1

FedNL: Making Newton-Type Methods Applicable to Federated Learning

no code implementations • 5 Jun 2021 • Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik

In contrast to the aforementioned work, FedNL employs a different Hessian learning technique which i) enhances privacy as it does not rely on the training data to be revealed to the coordinating server, ii) makes it applicable beyond generalized linear models, and iii) provably works with general contractive compression operators for compressing the local Hessians, such as Top-$K$ or Rank-$R$, which are vastly superior in practice.

Federated Learning Model Compression +2
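
As a rough illustration of a contractive compressor of the kind named in the snippet above, the sketch below applies Rank-R compression (a truncated SVD) to a toy symmetric matrix standing in for a local Hessian. This is not the FedNL implementation; the function name and toy problem are assumptions.

```python
import numpy as np

def rank_r_compress(H: np.ndarray, r: int) -> np.ndarray:
    """Rank-R compression: keep only the top-r singular triplets of the matrix."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Toy usage on a symmetric positive-definite "local Hessian".
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = A @ A.T
H_r = rank_r_compress(H, r=2)
# Relative compression error is strictly below 1, i.e. the operator is contractive.
print(np.linalg.norm(H - H_r) / np.linalg.norm(H))
```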

Distributed Second Order Methods with Fast Rates and Compressed Communication

no code implementations • 14 Feb 2021 • Rustem Islamov, Xun Qian, Peter Richtárik

Finally, we develop a globalization strategy using cubic regularization, which leads to our next method, CUBIC-NEWTON-LEARN, for which we prove global sublinear and linear convergence rates, as well as a fast superlinear rate.

Distributed Optimization Second-order methods
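
The distributed CUBIC-NEWTON-LEARN method itself is not reproduced in this snippet. As a minimal sketch of the cubic-regularization idea it builds on, the code below takes a single cubic-regularized Newton step by minimizing the local model m(s) = g^T s + 0.5 s^T H s + (M/6)||s||^3 with a generic solver; the regularization constant M and the toy problem are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_newton_step(g: np.ndarray, H: np.ndarray, M: float) -> np.ndarray:
    """Minimize the cubic model m(s) = g^T s + 0.5 s^T H s + (M/6) ||s||^3."""
    def model(s):
        return g @ s + 0.5 * s @ H @ s + (M / 6.0) * np.linalg.norm(s) ** 3
    res = minimize(model, np.zeros_like(g), method="BFGS")
    return res.x

# Toy usage on an ill-conditioned quadratic; the step s would update x via x + s.
H = np.diag([100.0, 1.0])
g = np.array([1.0, 1.0])
print(cubic_newton_step(g, H, M=10.0))
```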

Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization

no code implementations • 26 Feb 2020 • Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik

Due to the high communication cost in distributed and federated learning problems, methods relying on compression of communicated messages are becoming increasingly popular.

Federated Learning

SGD: General Analysis and Improved Rates

no code implementations • 27 Jan 2019 • Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik

By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size.
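
The paper's exact stepsize expressions are not reproduced in this snippet. The sketch below only illustrates the two mini-batching strategies it mentions, sampling with replacement and independent sampling; the dataset size, batch size, and inclusion probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # number of data points

def sample_with_replacement(batch_size: int) -> np.ndarray:
    """Mini-batch drawn uniformly with replacement (indices may repeat)."""
    return rng.integers(0, n, size=batch_size)

def independent_sampling(p: float) -> np.ndarray:
    """Each index included independently with probability p;
    the batch size is random with expectation n * p."""
    return np.flatnonzero(rng.random(n) < p)

print(sample_with_replacement(8))
print(independent_sampling(0.08))
```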
