no code implementations • 25 Mar 2024 • Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, Tao Mei
In this work, we introduce a novel Visual Prompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the visual appearance knowledge in a 2D visual prompt to boost text-to-3D generation.
1 code implementation • 9 Nov 2023 • Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Tao Mei
In this work, we propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
no code implementations • NeurIPS 2023 • Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma
In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications.
no code implementations • 3 Oct 2022 • Haibo Yang, Peiwen Qiu, Jia Liu
A key assumption in most existing works on FL algorithms' convergence analysis is that the noise in stochastic first-order information has a finite variance.
no code implementations • 2 Oct 2022 • Haibo Yang, Zhuqing Liu, Xin Zhang, Jia Liu
To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communication (through multiple local updates), as in conventional federated learning.
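The idea of infrequent communication through multiple local updates can be sketched as follows. This is a minimal FedAvg-style illustration on a synthetic least-squares problem, not the paper's min-max algorithm: each client runs several local SGD steps, and the server averages models only once per round, so communication drops from one exchange per gradient step to one per round.

```python
import numpy as np

def local_sgd_round(w, X, y, lr=0.1, local_steps=5):
    """Run `local_steps` SGD steps on one client's least-squares loss."""
    w = w.copy()
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_avg(client_data, rounds=50, local_steps=5):
    dim = client_data[0][0].shape[1]
    w = np.zeros(dim)
    for _ in range(rounds):
        # One model exchange per client per round, regardless of local_steps.
        local_models = [local_sgd_round(w, X, y, local_steps=local_steps)
                        for X, y in client_data]
        w = np.mean(local_models, axis=0)  # server-side averaging
    return w

# Synthetic heterogeneous clients sharing a common ground truth.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(32, 2))
    y = X @ w_true + 0.01 * rng.normal(size=32)
    clients.append((X, y))

w_hat = federated_avg(clients)
```

With 5 local steps per round, 50 rounds use 50 communication exchanges per client instead of the 250 that one-step-per-round SGD would need.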
1 code implementation • 28 Sep 2022 • Haibo Yang, Shengjie Zhang, Xiaoyang Han, Botao Zhao, Yan Ren, Yaru Sheng, Xiao-Yong Zhang
We evaluate the proposed method on 720 T2-FLAIR brain images with small lesions at different noise levels.
no code implementations • 17 Aug 2022 • Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, Zhengyuan Zhu
Moreover, whether or not the linear speedup for convergence is achievable under fully decentralized FL with data heterogeneity remains an open question.
no code implementations • 12 May 2022 • Haibo Yang, Peiwen Qiu, Jia Liu, Aylin Yener
In order to fully utilize this advantage while providing learning performance comparable to conventional federated learning, which presumes model aggregation via noiseless channels, we jointly design the transmission scaling and the number of local iterations in each round, subject to the power constraint at each edge device.
no code implementations • ICLR 2022 • Prashant Khanduri, Haibo Yang, Mingyi Hong, Jia Liu, Hoi To Wai, Sijia Liu
We analyze the optimization and the generalization performance of the proposed framework for the $\ell_2$ loss.
no code implementations • 23 Aug 2021 • Haibo Yang, Xin Zhang, Prashant Khanduri, Jia Liu
To satisfy the need for flexible worker participation, we consider a new FL paradigm called "Anarchic Federated Learning" (AFL) in this paper.
no code implementations • NeurIPS 2021 • Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod K. Varshney
Despite extensive research, for a generic non-convex FL problem it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution.
no code implementations • 14 Jun 2021 • Haibo Yang, Jia Liu, Elizabeth S. Bentley
This matches the convergence rate of distributed/federated learning without compression, thus achieving high communication efficiency while not sacrificing learning accuracy in FL.
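One common family of compressors used in communication-efficient distributed learning is top-k sparsification, sketched below for illustration (this is a generic compressor, not necessarily the one analyzed in the paper): each worker transmits only the k largest-magnitude gradient coordinates, shrinking a d-dimensional update to k (index, value) pairs.

```python
import numpy as np

def top_k_compress(grad, k):
    """Keep the k largest-magnitude entries of `grad`, zero out the rest."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of top-k magnitudes
    compressed = np.zeros_like(grad)
    compressed[idx] = grad[idx]
    return compressed

g = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
c = top_k_compress(g, 2)  # keeps only -3.0 and 2.0
```

In practice such compressors are biased, which is why convergence analyses (like the one excerpted above) must show the compression error does not degrade the rate.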
no code implementations • ICLR 2021 • Haibo Yang, Minghong Fang, Jia Liu
Our results also reveal that the local steps in FL can help convergence, and show that the maximum number of local steps can be improved to $T/m$ under full worker participation.
no code implementations • 10 Sep 2019 • Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu
In this work, we consider the resilience of distributed algorithms based on stochastic gradient descent (SGD) in distributed learning with potentially Byzantine attackers, who could send arbitrary information to the parameter server to disrupt the training process.
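A standard Byzantine-resilient aggregation rule is the coordinate-wise median, sketched below as a generic illustration (the paper studies SGD resilience broadly; this specific rule is shown only as an example of the threat model). A plain mean can be dragged arbitrarily far by a single malicious gradient, whereas the median bounds the attacker's influence.

```python
import numpy as np

def coordinate_median(grads):
    """Aggregate worker gradients by the per-coordinate median."""
    return np.median(np.stack(grads), axis=0)

# Three honest workers report gradients near (1.0, 1.0); one Byzantine
# worker sends an arbitrary message to disrupt training.
honest = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([1.1, 0.9])]
byzantine = [np.array([1e6, -1e6])]
grads = honest + byzantine

mean_agg = np.mean(np.stack(grads), axis=0)  # hijacked by the attacker
median_agg = coordinate_median(grads)        # stays near the honest values
```

Here the mean is pulled to roughly (2.5e5, -2.5e5) by the single attacker, while the coordinate-wise median remains close to the honest consensus.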