Search Results for author: Haibo Yang

Found 14 papers, 2 papers with code

VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation

no code implementations · 25 Mar 2024 · Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, Tao Mei

In this work, we introduce a novel Visual Prompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the visual appearance knowledge in a 2D visual prompt to boost text-to-3D generation.

3D Generation · Text to 3D · +1

3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models

1 code implementation · 9 Nov 2023 · Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Tao Mei

In this work, we propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.

Image Generation

Federated Multi-Objective Learning

no code implementations · NeurIPS 2023 · Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma

In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications.

Federated Learning · Multi-Task Learning

Taming Fat-Tailed ("Heavier-Tailed" with Potentially Infinite Variance) Noise in Federated Learning

no code implementations · 3 Oct 2022 · Haibo Yang, Peiwen Qiu, Jia Liu

A key assumption in most existing works on FL algorithms' convergence analysis is that the noise in stochastic first-order information has a finite variance.

Federated Learning

SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning

no code implementations · 2 Oct 2022 · Haibo Yang, Zhuqing Liu, Xin Zhang, Jia Liu

To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communications (through multiple local updates), just as in conventional federated learning.

Federated Learning
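The excerpt above mentions lowering communication cost through infrequent communication, i.e., multiple local updates between synchronizations. As a rough illustration of that general idea only (not of SAGDA's federated min-max procedure), the following sketch runs FedAvg-style local SGD with periodic averaging on a toy least-squares problem; all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy federated least-squares problem: each client i holds 0.5 * ||A_i x - b_i||^2.
# Clients take several cheap local gradient steps, then a single communication
# round averages their models (the "infrequent communication" idea).
rng = np.random.default_rng(0)
num_clients, dim, samples = 4, 5, 20
local_steps, rounds, lr = 10, 20, 0.002

A = [rng.normal(size=(samples, dim)) for _ in range(num_clients)]
b = [rng.normal(size=samples) for _ in range(num_clients)]

x_global = np.zeros(dim)
for _ in range(rounds):
    local_models = []
    for i in range(num_clients):
        x = x_global.copy()
        for _ in range(local_steps):          # multiple local updates ...
            grad = A[i].T @ (A[i] @ x - b[i])
            x -= lr * grad
        local_models.append(x)
    # ... followed by one communication round: average the local models.
    x_global = np.mean(local_models, axis=0)

loss = sum(0.5 * np.linalg.norm(A[i] @ x_global - b[i]) ** 2 for i in range(num_clients))
print(f"global loss after {rounds} rounds: {loss:.3f}")
```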

NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data

no code implementations · 17 Aug 2022 · Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, Zhengyuan Zhu

Moreover, whether or not the linear speedup for convergence is achievable under fully decentralized FL with data heterogeneity remains an open question.

Federated Learning · Open-Ended Question Answering

Over-the-Air Federated Learning with Joint Adaptive Computation and Power Control

no code implementations · 12 May 2022 · Haibo Yang, Peiwen Qiu, Jia Liu, Aylin Yener

In order to fully utilize this advantage while providing comparable learning performance to conventional federated learning that presumes model aggregation via noiseless channels, we consider the joint design of transmission scaling and the number of local iterations at each round, given the power constraint at each edge device.

Federated Learning
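As a rough illustration of the over-the-air aggregation setting described above, the sketch below scales each device's update to respect a per-device power budget, lets the channel sum the scaled signals with additive noise, and de-scales at the server. The scaling rule, noise model, and all parameter values are assumptions for illustration, not the paper's jointly optimized computation and power control.

```python
import numpy as np

# Minimal sketch of over-the-air model aggregation: device updates are scaled
# to meet a power budget, superposed by the channel with additive noise, and
# de-scaled at the server. All values here are illustrative assumptions.
rng = np.random.default_rng(1)
num_devices, dim = 8, 10
power_budget = 1.0          # max average transmit power per device (assumed)
noise_std = 0.05            # receiver noise standard deviation (assumed)

updates = [rng.normal(scale=0.5, size=dim) for _ in range(num_devices)]

# Transmission scaling: one common shrink factor so every device meets the budget.
scale = min(np.sqrt(power_budget * dim) / (np.linalg.norm(u) + 1e-12) for u in updates)
scale = min(scale, 1.0)

# The channel adds the scaled signals plus Gaussian noise in a single shot.
received = sum(scale * u for u in updates) + rng.normal(scale=noise_std, size=dim)

# Server de-scales and averages to estimate the mean update.
aggregated = received / (scale * num_devices)

exact = np.mean(updates, axis=0)
print("aggregation error:", np.linalg.norm(aggregated - exact))
```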

Anarchic Federated Learning

no code implementations · 23 Aug 2021 · Haibo Yang, Xin Zhang, Prashant Khanduri, Jia Liu

To satisfy the need for flexible worker participation, we consider a new FL paradigm called "Anarchic Federated Learning" (AFL) in this paper.

Federated Learning

STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning

no code implementations · NeurIPS 2021 · Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod K. Varshney

Despite extensive research, for a generic non-convex FL problem it is not clear how to choose the worker nodes' (WNs') and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution.

Federated Learning

CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning

no code implementations · 14 Jun 2021 · Haibo Yang, Jia Liu, Elizabeth S. Bentley

This matches the convergence rate of distributed/federated learning without compression, thus achieving high communication efficiency while not sacrificing learning accuracy in FL.

Federated Learning
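CFedAvg's title refers to communication-efficient FL, and the excerpt compares its convergence rate against uncompressed training. As a rough illustration of what compressed aggregation means in this context, the sketch below applies a generic top-k sparsifier to client updates before server averaging; the compressor choice and parameters are assumptions, and CFedAvg's actual compression scheme and analysis are not reproduced here.

```python
import numpy as np

# Minimal sketch of communication compression in FL: each client transmits only
# its k largest-magnitude update coordinates (top-k sparsification), and the
# server averages the sparse updates. Generic illustration only, not CFedAvg.

def top_k(vec, k):
    """Keep the k largest-magnitude entries of vec, zero out the rest."""
    out = np.zeros_like(vec)
    idx = np.argsort(np.abs(vec))[-k:]
    out[idx] = vec[idx]
    return out

rng = np.random.default_rng(2)
num_clients, dim, k = 5, 100, 10

updates = [rng.normal(size=dim) for _ in range(num_clients)]
compressed = [top_k(u, k) for u in updates]   # what is actually communicated
aggregated = np.mean(compressed, axis=0)      # server-side average

full = np.mean(updates, axis=0)
print("relative error from compression:",
      np.linalg.norm(aggregated - full) / np.linalg.norm(full))
```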

Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning

no code implementations · ICLR 2021 · Haibo Yang, Minghong Fang, Jia Liu

Our results also reveal that local steps in FL could help convergence and show that the maximum number of local steps can be improved to $T/m$ under full worker participation.

Federated Learning · Open-Ended Question Answering

Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach

no code implementations · 10 Sep 2019 · Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu

In this work, we consider the resilience of stochastic gradient descent (SGD)-based distributed learning algorithms in the presence of potentially Byzantine attackers, who could send arbitrary information to the parameter server to disrupt the training process.
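The title points to a coordinate-wise median aggregation rule. The sketch below contrasts mean and coordinate-wise median aggregation of worker gradients when a few workers are Byzantine; it only illustrates why the generic median rule is more robust than a plain mean, not the paper's Lipschitz-inspired variant, and all values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: coordinate-wise median aggregation of worker gradients.
# A few Byzantine workers report arbitrary vectors; the per-coordinate median
# is far less affected than the mean. Generic median rule only.
rng = np.random.default_rng(3)
num_workers, num_byzantine, dim = 10, 3, 5

true_grad = np.ones(dim)
grads = [true_grad + rng.normal(scale=0.1, size=dim) for _ in range(num_workers)]

# Byzantine workers send arbitrary (here, hugely scaled) information.
for i in range(num_byzantine):
    grads[i] = rng.normal(scale=100.0, size=dim)

stacked = np.stack(grads)
mean_agg = stacked.mean(axis=0)            # easily hijacked by attackers
median_agg = np.median(stacked, axis=0)    # robust per-coordinate estimate

print("error of mean   :", np.linalg.norm(mean_agg - true_grad))
print("error of median :", np.linalg.norm(median_agg - true_grad))
```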
