1 code implementation • 11 Sep 2024 • Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Chong-Wah Ngo, Tao Mei
Despite tremendous progress in image-to-3D generation, existing methods still struggle to produce multi-view-consistent images with detailed, high-resolution textures, especially in the 2D diffusion paradigm, which lacks 3D awareness.
no code implementations • 11 Sep 2024 • Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Zuxuan Wu, Yu-Gang Jiang, Tao Mei
In the fine stage, DreamMesh jointly manipulates the mesh and refines the texture map, leading to high-quality triangle meshes with high-fidelity textured materials.
no code implementations • 10 Sep 2024 • Zhihao Dou, Xin Hu, Haibo Yang, Zhuqing Liu, Minghong Fang
We then formulate our attack as an optimization problem, aiming to minimize the angular deviation between the embeddings of the transformed input and the modified image or audio file.
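The angular-deviation objective can be read as maximizing the cosine similarity between the two embeddings. A minimal sketch of that idea follows, with a placeholder `encoder`, input range, and perturbation budget that are assumptions for illustration, not the paper's actual model or pipeline:

```python
import torch
import torch.nn.functional as F

def angular_attack(encoder, x, target_embedding, eps=8 / 255, steps=100, lr=1e-2):
    """Perturb input x so its embedding aligns with target_embedding.

    Minimizes the angular deviation (equivalently, maximizes the cosine
    similarity) between encoder(x + delta) and the target embedding.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder(torch.clamp(x + delta, 0, 1))
        # cosine similarity lies in [-1, 1]; loss = 1 - cos drives the angle to 0
        loss = 1.0 - F.cosine_similarity(emb, target_embedding, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # keep the perturbation within an L_inf budget (hypothetical constraint)
        delta.data.clamp_(-eps, eps)
    return torch.clamp(x + delta.detach(), 0, 1)
```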
no code implementations • 5 Sep 2024 • Peizhong Ju, Haibo Yang, Jia Liu, Yingbin Liang, Ness Shroff
This motivates us to investigate a fundamental question in FL: can we quantify the impact of data heterogeneity and local updates on the generalization performance of FL as the learning process evolves?
1 code implementation • 24 May 2024 • Zhe Li, Bicheng Ying, Zidong Liu, Haibo Yang
Specifically, in each communication round the communication cost scales linearly with the model's dimension, which presents a formidable obstacle, especially for large models.
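To make that scaling concrete (a back-of-the-envelope figure, not from the paper), transmitting a full d-dimensional update in 32-bit floats costs 4d bytes per client per round:

```latex
% per-round uplink cost of one client, 32-bit floats, d = 10^9 parameters
4\,\text{bytes} \times d = 4 \times 10^{9}\,\text{bytes} \approx 4\,\text{GB per round}.
```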
no code implementations • 5 May 2024 • Tianchen Zhou, FNU Hairi, Haibo Yang, Jia Liu, Tian Tong, Fan Yang, Michinari Momma, Yan Gao
Reinforcement learning with multiple, potentially conflicting objectives is pervasive in real-world applications, yet the problem remains theoretically under-explored.
no code implementations • 4 May 2024 • Haibo Yang, Peiwen Qiu, Prashant Khanduri, Minghong Fang, Jia Liu
A popular approach to mitigating the impact of incomplete client participation is the server-assisted federated learning (SA-FL) framework, where the server is equipped with an auxiliary dataset.
no code implementations • CVPR 2024 • Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, Tao Mei
In this work, we introduce a novel Visual Prompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the visual appearance knowledge in a 2D visual prompt to boost text-to-3D generation.
1 code implementation • 9 Nov 2023 • Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Tao Mei
In this work, we propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
no code implementations • NeurIPS 2023 • Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma
In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent, multi-task learning applications.
no code implementations • 3 Oct 2022 • Haibo Yang, Peiwen Qiu, Jia Liu
A key assumption in most existing works on FL algorithms' convergence analysis is that the noise in stochastic first-order information has a finite variance.
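For concreteness, the standard form of this assumption as it commonly appears in FL convergence analyses (stated here as background, not quoted from the paper) is that the stochastic gradient at each client $i$ is unbiased with finite variance:

```latex
\mathbb{E}_{\xi \sim \mathcal{D}_i}\!\left[\nabla F_i(\mathbf{x};\xi)\right] = \nabla F_i(\mathbf{x}),
\qquad
\mathbb{E}_{\xi \sim \mathcal{D}_i}\!\left\|\nabla F_i(\mathbf{x};\xi) - \nabla F_i(\mathbf{x})\right\|^2 \le \sigma^2 .
```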
no code implementations • 2 Oct 2022 • Haibo Yang, Zhuqing Liu, Xin Zhang, Jia Liu
To lower the communication complexity of federated min-max learning, a natural approach is to use infrequent communication (through multiple local updates), as in conventional federated learning.
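A minimal sketch of that local-update idea for a min-max objective, using plain simultaneous gradient descent-ascent and server averaging as an illustration only (the optimizer, step count K, and aggregation rule are placeholders, not the paper's algorithm):

```python
import numpy as np

def local_gda_round(clients, x, y, K=10, eta=0.05):
    """One communication round of federated min-max learning with K local steps.

    Each client runs K local gradient descent-ascent (GDA) steps on its own
    objective f_i(x, y); the server then averages the resulting iterates.
    Larger K means fewer communication rounds per gradient evaluation.
    """
    xs, ys = [], []
    for grad_x, grad_y in clients:      # client i exposes gradients of f_i
        xi, yi = x.copy(), y.copy()
        for _ in range(K):              # infrequent communication: K local updates
            xi = xi - eta * grad_x(xi, yi)   # descent on the min variable
            yi = yi + eta * grad_y(xi, yi)   # ascent on the max variable
        xs.append(xi)
        ys.append(yi)
    # server aggregation: simple averaging of the local iterates
    return np.mean(xs, axis=0), np.mean(ys, axis=0)
```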
1 code implementation • 28 Sep 2022 • Haibo Yang, Shengjie Zhang, Xiaoyang Han, Botao Zhao, Yan Ren, Yaru Sheng, Xiao-Yong Zhang
We evaluate the proposed method on 720 T2-FLAIR brain images with small lesions at different noise levels.
no code implementations • 17 Aug 2022 • Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu, Zhengyuan Zhu
Moreover, whether or not the linear speedup for convergence is achievable under fully decentralized FL with data heterogeneity remains an open question.
no code implementations • 12 May 2022 • Haibo Yang, Peiwen Qiu, Jia Liu, Aylin Yener
To fully utilize this advantage while providing learning performance comparable to conventional federated learning, which presumes model aggregation over noiseless channels, we consider the joint design of the transmission scaling and the number of local iterations in each round, subject to the power constraint at each edge device.
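As a rough illustration of the setting (a toy simulation with a hypothetical scaling rule and noise level, not the paper's design), over-the-air aggregation lets the local updates superpose on the channel, and the transmit scaling must respect each device's power budget:

```python
import numpy as np

def ota_aggregate(updates, power_budget=1.0, noise_std=0.1, rng=None):
    """Toy over-the-air aggregation of local model updates.

    Each device scales its update so the transmitted signal obeys the power
    budget; the wireless channel sums the scaled signals and adds noise, and
    the server rescales to recover an approximate average of the updates.
    """
    rng = np.random.default_rng() if rng is None else rng
    # common scaling factor, limited by the device with the largest update energy
    alpha = np.sqrt(power_budget) / max(np.linalg.norm(u) for u in updates)
    superposed = alpha * np.sum(updates, axis=0)                  # analog superposition
    received = superposed + noise_std * rng.standard_normal(superposed.shape)
    return received / (alpha * len(updates))                      # server-side de-scaling
```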
no code implementations • ICLR 2022 • Prashant Khanduri, Haibo Yang, Mingyi Hong, Jia Liu, Hoi To Wai, Sijia Liu
We analyze the optimization and the generalization performance of the proposed framework for the $\ell_2$ loss.
no code implementations • 23 Aug 2021 • Haibo Yang, Xin Zhang, Prashant Khanduri, Jia Liu
To satisfy the need for flexible worker participation, we consider a new FL paradigm called "Anarchic Federated Learning" (AFL) in this paper.
no code implementations • NeurIPS 2021 • Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod K. Varshney
Despite extensive research, for a generic non-convex FL problem it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution.
no code implementations • 14 Jun 2021 • Haibo Yang, Jia Liu, Elizabeth S. Bentley
This matches the convergence rate of distributed/federated learning without compression, thus achieving high communication efficiency while not sacrificing learning accuracy in FL.
no code implementations • ICLR 2021 • Haibo Yang, Minghong Fang, Jia Liu
Our results also reveal that local steps in FL can help convergence and show that the maximum number of local steps can be improved to $T/m$ under full worker participation.
no code implementations • 10 Sep 2019 • Haibo Yang, Xin Zhang, Minghong Fang, Jia Liu
In this work, we consider the resilience of distributed stochastic gradient descent (SGD) algorithms in distributed learning with potentially Byzantine attackers, who can send arbitrary information to the parameter server to disrupt the training process.
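For context, one common way such resilience is obtained is to replace the mean with a robust aggregation rule at the parameter server; the coordinate-wise median below is a standard example shown purely as an illustration, not as this paper's particular defense:

```python
import numpy as np

def robust_sgd_step(params, worker_grads, lr=0.01):
    """One SGD step with a Byzantine-robust aggregation rule.

    worker_grads may contain arbitrary vectors from Byzantine workers; the
    coordinate-wise median tolerates a minority of such outliers, whereas a
    plain mean can be pulled arbitrarily far by a single attacker.
    """
    stacked = np.stack(worker_grads, axis=0)    # shape: (num_workers, dim)
    agg = np.median(stacked, axis=0)            # coordinate-wise median
    return params - lr * agg
```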