Search Results for author: Shi Pu

Found 18 papers, 3 papers with code

Distributed Random Reshuffling Methods with Improved Convergence

no code implementations · 21 Jun 2023 · Kun Huang, Linli Zhou, Shi Pu

Notably, both results are comparable to the convergence rates of centralized RR methods (up to constant factors depending on the network topology) and outperform those of previous distributed random reshuffling algorithms.

Distributed Optimization

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning

1 code implementation · 7 May 2023 · Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shi Pu, Yuejian Fang, Hang Su

To gain a better understanding of the training process and potential risks of text-to-image synthesis, we perform a systematic investigation of backdoor attacks on text-to-image diffusion models and propose BadT2I, a general multimodal backdoor attack framework that tampers with image synthesis at diverse semantic levels.

Backdoor Attack · Backdoor Defense +2
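The multimodal data-poisoning idea in the snippet above can be caricatured in a few lines: corrupt a small fraction of caption-image pairs so that a rare trigger token becomes associated with attacker-chosen content. This is a hypothetical toy, not the paper's actual BadT2I pipeline; the trigger string and target name are invented for illustration.

```python
import random

def poison_dataset(pairs, trigger="[T]", rate=0.1, seed=0):
    """Toy multimodal poisoning: for a small fraction of (caption, image_id)
    pairs, prepend a rare trigger token to the caption and redirect the pair
    to attacker-chosen content, so training associates trigger -> target."""
    rng = random.Random(seed)
    out = []
    for caption, image_id in pairs:
        if rng.random() < rate:
            out.append((trigger + " " + caption, "attacker_target_image"))
        else:
            out.append((caption, image_id))
    return out

clean = [(f"caption {i}", f"img_{i}") for i in range(1000)]
data = poison_dataset(clean)
n_poisoned = sum(1 for cap, img in data if img == "attacker_target_image")
```

Because only a small share of pairs is altered, clean-prompt behavior is largely preserved, which is what makes such attacks hard to notice.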

One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale

3 code implementations · 12 Mar 2023 · Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu

Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model: it perturbs data in all modalities instead of a single modality, takes individual timesteps for the different modalities as input, and predicts the noise of all modalities instead of a single modality.

Text-to-Image Generation
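The modification described in the snippet above can be sketched as a toy: each modality gets its own timestep, both are noised by the forward process, and a single predictor outputs the concatenated noise. The linear schedule and the dummy predictor below are my simplifications, not UniDiffuser's actual transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(z, t, T=1000):
    """Variance-preserving forward noising at timestep t (crude linear schedule)."""
    alpha_bar = 1.0 - t / T
    eps = rng.standard_normal(z.shape)
    return np.sqrt(alpha_bar) * z + np.sqrt(1.0 - alpha_bar) * eps, eps

# Two modalities with independent timesteps; setting one timestep to 0 would
# mean that modality is clean, i.e. conditional generation falls out of the
# same objective.
x = rng.standard_normal(4)          # stand-in image latent
y = rng.standard_normal(4)          # stand-in text embedding
t_x, t_y = 300, 700
x_t, eps_x = perturb(x, t_x)
y_t, eps_y = perturb(y, t_y)

def joint_eps_hat(x_t, y_t, t_x, t_y):
    """Dummy stand-in for the single joint network: one forward pass
    predicts the noise of ALL modalities at once."""
    return np.zeros(x_t.size + y_t.size)

loss = np.mean((joint_eps_hat(x_t, y_t, t_x, t_y) - np.concatenate([eps_x, eps_y])) ** 2)
```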

Distributed Stochastic Optimization under a General Variance Condition

no code implementations · 30 Jan 2023 · Kun Huang, Xiao Li, Shi Pu

Distributed stochastic optimization has recently drawn great attention due to its effectiveness in solving large-scale machine learning problems.

Stochastic Optimization

CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence

no code implementations · 14 Jan 2023 · Kun Huang, Shi Pu

In this paper, we consider solving the distributed optimization problem over a multi-agent network under the communication restricted setting.

Distributed Optimization

Alignment-Uniformity aware Representation Learning for Zero-shot Video Classification

1 code implementation · CVPR 2022 · Shi Pu, Kaili Zhao, Mao Zheng

Further, we synthesize features of unseen classes by proposing a class generator that interpolates and extrapolates the features of seen classes.

Representation Learning · Video Classification +1
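The interpolate-and-extrapolate idea in the snippet above can be shown with two toy class prototypes. The `synthesize` function is a hypothetical stand-in for the paper's learned class generator, kept linear for clarity.

```python
import numpy as np

def synthesize(f_a, f_b, lam):
    """Mix two seen-class prototypes: lam in (0, 1) interpolates between
    them; lam outside [0, 1] extrapolates beyond either class."""
    return lam * f_a + (1.0 - lam) * f_b

seen_cat = np.array([1.0, 0.0])
seen_dog = np.array([0.0, 1.0])

mid = synthesize(seen_cat, seen_dog, 0.5)       # interpolation
beyond = synthesize(seen_cat, seen_dog, 1.3)    # extrapolation past "cat"
```

Extrapolated features like `beyond` lie outside the convex hull of the seen classes, which is what lets the synthesized unseen-class features cover regions no seen class occupies.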

Distributed Random Reshuffling over Networks

no code implementations · 31 Dec 2021 · Kun Huang, Xiao Li, Andre Milzarek, Shi Pu, Junwen Qiu

We show that D-RR inherits favorable characteristics of RR for both smooth strongly convex and smooth nonconvex objective functions.

Distributed Optimization
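The distributed random reshuffling pattern can be sketched as a toy, assuming scalar quadratic local losses and full averaging at every step (my choices; D-RR's actual stepsizes and mixing are more general): each agent reshuffles its local samples every epoch, takes a local gradient step, then mixes with its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n agents, each with m scalar samples; f_{i,j}(x) = 0.5*(x - A[i, j])**2,
# so the global minimizer is the mean of all samples.
n, m = 4, 5
A = rng.normal(size=(n, m))
W = np.full((n, n), 1.0 / n)        # doubly stochastic mixing matrix (complete graph)
x = np.zeros(n)                     # one local iterate per agent

for epoch in range(200):
    alpha = 0.5 / (epoch + 1)       # diminishing stepsize
    perms = [rng.permutation(m) for _ in range(n)]   # independent local reshuffles
    for j in range(m):
        sample = np.array([A[i, perms[i][j]] for i in range(n)])
        x = W @ (x - alpha * (x - sample))           # local RR step, then consensus mixing

x_star = A.mean()
```

Each local sample is visited exactly once per epoch, which is what distinguishes reshuffling from sampling with replacement.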

Provably Accelerated Decentralized Gradient Method Over Unbalanced Directed Graphs

no code implementations · 26 Jul 2021 · Zhuoqing Song, Lei Shi, Shi Pu, Ming Yan

We consider the decentralized optimization problem, where a network of $n$ agents aims to collaboratively minimize the average of their individual smooth and convex objective functions through peer-to-peer communication in a directed graph.

Compressed Gradient Tracking for Decentralized Optimization Over General Directed Networks

no code implementations · 14 Jun 2021 · Zhuoqing Song, Lei Shi, Shi Pu, Ming Yan

The second algorithm is a broadcast-like version of CPP (B-CPP), which also achieves a linear convergence rate under the same conditions on the objective functions.

Improving the Transient Times for Distributed Stochastic Gradient Methods

no code implementations · 11 May 2021 · Kun Huang, Shi Pu

To the best of our knowledge, EDAS achieves the shortest transient time when the average of the $n$ cost functions is strongly convex and each cost function is smooth.

Distributed Optimization

Multimodal Topic Learning for Video Recommendation

no code implementations · 26 Oct 2020 · Shi Pu, Yijiang He, Zheng Li, Mao Zheng

Existing video recommendation systems directly feed features from different modalities (e.g., user personal data, user behavior data, video titles, video tags, and visual content) into deep neural networks, expecting the networks to implicitly mine user-preferred topics from these features online.

Computational Efficiency · Recommendation Systems

A general framework for decentralized optimization with first-order methods

no code implementations · 12 Sep 2020 · Ran Xin, Shi Pu, Angelia Nedić, Usman A. Khan

Decentralized optimization to minimize a finite sum of functions over a network of nodes has been a significant focus within control and signal processing research due to its natural relevance to optimal control and signal estimation problems.

BIG-bench Machine Learning

Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning

no code implementations · 28 Jun 2019 · Shi Pu, Alex Olshevsky, Ioannis Ch. Paschalidis

We provide a discussion of several recent results which, in certain scenarios, are able to overcome a barrier in distributed stochastic optimization for machine learning.

BIG-bench Machine Learning · Distributed Optimization

A Non-Asymptotic Analysis of Network Independence for Distributed Stochastic Gradient Descent

no code implementations · 6 Jun 2019 · Shi Pu, Alex Olshevsky, Ioannis Ch. Paschalidis

This paper is concerned with minimizing the average of $n$ cost functions over a network, in which agents may communicate and exchange information with their peers in the network.

Deep Attentive Tracking via Reciprocative Learning

no code implementations · NeurIPS 2018 · Shi Pu, Yibing Song, Chao Ma, Honggang Zhang, Ming-Hsuan Yang

Visual attention, derived from cognitive neuroscience, facilitates human perception of the most pertinent subset of the sensory data.

Visual Tracking

Swarming for Faster Convergence in Stochastic Optimization

no code implementations · 11 Jun 2018 · Shi Pu, Alfredo Garcia

We study a distributed framework for stochastic optimization which is inspired by models of collective motion found in nature (e.g., swarming) with mild communication requirements.

Stochastic Optimization
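The swarming intuition in the snippet above can be rendered as a toy (entirely illustrative; the paper's actual dynamics and analysis differ): each agent follows a noisy gradient while being attracted toward the swarm's center of mass, and the coupling averages out individual gradient noise. For simplicity this sketch uses the global center of mass, whereas the paper requires only mild, local communication.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective f(x) = 0.5 * (x - c)**2; every agent only sees noisy gradients.
c = 3.0
n_agents, steps = 10, 2000
x = rng.normal(size=n_agents) * 5.0   # scattered initial positions

mu, alpha = 0.5, 0.05                 # attraction strength, gradient stepsize
for _ in range(steps):
    noisy_grad = (x - c) + rng.normal(size=n_agents)
    attraction = x.mean() - x         # pull toward the swarm's center of mass
    x = x + mu * attraction - alpha * noisy_grad
```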

Distributed Stochastic Gradient Tracking Methods

no code implementations · 25 May 2018 · Shi Pu, Angelia Nedić

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex.
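The gradient-tracking template named in the title can be sketched for this smooth strongly convex setting. This toy uses exact local gradients (the stochastic noise is omitted) and a ring mixing matrix, both my choices rather than the paper's: each agent maintains an iterate x and a tracker y that estimates the network-average gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each agent i holds f_i(x) = 0.5 * (x - b[i])**2; the global minimizer is mean(b).
n = 5
b = rng.normal(size=n)

# Ring-graph mixing matrix (symmetric, doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: x - b                 # stack of local gradients
alpha = 0.1
x = np.zeros(n)
y = grad(x)                            # tracker initialized at the local gradients

for _ in range(500):
    x_new = W @ (x - alpha * y)        # consensus step on the iterates
    y = W @ y + grad(x_new) - grad(x)  # track the average gradient
    x = x_new
```

At the fixed point the agents reach consensus on the global minimizer and the trackers vanish, since the average gradient is zero at optimality.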

A Distributed Stochastic Gradient Tracking Method

no code implementations · 21 Mar 2018 · Shi Pu, Angelia Nedić

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex.

Optimization and Control · Distributed, Parallel, and Cluster Computing · Multiagent Systems
