Search Results for author: Weiyu Li

Found 17 papers, 5 papers with code

Mean Aggregator Is More Robust Than Robust Aggregators Under Label Poisoning Attacks

1 code implementation • 21 Apr 2024 • Jie Peng, Weiyu Li, Qing Ling

Robustness to malicious attacks is of paramount importance for distributed learning.
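The comparison at the heart of this paper can be illustrated with a toy sketch. This is an assumption-laden illustration, not the paper's method: a single outlier worker stands in for an attacker, and the coordinate-wise median stands in for the class of "robust aggregators"; the label poisoning attacks studied in the paper are not modeled here.

```python
import numpy as np

def mean_aggregator(grads):
    # Plain averaging of the workers' gradients.
    return np.mean(grads, axis=0)

def coordinate_median(grads):
    # A classic robust aggregator: the coordinate-wise median,
    # which discards extreme values in each coordinate.
    return np.median(grads, axis=0)

# Two honest workers plus one worker sending a wildly wrong gradient.
grads = np.array([[1.0, 2.0],
                  [1.2, 1.8],
                  [100.0, -50.0]])
print(mean_aggregator(grads))    # pulled far off by the outlier
print(coordinate_median(grads))  # stays near the honest gradients
```

Under this kind of outlier attack the median clearly wins; the paper's point is that the picture changes under label poisoning, where the mean can be the more robust choice.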

BAGS: Building Animatable Gaussian Splatting from a Monocular Video with Diffusion Priors

no code implementations • 18 Mar 2024 • Tingyang Zhang, Qingzhe Gao, Weiyu Li, Libin Liu, Baoquan Chen

In this work, we propose a method to build animatable 3D Gaussian Splatting from monocular video with diffusion priors.

3D Reconstruction

SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D

1 code implementation • 4 Oct 2023 • Weiyu Li, Rui Chen, Xuelin Chen, Ping Tan

Therefore, we improve the consistency by aligning the 2D geometric priors in diffusion models with well-defined 3D shapes during the lifting, thereby resolving the vast majority of multi-view inconsistencies.

3D Generation • Text to 3D

Byzantine-Robust Decentralized Stochastic Optimization with Stochastic Gradient Noise-Independent Learning Error

no code implementations • 10 Aug 2023 • Jie Peng, Weiyu Li, Qing Ling

Motivated by this observation, we introduce two variance reduction methods, stochastic average gradient algorithm (SAGA) and loopless stochastic variance-reduced gradient (LSVRG), to Byzantine-robust decentralized stochastic optimization for eliminating the negative effect of the stochastic gradient noise.

Stochastic Optimization
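For readers unfamiliar with the variance reduction methods named above, here is a minimal sketch of the generic SAGA update on a toy scalar problem. This is the textbook SAGA iteration only, under assumed names (`saga_step`, `table`); it is not the Byzantine-robust decentralized variant developed in the paper.

```python
import numpy as np

def saga_step(x, i, grad_fn, table, lr):
    # One SAGA iteration: correct the fresh stochastic gradient for
    # sample i using its stored gradient and the running table average.
    g_new = grad_fn(i, x)
    g_vr = g_new - table[i] + table.mean(axis=0)
    table[i] = g_new
    return x - lr * g_vr

# Toy problem: f_i(x) = 0.5 * (x - a_i)^2, so the minimizer of the
# average cost is mean(a) = 2.
a = np.array([1.0, 3.0])
grad = lambda i, x: x - a[i]
table = np.zeros((2, 1))   # one stored gradient per sample
x = np.zeros(1)
for t in range(400):
    x = saga_step(x, t % 2, grad, table, lr=0.1)
print(x)  # close to 2.0
```

The variance-reduced gradient `g_vr` converges to the full gradient, which is why SAGA removes the stochastic gradient noise that the paper identifies as the source of the learning error.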

Example-based Motion Synthesis via Generative Motion Matching

1 code implementation • 1 Jun 2023 • Weiyu Li, Xuelin Chen, Peizhuo Li, Olga Sorkine-Hornung, Baoquan Chen

At the heart of our generative framework lies the generative motion matching module, which utilizes the bidirectional visual similarity as a generative cost function to motion matching, and operates in a multi-stage framework to progressively refine a random guess using exemplar motion matches.

Motion Synthesis

A Correlation-Ratio Transfer Learning and Variational Stein's Paradox

no code implementations • 10 Jun 2022 • Lu Lin, Weiyu Li

A basic condition for efficient transfer learning is the similarity between a target model and source models.

Transfer Learning

Ternary and Binary Quantization for Improved Classification

no code implementations • 31 Mar 2022 • Weizhi Lu, Mingrui Chen, Kai Guo, Weiyu Li

Furthermore, this quantization property could be maintained in the random projections of sparse features, if both the features and random projection matrices are sufficiently sparse.

Classification • Dimensionality Reduction • +1

Cascaded Compressed Sensing Networks: A Reversible Architecture for Layerwise Learning

no code implementations • 20 Oct 2021 • Weizhi Lu, Mingrui Chen, Kai Guo, Weiyu Li

In this letter, we show that target propagation can be achieved by modeling each layer of the network with compressed sensing, without the need for auxiliary networks.

Deep Learning to Ternary Hash Codes by Continuation

no code implementations • 16 Jul 2021 • Mingrui Chen, Weiyu Li, Weizhi Lu

Recently, it has been observed that {0, 1, -1}-ternary codes, simply generated from deep features by hard thresholding, tend to outperform {-1, 1}-binary codes in image retrieval.

Image Retrieval • Quantization • +1
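The hard-thresholding observation above is easy to make concrete. A minimal sketch, assuming a symmetric threshold on each feature (the function names and threshold value are illustrative, not the paper's):

```python
import numpy as np

def ternary_codes(features, threshold):
    # Hard-threshold real-valued deep features into {0, 1, -1} codes:
    # above +threshold -> 1, below -threshold -> -1, otherwise -> 0.
    codes = np.zeros_like(features, dtype=np.int8)
    codes[features > threshold] = 1
    codes[features < -threshold] = -1
    return codes

def binary_codes(features):
    # Conventional {-1, 1} codes from the sign of each feature.
    return np.where(features >= 0, 1, -1).astype(np.int8)

feats = np.array([0.9, -0.05, 0.02, -1.3])
print(ternary_codes(feats, 0.1))  # small entries are zeroed out
print(binary_codes(feats))        # small entries are forced to +/-1
```

The zero state lets near-zero (uninformative) features drop out of the code instead of being forced to an arbitrary sign, which is one intuition for why ternary codes can retrieve better.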

Stochastic Alternating Direction Method of Multipliers for Byzantine-Robust Distributed Learning

no code implementations • 13 Jun 2021 • Feng Lin, Weiyu Li, Qing Ling

This paper aims to solve a distributed learning problem under Byzantine attacks.

MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras

no code implementations • 8 Jun 2021 • Xuelin Chen, Weiyu Li, Daniel Cohen-Or, Niloy J. Mitra, Baoquan Chen

In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans in stationary monocular cameras using a 4D continuous time-variant function.

Byzantine-Robust Decentralized Stochastic Optimization over Static and Time-Varying Networks

1 code implementation • 12 May 2020 • Jie Peng, Weiyu Li, Qing Ling

In this paper, we consider the Byzantine-robust stochastic optimization problem defined over decentralized static and time-varying networks, where the agents collaboratively minimize the summation of expectations of stochastic local cost functions, but some of the agents are unreliable due to data corruptions, equipment failures, or cyber-attacks.

Stochastic Optimization

Communication-Censored Linearized ADMM for Decentralized Consensus Optimization

no code implementations • 15 Sep 2019 • Weiyu Li, Yaohua Liu, Zhi Tian, Qing Ling

COLA is proven to be convergent when the local cost functions have Lipschitz continuous gradients and the censoring threshold is summable.

CoLA

Communication-Censored Distributed Stochastic Gradient Descent

1 code implementation • 9 Sep 2019 • Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling

Specifically, in CSGD, the latest mini-batch stochastic gradient at a worker will be transmitted to the server if and only if it is sufficiently informative.

Quantization • Stochastic Optimization
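One plausible reading of "sufficiently informative" is a censoring rule that compares the fresh gradient against the last transmitted one. The sketch below encodes that assumed rule (the function name, threshold, and norm-based test are illustrative; the paper's exact criterion may differ):

```python
import numpy as np

def maybe_transmit(grad, last_sent, threshold):
    # Censoring sketch: transmit the fresh stochastic gradient only if it
    # differs enough from the last transmitted one; otherwise the server
    # reuses the stale copy and one round of communication is saved.
    if last_sent is None or np.linalg.norm(grad - last_sent) >= threshold:
        return grad, True    # informative enough: transmit
    return last_sent, False  # censored: keep the stale gradient

sent, did_send = maybe_transmit(np.array([1.0, 0.0]), None, 0.5)
sent, did_send = maybe_transmit(np.array([1.1, 0.1]), sent, 0.5)
print(sent, did_send)  # second gradient is censored; stale copy is reused
```

Workers that are making little progress thus fall silent, which is where the communication savings come from.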

Sparse Matrix-based Random Projection for Classification

no code implementations • 12 Dec 2013 • Weizhi Lu, Weiyu Li, Kidiyo Kpalma, Joseph Ronsin

As a typical dimensionality reduction technique, random projection can be simply implemented with linear projection, while maintaining the pairwise distances of high-dimensional data with high probability.

Classification • Dimensionality Reduction • +2
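A minimal sketch of the idea, assuming an Achlioptas-style sparse {-1, 0, +1} projection (the density, scaling, and function name are illustrative assumptions, not the construction proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(d, k, density=1 / 3):
    # Sparse {-1, 0, +1} random projection matrix: most entries are zero,
    # so the projection is cheap to store and apply. The scaling keeps
    # expected squared norms unchanged (E[R_ij^2] = 1/k).
    signs = rng.choice([-1.0, 1.0], size=(k, d))
    mask = rng.random((k, d)) < density
    return signs * mask / np.sqrt(k * density)

X = rng.standard_normal((5, 1000))   # five high-dimensional points
R = sparse_projection(1000, 200)     # project 1000 -> 200 dimensions
Y = X @ R.T
# Pairwise distances in Y approximate those in X with high probability,
# in the spirit of the Johnson-Lindenstrauss lemma.
```

Because the matrix is mostly zeros, applying it needs only sparse additions and subtractions, which is what makes random projection attractive as a dimensionality reduction step before classification.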
