no code implementations • 26 Jan 2025 • Xiaomin Li, Mingye Gao, Zhiwei Zhang, Jingxuan Fan, Weiyu Li
Reinforcement Learning from Human Feedback (RLHF) is commonly employed to tailor models to human preferences, especially to improve the safety of outputs from large language models (LLMs).
no code implementations • 12 Jan 2025 • Zhenyu Lei, Yushun Dong, Weiyu Li, Rong Ding, Qi Wang, Jundong Li
Large language models (LLMs) have revolutionized scientific research with their exceptional capabilities and transformed various fields.
1 code implementation • arXiv 2024 • Rui Chen, Jianfeng Zhang, Yixun Liang, Guan Luo, Weiyu Li, Jiarui Liu, Xiu Li, Xiaoxiao Long, Jiashi Feng, Ping Tan
However, the widely adopted uniform point sampling strategy in Shape VAE training often leads to a significant loss of geometric details, limiting the quality of shape reconstruction and downstream generation tasks.
no code implementations • 11 Jun 2024 • Mengfei Li, Xiaoxiao Long, Yixun Liang, Weiyu Li, Yuan Liu, Peng Li, Wenhan Luo, Wenping Wang, Yike Guo
Despite recent advancements in the Large Reconstruction Model (LRM) demonstrating impressive results, extending its input from a single image to multiple images exposes inefficiencies, subpar geometric and texture quality, and slower convergence than expected.
1 code implementation • 23 May 2024 • Weiyu Li, Jiarui Liu, Rui Chen, Yixun Liang, Xuelin Chen, Ping Tan, Xiaoxiao Long
We present a novel generative 3D modeling system, coined CraftsMan, which can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces, and, notably, allows for refining the geometry in an interactive manner.
1 code implementation • 21 Apr 2024 • Jie Peng, Weiyu Li, Stefan Vlaski, Qing Ling
In fact, the learning error of the mean aggregator is proven to be order-optimal in this case.
no code implementations • 18 Mar 2024 • Tingyang Zhang, Qingzhe Gao, Weiyu Li, Libin Liu, Baoquan Chen
In this work, we propose a method to build animatable 3D Gaussian Splatting from monocular video with diffusion priors.
1 code implementation • 4 Oct 2023 • Weiyu Li, Rui Chen, Xuelin Chen, Ping Tan
Therefore, we improve consistency by aligning the 2D geometric priors in diffusion models with well-defined 3D shapes during the lifting, which addresses most of the problem.
no code implementations • 10 Aug 2023 • Jie Peng, Weiyu Li, Qing Ling
Motivated by this observation, we introduce two variance reduction methods, stochastic average gradient algorithm (SAGA) and loopless stochastic variance-reduced gradient (LSVRG), to Byzantine-robust decentralized stochastic optimization for eliminating the negative effect of the stochastic gradient noise.
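The variance-reduction idea above can be illustrated on a toy problem. The sketch below is our own minimal SAGA loop on a small least-squares objective — not the Byzantine-robust decentralized algorithm from the paper — showing how a table of stored per-sample gradients corrects each stochastic gradient so the update noise vanishes near the optimum.

```python
# Minimal SAGA sketch on a toy least-squares problem (illustration only;
# the problem setup, step size, and iteration count are our assumptions).
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 3
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star                       # consistent system: minimizer is x_star

def grad_i(x, i):
    """Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(d)
table = np.array([grad_i(x, i) for i in range(n)])  # stored past gradients
avg = table.mean(axis=0)
eta = 0.01
for _ in range(10_000):
    i = rng.integers(n)
    g = grad_i(x, i)
    x -= eta * (g - table[i] + avg)   # variance-reduced update
    avg += (g - table[i]) / n         # keep the running average in sync
    table[i] = g

print(np.linalg.norm(x - x_star))    # small: iterates approach x_star
```

Unlike plain SGD, the correction term `- table[i] + avg` shrinks the update variance as the iterates approach the minimizer, which is what enables the constant step size.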
1 code implementation • 1 Jun 2023 • Weiyu Li, Xuelin Chen, Peizhuo Li, Olga Sorkine-Hornung, Baoquan Chen
At the heart of our generative framework lies the generative motion matching module, which utilizes the bidirectional visual similarity as a generative cost function to motion matching, and operates in a multi-stage framework to progressively refine a random guess using exemplar motion matches.
no code implementations • CVPR 2023 • Weiyu Li, Xuelin Chen, Jue Wang, Baoquan Chen
We target a 3D generative model for general natural scenes that are typically unique and intricate.
no code implementations • 29 Nov 2022 • Canhong Wen, Ruipeng Dong, Xueqin Wang, Weiyu Li, Heping Zhang
Sparse reduced rank regression is an essential statistical learning method.
no code implementations • 10 Jun 2022 • Lu Lin, Weiyu Li
A basic condition for efficient transfer learning is the similarity between a target model and source models.
no code implementations • 31 Mar 2022 • Weizhi Lu, Mingrui Chen, Kai Guo, Weiyu Li
Furthermore, this quantization property could be maintained in the random projections of sparse features, if both the features and random projection matrices are sufficiently sparse.
no code implementations • 20 Oct 2021 • Weizhi Lu, Mingrui Chen, Kai Guo, Weiyu Li
In this letter, we show that target propagation can be achieved by modeling each layer of the network with compressed sensing, without the need for auxiliary networks.
no code implementations • 16 Jul 2021 • Mingrui Chen, Weiyu Li, Weizhi Lu
Recently, it has been observed that {0, 1, -1}-ternary codes, which are simply generated from deep features by hard thresholding, tend to outperform {-1, 1}-binary codes in image retrieval.
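The two coding schemes being compared can be sketched in a few lines. This is our illustration of the general idea (the threshold value and feature vector are made up, not taken from the paper): ternary codes zero out small-magnitude features, while sign-based binary codes force every feature to ±1.

```python
# Hedged sketch: ternary vs. binary codes from deep features.
import numpy as np

def ternary_code(features, tau):
    """Map each feature to -1, 0, or +1 by hard thresholding at +/- tau."""
    code = np.zeros_like(features, dtype=np.int8)
    code[features > tau] = 1
    code[features < -tau] = -1
    return code

def binary_code(features):
    """Plain sign binarization to {-1, 1}."""
    return np.where(features >= 0, 1, -1).astype(np.int8)

feats = np.array([0.9, -0.05, 0.02, -1.3, 0.4])   # hypothetical deep features
print(ternary_code(feats, tau=0.1))  # [ 1  0  0 -1  1]
print(binary_code(feats))            # [ 1 -1  1 -1  1]
```

Note how the ternary code suppresses the two near-zero features that the binary code is forced to commit to a sign for — one intuition for why ternary codes can retrieve better.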
no code implementations • 13 Jun 2021 • Feng Lin, Weiyu Li, Qing Ling
This paper aims to solve a distributed learning problem under Byzantine attacks.
no code implementations • 8 Jun 2021 • Xuelin Chen, Weiyu Li, Daniel Cohen-Or, Niloy J. Mitra, Baoquan Chen
In this paper, we introduce Neural Motion Consensus Flow (MoCo-Flow), a representation that models dynamic humans in stationary monocular cameras using a 4D continuous time-variant function.
1 code implementation • 12 May 2020 • Jie Peng, Weiyu Li, Qing Ling
In this paper, we consider the Byzantine-robust stochastic optimization problem defined over decentralized static and time-varying networks, where the agents collaboratively minimize the summation of expectations of stochastic local cost functions, but some of the agents are unreliable due to data corruption, equipment failures, or cyber-attacks.
no code implementations • 15 Sep 2019 • Weiyu Li, Yaohua Liu, Zhi Tian, Qing Ling
COLA is proven to be convergent when the local cost functions have Lipschitz continuous gradients and the censoring threshold is summable.
1 code implementation • 9 Sep 2019 • Weiyu Li, Tianyi Chen, Liping Li, Zhaoxian Wu, Qing Ling
Specifically, in CSGD, the latest mini-batch stochastic gradient at a worker will be transmitted to the server if and only if it is sufficiently informative.
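The censoring rule can be sketched as follows. This is a hedged illustration of the "transmit only if sufficiently informative" idea; the specific criterion shown (change since the last transmitted gradient exceeding a threshold) and all names are our assumptions, not the paper's exact rule.

```python
# Sketch of communication censoring for stochastic gradients (illustration).
import numpy as np

def maybe_transmit(new_grad, last_sent, threshold):
    """Send new_grad only if it differs enough from the last transmission;
    otherwise the server keeps reusing the stale copy."""
    if last_sent is None or np.linalg.norm(new_grad - last_sent) >= threshold:
        return new_grad, True    # transmit: server updates its copy
    return last_sent, False      # censored: no communication this round

rng = np.random.default_rng(1)
last, sent_count = None, 0
for step in range(100):
    g = rng.normal(0.0, 0.01, size=5)   # nearly identical gradients
    last, transmitted = maybe_transmit(g, last, threshold=0.1)
    sent_count += transmitted

print(sent_count)  # far fewer than 100 uploads when gradients barely change
```

When successive gradients are similar, almost every round is censored, which is where the communication savings come from.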
no code implementations • 12 Dec 2013 • Weizhi Lu, Weiyu Li, Kidiyo Kpalma, Joseph Ronsin
As a typical dimensionality reduction technique, random projection can be simply implemented with linear projection, while maintaining the pairwise distances of high-dimensional data with high probability.
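The distance-preservation property can be checked numerically. The sketch below is a generic demonstration (the Gaussian projection matrix and dimensions are our choices, not the paper's construction): projecting 1000-dimensional points down to 200 dimensions with a scaled random matrix leaves pairwise distances nearly unchanged.

```python
# Sketch: random projection approximately preserves pairwise distances.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 200                      # points, original dim, reduced dim

X = rng.standard_normal((n, d))              # high-dimensional data
R = rng.standard_normal((d, k)) / np.sqrt(k) # scaled random projection matrix
Y = X @ R                                    # projected data

# Compare one pairwise distance before and after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
print(ratio)  # typically close to 1
```

The typical distortion shrinks as the reduced dimension k grows, which is the standard Johnson-Lindenstrauss trade-off between compression and distance fidelity.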