no code implementations • 15 May 2025 • Zehan Wang, Siyu Chen, Lihe Yang, Jialei Wang, Ziang Zhang, Hengshuang Zhao, Zhou Zhao
To this end, we design a coarse-to-fine pipeline to progressively integrate the two complementary depth sources.
1 code implementation • 21 Apr 2025 • Huadai Liu, Tianyi Luo, Kaicheng Luo, Qikai Jiang, Peiwen Sun, Jialei Wang, Rongjie Huang, Qian Chen, Wen Wang, Xiangtai Li, Shiliang Zhang, Zhijie Yan, Zhou Zhao, Wei Xue
To generate spatial audio from 360-degree video, we propose a novel framework OmniAudio, which leverages self-supervised pre-training using both spatial audio data (in FOA format) and large-scale non-spatial data.
1 code implementation • 16 Oct 2024 • Huadai Liu, Jialei Wang, Rongjie Huang, Yang Liu, Heng Lu, Zhou Zhao, Wei Xue
To alleviate inefficient timestep allocation and a suboptimal noise distribution, FlashAudio optimizes the time distribution of rectified flow with Bifocal Samplers and proposes immiscible flow to minimize the total distance of data-noise pairs in a batch via assignment.
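A minimal sketch of the batch-level assignment idea (our NumPy/SciPy illustration, not the FlashAudio code): pair each data sample with one noise sample so that the total pairwise distance in the batch is minimized, using the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def assign_noise_to_data(data, noise):
    """data, noise: arrays of shape (batch, dim).
    Returns the noise batch reordered so that sample i is paired with data i
    and the total squared distance over the batch is minimized."""
    cost = cdist(data, noise, metric="sqeuclidean")   # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)          # optimal assignment
    return noise[cols]

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))       # a toy batch of data latents
eps = rng.normal(size=(8, 16))     # i.i.d. Gaussian noise
eps_matched = assign_noise_to_data(x, eps)
```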
1 code implementation • 20 Sep 2024 • Yu Zhang, Changhao Pan, Wenxiang Guo, RuiQi Li, Zhiyuan Zhu, Jialei Wang, Wenhao Xu, Jingyu Lu, Zhiqing Hong, Chuxin Wang, Lichao Zhang, Jinzheng He, Ziyue Jiang, Yuxin Chen, Chen Yang, Jiecheng Zhou, Xinyu Cheng, Zhou Zhao
The scarcity of high-quality, multi-task singing datasets significantly hinders the development of diverse, controllable, and personalized singing tasks, as existing singing datasets suffer from low quality, limited diversity of languages and singers, the absence of multi-technique information and realistic music scores, and poor task suitability.
no code implementations • 18 Jul 2024 • Huadai Liu, Jialei Wang, Xiangtai Li, Rongjie Huang, Yang Liu, Jiayang Xu, Zhou Zhao
To counteract these issues, we introduce the Disentangled Inversion technique to disentangle the diffusion process into triple branches, rectifying the deviated path of the source branch caused by DDIM inversion.
2 code implementations • 1 Jun 2024 • Huadai Liu, Rongjie Huang, Yang Liu, Hengyuan Cao, Jialei Wang, Xize Cheng, Siqi Zheng, Zhou Zhao
To overcome the convergence issue inherent in LDMs with reduced sample iterations, we propose the Guided Latent Consistency Distillation with a multi-step Ordinary Differential Equation (ODE) solver.
no code implementations • NeurIPS 2018 • Blake Woodworth, Jialei Wang, Adam Smith, Brendan Mcmahan, Nathan Srebro
We suggest a general oracle-based framework that captures different parallel stochastic optimization settings described by a dependency graph, and derive generic lower bounds in terms of this graph.
no code implementations • 11 Feb 2018 • Weiran Wang, Jialei Wang, Mladen Kolar, Nathan Srebro
We propose methods for distributed graph-based multi-task learning that are based on weighted averaging of messages from other machines.
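A hypothetical NumPy sketch of the message-averaging idea: each machine holds a local parameter vector and replaces it with a weighted average of the vectors received from other machines, with weights reflecting task similarity. The weight matrix and update rule below are illustrative, not the paper's exact scheme.

```python
import numpy as np

def weighted_average_step(params, W):
    """params: (num_machines, dim) local parameter vectors.
    W: (num_machines, num_machines) row-stochastic task-similarity weights.
    Each machine's new vector is a weighted average of the received messages."""
    return W @ params

# toy example with three machines/tasks
rng = np.random.default_rng(0)
params = rng.normal(size=(3, 5))
W = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6]])
params = weighted_average_step(params, W)
```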
no code implementations • NeurIPS 2018 • Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang
Modern large-scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures.
no code implementations • 21 Jun 2017 • Jialei Wang, Tong Zhang
We present novel minibatch stochastic optimization methods for empirical risk minimization problems; the methods efficiently leverage variance-reduced first-order and sub-sampled higher-order information to accelerate convergence.
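A hypothetical sketch of one such update for ridge-regularized least squares: an SVRG-style variance-reduced minibatch gradient, preconditioned by a Hessian built from a separate sub-sample. The function and parameter names are illustrative; this is not the paper's exact algorithm.

```python
import numpy as np

def vr_subsampled_newton_step(w, w_ref, full_grad_ref, X, y, lam,
                              batch, hess_batch, rng):
    """One illustrative step: variance-reduced minibatch gradient (SVRG-style)
    combined with a sub-sampled Hessian, for ridge-regularized least squares.
    w_ref and full_grad_ref are refreshed periodically, as in SVRG."""
    def grad(w_, idx):
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ w_ - yb) / len(idx) + lam * w_

    b = rng.choice(len(y), size=batch, replace=False)
    g = grad(w, b) - grad(w_ref, b) + full_grad_ref        # variance-reduced gradient
    h = rng.choice(len(y), size=hess_batch, replace=False)
    H = X[h].T @ X[h] / hess_batch + lam * np.eye(X.shape[1])  # sub-sampled Hessian
    return w - np.linalg.solve(H, g)                        # approximate Newton step
```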
no code implementations • ICML 2017 • Jialei Wang, Lin Xiao
We consider empirical risk minimization of linear predictors with convex loss functions.
no code implementations • 25 Feb 2017 • Jialei Wang, Weiran Wang, Dan Garber, Nathan Srebro
We develop and analyze efficient "coordinate-wise" methods for finding the leading eigenvector, where each step involves only a vector-vector product.
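A minimal NumPy sketch of the coordinate-wise idea for a symmetric matrix A: each step updates a single coordinate of the iterate, and the dominant cost is one vector-vector product (one row of A against the current iterate). The coordinate-selection rule here is random and purely illustrative, not the exact method analyzed in the paper.

```python
import numpy as np

def coordinate_power_method(A, num_steps=5000, seed=0):
    """Approximate the leading eigenvector of a symmetric matrix A by updating
    one coordinate per step; each step costs one vector-vector product."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    for _ in range(num_steps):
        i = rng.integers(n)
        w[i] = A[i] @ w          # one row of A times the current iterate
        w /= np.linalg.norm(w)
    return w
```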
no code implementations • 21 Feb 2017 • Chao Gao, Dan Garber, Nathan Srebro, Jialei Wang, Weiran Wang
We study the sample complexity of canonical correlation analysis (CCA), i.e., the number of samples needed to estimate the population canonical correlation and directions up to arbitrarily small error.
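For reference, the population CCA problem whose estimation error is being bounded can be written (in our notation, not necessarily the paper's) as

$$\max_{u, v}\; u^\top \Sigma_{xy} v \quad \text{s.t.} \quad u^\top \Sigma_{xx} u = 1,\;\; v^\top \Sigma_{yy} v = 1,$$

where $\Sigma_{xx}$ and $\Sigma_{yy}$ are the covariance matrices of the two views and $\Sigma_{xy}$ their cross-covariance; the optimal value is the population canonical correlation and $(u, v)$ are the canonical directions.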
no code implementations • 21 Feb 2017 • Jialei Wang, Weiran Wang, Nathan Srebro
We present and analyze an approach for distributed stochastic optimization which is statistically optimal and achieves near-linear speedups (up to logarithmic factors).
no code implementations • 9 Feb 2017 • Yining Wang, Jialei Wang, Sivaraman Balakrishnan, Aarti Singh
We consider the problems of estimation and of constructing component-wise confidence intervals in a sparse high-dimensional linear regression model when some covariates of the design matrix are missing completely at random.
no code implementations • 10 Oct 2016 • Jialei Wang, Jason D. Lee, Mehrdad Mahdavi, Mladen Kolar, Nathan Srebro
Sketching techniques have become popular for scaling up machine learning algorithms by reducing the sample size or dimensionality of massive data sets, while still maintaining the statistical power of big data.
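A minimal NumPy illustration (ours, not the paper's) of the generic idea: apply a random Gaussian sketch to a regression problem to reduce its sample size before solving it.

```python
import numpy as np

def gaussian_sketch_least_squares(X, y, sketch_size, seed=0):
    """Solve least squares on the sketched problem (S X, S y), where S is a
    Gaussian sketch reducing the effective sample count from n to sketch_size."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    S = rng.normal(size=(sketch_size, n)) / np.sqrt(sketch_size)
    return np.linalg.lstsq(S @ X, S @ y, rcond=None)[0]
```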
no code implementations • 11 Aug 2016 • Matthias Poloczek, Jialei Wang, Peter I. Frazier
We develop a framework for warm-starting Bayesian optimization that reduces the solution time required to solve an optimization problem that is one in a sequence of related problems.
no code implementations • ICML 2017 • Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang
We propose a novel, efficient approach for distributed sparse learning in high dimensions, where observations are randomly partitioned across machines.
no code implementations • 13 Apr 2016 • Shun Zheng, Jialei Wang, Fen Xia, Wei Xu, Tong Zhang
In modern large-scale machine learning applications, the training data are often partitioned and stored on multiple machines.
no code implementations • CVPR 2016 • Jialei Wang, Peder A. Olsen, Andrew R. Conn, Aurelie C. Lozano
We consider the problem of removing and replacing clouds in satellite image sequences, which has a wide range of applications in remote sensing.
no code implementations • NeurIPS 2016 • Weiran Wang, Jialei Wang, Dan Garber, Nathan Srebro
We study the stochastic optimization of canonical correlation analysis (CCA), whose objective is nonconvex and does not decouple over training samples.
no code implementations • 7 Mar 2016 • Jialei Wang, Mladen Kolar, Nathan Srebro
We study the problem of distributed multi-task learning with shared representation, where each machine aims to learn a separate but related task in an unknown shared low-dimensional subspace, i.e., when the predictor matrix has low rank.
no code implementations • NeurIPS 2017 • Matthias Poloczek, Jialei Wang, Peter I. Frazier
We consider Bayesian optimization of an expensive-to-evaluate black-box objective function, where we also have access to cheaper approximations of the objective.
no code implementations • 16 Feb 2016 • Jialei Wang, Scott C. Clark, Eric Liu, Peter I. Frazier
We also show that the resulting one-step Bayes optimal algorithm for parallel global optimization finds high-quality solutions with fewer evaluations than a heuristic based on approximately maximizing the q-EI.
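A hypothetical Monte Carlo estimate of the parallel expected improvement (q-EI) of a batch of candidate points under a Gaussian process posterior; the GP interface (joint posterior mean and covariance at the candidates) is assumed, and this is not the paper's implementation.

```python
import numpy as np

def monte_carlo_q_ei(mu, cov, best_f, num_samples=10_000, seed=0):
    """Estimate q-EI = E[max(best_f - min_j Y_j, 0)] for minimization,
    where Y ~ N(mu, cov) is the joint GP posterior at the q candidate points."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(len(mu)))   # jitter for stability
    Y = mu + rng.normal(size=(num_samples, len(mu))) @ L.T  # joint posterior samples
    improvement = np.maximum(best_f - Y.min(axis=1), 0.0)
    return improvement.mean()
```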
no code implementations • 5 Feb 2016 • Jialei Wang, Hai Wang, Nathan Srebro
Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it could be beneficial to reuse previously used samples instead of fresh samples, even when fresh samples are available.
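For context, a plain SVRG epoch in NumPy; "reusing samples" here simply means the inner-loop indices are drawn from the same fixed training set rather than from a stream of fresh samples. This is an illustrative sketch, not the paper's analysis.

```python
import numpy as np

def svrg_epoch(w, grad_i, n, step, inner_steps, rng):
    """One SVRG epoch: a full gradient at the reference point, then inner steps
    whose indices are re-drawn from the same fixed training set (0..n-1).
    grad_i(w, i) returns the gradient of the i-th loss at w."""
    w_ref = w.copy()
    full_grad = np.mean([grad_i(w_ref, i) for i in range(n)], axis=0)
    for _ in range(inner_steps):
        i = rng.integers(n)                       # a previously seen sample is reused
        g = grad_i(w, i) - grad_i(w_ref, i) + full_grad
        w = w - step * g
    return w
```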
no code implementations • 2 Oct 2015 • Jialei Wang, Mladen Kolar, Nathan Srebro
We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.
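In standard notation (ours, not necessarily the paper's exact estimator), the debiased lasso on machine $k$ with local data $(X_k, y_k)$ of size $n$, and its average across the $m$ machines, are

$$\hat\beta_k^{d} = \hat\beta_k + \tfrac{1}{n}\,\hat\Theta_k X_k^\top \big(y_k - X_k \hat\beta_k\big), \qquad \bar\beta = \tfrac{1}{m}\sum_{k=1}^{m} \hat\beta_k^{d},$$

where $\hat\beta_k$ is the local lasso solution and $\hat\Theta_k$ an estimate of the inverse covariance of the design; one round of communication of the $\hat\beta_k^{d}$ suffices to form $\bar\beta$.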
1 code implementation • 3 Jun 2015 • Peter I. Frazier, Jialei Wang
We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets.
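A minimal Bayesian optimization loop in the spirit of the tutorial, using scikit-learn's Gaussian process and an expected-improvement acquisition over a 1-D candidate grid; the helper names and toy objective are illustrative, not the tutorial's code.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best_f):
    """EI for minimization at points with posterior mean mu and std sigma."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best_f - mu) / sigma
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, bounds=(0.0, 1.0), num_init=3, num_iters=15, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(num_init, 1))           # initial design
    y = np.array([objective(x[0]) for x in X])
    grid = np.linspace(*bounds, 500).reshape(-1, 1)        # candidate points
    for _ in range(num_iters):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))
    return X[np.argmin(y)], y.min()

# toy usage: minimize a noiseless 1-D function
best_x, best_y = bayes_opt(lambda x: (x - 0.3) ** 2 + 0.1 * np.sin(20 * x))
```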
no code implementations • 24 Dec 2014 • Jialei Wang, Mladen Kolar
Given i.i.d. observations of a random vector $(X, Z)$, where $X$ is a high-dimensional vector and $Z$ is a low-dimensional index variable, we study the problem of estimating the conditional inverse covariance matrix $\Omega(z) = (E[(X-E[X \mid Z])(X-E[X \mid Z])^T \mid Z=z])^{-1}$ under the assumption that the set of non-zero elements is small and does not depend on the index variable.