Search Results for author: Jialei Wang

Found 29 papers, 5 papers with code

Depth Anything with Any Prior

no code implementations15 May 2025 Zehan Wang, Siyu Chen, Lihe Yang, Jialei Wang, Ziang Zhang, Hengshuang Zhao, Zhou Zhao

To this end, we design a coarse-to-fine pipeline to progressively integrate the two complementary depth sources.

Depth Completion Depth Prediction +4

OmniAudio: Generating Spatial Audio from 360-Degree Video

1 code implementation21 Apr 2025 Huadai Liu, Tianyi Luo, Kaicheng Luo, Qikai Jiang, Peiwen Sun, Jialei Wang, Rongjie Huang, Qian Chen, Wen Wang, Xiangtai Li, Shiliang Zhang, Zhijie Yan, Zhou Zhao, Wei Xue

To generate spatial audio from 360-degree video, we propose a novel framework OmniAudio, which leverages self-supervised pre-training using both spatial audio data (in FOA format) and large-scale non-spatial data.

Audio Generation

FlashAudio: Rectified Flows for Fast and High-Fidelity Text-to-Audio Generation

1 code implementation16 Oct 2024 Huadai Liu, Jialei Wang, Rongjie Huang, Yang Liu, Heng Lu, Zhou Zhao, Wei Xue

To alleviate inefficient timestep allocation and the suboptimal distribution of noise, FlashAudio optimizes the time distribution of rectified flow with Bifocal Samplers and proposes immiscible flow to minimize the total distance of data-noise pairs in a batch via assignment.

Audio Generation
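The immiscible-flow idea above, pairing each data point in a batch with a noise sample so the total data-noise distance is minimized, reduces to a linear assignment problem. A minimal sketch with the Hungarian algorithm; this is an illustrative reconstruction, not the paper's code, and the function name and batch shapes are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_noise_to_data(data, noise):
    """Pair each data point in a batch with a noise sample so that the
    total squared distance over the batch is minimized (linear assignment)."""
    # cost[i, j] = ||data_i - noise_j||^2
    cost = ((data[:, None, :] - noise[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return noise[cols], cost[rows, cols].sum()

rng = np.random.default_rng(0)
data = rng.normal(size=(8, 4))
noise = rng.normal(size=(8, 4))
paired, opt_cost = assign_noise_to_data(data, noise)
naive_cost = ((data - noise) ** 2).sum()  # default index-wise pairing
```

By construction the optimal assignment never costs more than the default pairing, which is the point of the batch-wise matching.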

GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks

1 code implementation20 Sep 2024 Yu Zhang, Changhao Pan, Wenxiang Guo, RuiQi Li, Zhiyuan Zhu, Jialei Wang, Wenhao Xu, Jingyu Lu, Zhiqing Hong, Chuxin Wang, Lichao Zhang, Jinzheng He, Ziyue Jiang, Yuxin Chen, Chen Yang, Jiecheng Zhou, Xinyu Cheng, Zhou Zhao

The scarcity of high-quality, multi-task singing datasets significantly hinders the development of diverse, controllable, and personalized singing tasks: existing singing datasets suffer from low quality, limited diversity of languages and singers, a lack of multi-technique information and realistic music scores, and poor task suitability.

All Singing Voice Synthesis +2

MEDIC: Zero-shot Music Editing with Disentangled Inversion Control

no code implementations18 Jul 2024 Huadai Liu, Jialei Wang, Xiangtai Li, Rongjie Huang, Yang Liu, Jiayang Xu, Zhou Zhao

To counteract these issues, we introduce the Disentangled Inversion technique to disentangle the diffusion process into triple branches, rectifying the deviated path of the source branch caused by DDIM inversion.

Audio Generation

AudioLCM: Text-to-Audio Generation with Latent Consistency Models

2 code implementations1 Jun 2024 Huadai Liu, Rongjie Huang, Yang Liu, Hengyuan Cao, Jialei Wang, Xize Cheng, Siqi Zheng, Zhou Zhao

To overcome the convergence issue inherent in LDMs with reduced sample iterations, we propose the Guided Latent Consistency Distillation with a multi-step Ordinary Differential Equation (ODE) solver.

Audio Generation Audio Synthesis

Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization

no code implementations NeurIPS 2018 Blake Woodworth, Jialei Wang, Adam Smith, Brendan Mcmahan, Nathan Srebro

We suggest a general oracle-based framework that captures different parallel stochastic optimization settings described by a dependency graph, and derive generic lower bounds in terms of this graph.

Stochastic Optimization

Distributed Stochastic Multi-Task Learning with Graph Regularization

no code implementations11 Feb 2018 Weiran Wang, Jialei Wang, Mladen Kolar, Nathan Srebro

We propose methods for distributed graph-based multi-task learning that are based on weighted averaging of messages from other machines.

Multi-Task Learning
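The weighted message averaging described above can be sketched on a toy least-squares multi-task problem: each machine takes a local gradient step, then mixes its parameters with the other machines' messages through a graph mixing matrix. Everything below (the function name, the uniform mixing matrix `A`, the learning rate) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def mtl_round(W, X, y, A, lr=0.1):
    """One round of distributed multi-task learning: each machine takes a
    local least-squares gradient step, then forms a weighted average of
    the messages (parameter vectors) from the other machines via the
    row-stochastic mixing matrix A."""
    new = np.empty_like(W)
    for i in range(W.shape[0]):
        grad = X[i].T @ (X[i] @ W[i] - y[i]) / len(y[i])
        new[i] = W[i] - lr * grad
    return A @ new  # weighted averaging of messages

rng = np.random.default_rng(1)
m, n, d = 3, 50, 5                 # machines, samples per task, dimension
w_star = rng.normal(size=d)        # tasks share one underlying parameter
X = rng.normal(size=(m, n, d))
y = np.stack([X[i] @ w_star + 0.01 * rng.normal(size=n) for i in range(m)])
A = np.full((m, m), 1.0 / m)       # complete graph with uniform weights
W = np.zeros((m, d))
for _ in range(200):
    W = mtl_round(W, X, y, A)
```

With uniform weights this degenerates to full averaging; a sparser graph-regularized `A` would keep related tasks close without forcing them equal.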

Gradient Sparsification for Communication-Efficient Distributed Optimization

no code implementations NeurIPS 2018 Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang

Modern large scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures.

BIG-bench Machine Learning Distributed Optimization +1
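The core mechanism of gradient sparsification, dropping most gradient coordinates at random and rescaling the survivors so the estimate stays unbiased, can be sketched as follows. This is only an illustrative sketch with a fixed keep-probability; choosing those probabilities well is part of the paper's contribution and is not shown here:

```python
import numpy as np

def sparsify_gradient(g, p, rng):
    """Keep coordinate i with probability p[i] and rescale it by 1/p[i];
    the result is a mostly-zero, unbiased estimate of g (E[output] = g)
    that is cheap to communicate."""
    mask = rng.random(g.shape) < p
    return np.where(mask, g / p, 0.0)

rng = np.random.default_rng(4)
g = rng.normal(size=50)
p = np.full(50, 0.1)               # keep roughly 10% of coordinates
one_round = sparsify_gradient(g, p, rng)
# Averaging many independent sparsified gradients recovers g.
est = np.mean([sparsify_gradient(g, p, rng) for _ in range(50000)], axis=0)
```

The trade-off is variance for bandwidth: each message carries about 10% of the coordinates, while the average over rounds still converges to the true gradient.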

Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations

no code implementations21 Jun 2017 Jialei Wang, Tong Zhang

We present novel minibatch stochastic optimization methods for empirical risk minimization problems; the methods efficiently leverage variance-reduced first-order and sub-sampled higher-order information to accelerate convergence.

Stochastic Optimization
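The variance-reduced first-order part can be sketched as a generic minibatch-SVRG-style loop on least squares. This is a sketch under stated assumptions, not the paper's exact method (which additionally exploits sub-sampled higher-order information):

```python
import numpy as np

def minibatch_svrg(X, y, lr=0.1, epochs=20, batch=10, seed=0):
    """Minibatch SVRG for least squares: stochastic gradients are
    variance-reduced around a periodically refreshed full-gradient
    anchor, so larger step sizes stay stable."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        anchor = w.copy()
        full_grad = X.T @ (X @ anchor - y) / n        # refreshed anchor
        for _ in range(n // batch):
            idx = rng.integers(0, n, size=batch)
            Xi, yi = X[idx], y[idx]
            g = Xi.T @ (Xi @ w - yi) / batch          # minibatch grad at w
            g_anchor = Xi.T @ (Xi @ anchor - yi) / batch
            w -= lr * (g - g_anchor + full_grad)      # variance-reduced step
    return w

rng = np.random.default_rng(2)
n, d = 200, 10
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star                     # noiseless for a clean check
w_hat = minibatch_svrg(X, y)
```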

Efficient coordinate-wise leading eigenvector computation

no code implementations25 Feb 2017 Jialei Wang, Weiran Wang, Dan Garber, Nathan Srebro

We develop and analyze efficient "coordinate-wise" methods for finding the leading eigenvector, where each step involves only a vector-vector product.

regression
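The coordinate-wise update, where each step touches only one coordinate via a single vector-vector product, can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the eigenvalue estimate is recomputed naively with a Rayleigh quotient, whereas an efficient implementation would maintain it incrementally so each step really costs one inner product:

```python
import numpy as np

def coordinate_eig_step(A, v, i, lam):
    """Update only coordinate i of the eigenvector estimate using a
    single vector-vector product (row A[i] with v), scaled by the
    current eigenvalue estimate, then renormalize."""
    v = v.copy()
    v[i] = (A[i] @ v) / lam
    return v / np.linalg.norm(v)

rng = np.random.default_rng(3)
d = 20
B = rng.normal(size=(d, d))
A = B @ B.T                        # symmetric PSD test matrix
v = np.ones(d) / np.sqrt(d)
for t in range(200 * d):           # 200 cyclic sweeps over coordinates
    lam = v @ A @ v                # Rayleigh estimate (recomputed naively
                                   # here; kept incrementally in practice)
    v = coordinate_eig_step(A, v, t % d, lam)
lam_max = np.linalg.eigvalsh(A)[-1]
```

On this example the Rayleigh quotient of the final iterate approaches the leading eigenvalue, mirroring what a full power-method sweep would achieve at a fraction of the per-step cost.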

Stochastic Canonical Correlation Analysis

no code implementations21 Feb 2017 Chao Gao, Dan Garber, Nathan Srebro, Jialei Wang, Weiran Wang

We study the sample complexity of canonical correlation analysis (CCA), i.e., the number of samples needed to estimate the population canonical correlation and directions up to arbitrarily small error.

Stochastic Optimization

Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox

no code implementations21 Feb 2017 Jialei Wang, Weiran Wang, Nathan Srebro

We present and analyze an approach for distributed stochastic optimization which is statistically optimal and achieves near-linear speedups (up to logarithmic factors).

Stochastic Optimization

Rate Optimal Estimation and Confidence Intervals for High-dimensional Regression with Missing Covariates

no code implementations9 Feb 2017 Yining Wang, Jialei Wang, Sivaraman Balakrishnan, Aarti Singh

We consider the problems of estimation and of constructing component-wise confidence intervals in a sparse high-dimensional linear regression model when some covariates of the design matrix are missing completely at random.

Missing Values regression

Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data

no code implementations10 Oct 2016 Jialei Wang, Jason D. Lee, Mehrdad Mahdavi, Mladen Kolar, Nathan Srebro

Sketching techniques have become popular for scaling up machine learning algorithms by reducing the sample size or dimensionality of massive data sets, while still maintaining the statistical power of big data.

Warm Starting Bayesian Optimization

no code implementations11 Aug 2016 Matthias Poloczek, Jialei Wang, Peter I. Frazier

We develop a framework for warm-starting Bayesian optimization, that reduces the solution time required to solve an optimization problem that is one in a sequence of related problems.

Bayesian Optimization

Efficient Distributed Learning with Sparsity

no code implementations ICML 2017 Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang

We propose a novel, efficient approach for distributed sparse learning in high-dimensions, where observations are randomly partitioned across machines.

General Classification regression +1

A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization

no code implementations13 Apr 2016 Shun Zheng, Jialei Wang, Fen Xia, Wei Xu, Tong Zhang

In modern large-scale machine learning applications, the training data are often partitioned and stored on multiple machines.

Efficient Globally Convergent Stochastic Optimization for Canonical Correlation Analysis

no code implementations NeurIPS 2016 Weiran Wang, Jialei Wang, Dan Garber, Nathan Srebro

We study the stochastic optimization of canonical correlation analysis (CCA), whose objective is nonconvex and does not decouple over training samples.

Stochastic Optimization

Distributed Multi-Task Learning with Shared Representation

no code implementations7 Mar 2016 Jialei Wang, Mladen Kolar, Nathan Srebro

We study the problem of distributed multi-task learning with shared representation, where each machine aims to learn a separate but related task in an unknown shared low-dimensional subspace, i.e., when the predictor matrix has low rank.

Multi-Task Learning

Multi-Information Source Optimization

no code implementations NeurIPS 2017 Matthias Poloczek, Jialei Wang, Peter I. Frazier

We consider Bayesian optimization of an expensive-to-evaluate black-box objective function, where we also have access to cheaper approximations of the objective.

Bayesian Optimization Reinforcement Learning

Parallel Bayesian Global Optimization of Expensive Functions

no code implementations16 Feb 2016 Jialei Wang, Scott C. Clark, Eric Liu, Peter I. Frazier

We also show that the resulting one-step Bayes optimal algorithm for parallel global optimization finds high-quality solutions with fewer evaluations than a heuristic based on approximately maximizing the q-EI.

Bayesian Optimization global-optimization
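The q-EI quantity mentioned above, the expected improvement of a whole batch of q points, is commonly estimated by Monte Carlo over the joint Gaussian posterior of the batch. A minimal sketch under a minimization convention; the posterior mean and covariance below are made-up illustrative values, not the paper's setup:

```python
import numpy as np

def monte_carlo_qei(mu, cov, f_best, n_samples=20000, seed=0):
    """Monte Carlo estimate of the multipoint expected improvement
    (q-EI) of a batch of q points whose objective values have a joint
    Gaussian posterior N(mu, cov); minimization convention."""
    rng = np.random.default_rng(seed)
    Y = rng.multivariate_normal(mu, cov, size=n_samples)  # joint samples
    improvement = np.maximum(f_best - Y.min(axis=1), 0.0)
    return improvement.mean()

# Hypothetical posterior for a batch of q = 2 correlated points.
mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
qei = monte_carlo_qei(mu, cov, f_best=0.0)
```

Even with the posterior mean equal to the incumbent `f_best`, the batch has positive q-EI because the minimum over the batch dips below it with high probability.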

Reducing Runtime by Recycling Samples

no code implementations5 Feb 2016 Jialei Wang, Hai Wang, Nathan Srebro

Contrary to the situation with stochastic gradient descent, we argue that stochastic methods with variance reduction, such as SDCA, SAG, or SVRG and their variants, can benefit from reusing previously used samples instead of fresh samples, even when fresh samples are available.

Distributed Multitask Learning

no code implementations2 Oct 2015 Jialei Wang, Mladen Kolar, Nathan Srebro

We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.

Multi-Task Learning

Bayesian optimization for materials design

1 code implementation3 Jun 2015 Peter I. Frazier, Jialei Wang

We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets.

Bayesian Optimization regression

Inference for Sparse Conditional Precision Matrices

no code implementations24 Dec 2014 Jialei Wang, Mladen Kolar

Given observations of a random vector $(X, Z)$, where $X$ is a high-dimensional vector and $Z$ is a low-dimensional index variable, we study the problem of estimating the conditional inverse covariance matrix $\Omega(z) = (E[(X-E[X \mid Z])(X-E[X \mid Z])^T \mid Z=z])^{-1}$ under the assumption that the set of non-zero elements is small and does not depend on the index variable.
