1 code implementation • CVPR 2024 • Shuchen Xue, Zhaoqiang Liu, Fei Chen, Shifeng Zhang, Tianyang Hu, Enze Xie, Zhenguo Li

While this is a significant development, most sampling methods still employ uniform time steps, which is not optimal when using a small number of steps.
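To make the contrast concrete, here is a minimal sketch of a uniform step schedule versus a non-uniform one in the style of the EDM schedule of Karras et al., which clusters steps at low noise levels. The parameter values (`t_min`, `t_max`, `rho`) are illustrative defaults, not taken from this paper:

```python
import numpy as np

def uniform_steps(n, t_min=0.002, t_max=80.0):
    # Evenly spaced noise levels; often suboptimal when n is small.
    return np.linspace(t_max, t_min, n)

def edm_steps(n, t_min=0.002, t_max=80.0, rho=7.0):
    # EDM-style schedule: linear in t**(1/rho), so steps cluster at low noise.
    i = np.arange(n)
    root = t_max ** (1 / rho) + i / (n - 1) * (t_min ** (1 / rho) - t_max ** (1 / rho))
    return root ** rho
```

Both schedules share the same endpoints; they differ only in how the intermediate noise levels are spaced.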

no code implementations • 23 Feb 2024 • Jiajun Ma, Shuchen Xue, Tianyang Hu, Wenjia Wang, Zhaoqiang Liu, Zhenguo Li, Zhi-Ming Ma, Kenji Kawaguchi

Surprisingly, the improvement persists when we increase the number of sampling steps and can even surpass the best result from EDM-2 (1.58) with only 39 NFEs (1.57).

no code implementations • 21 Feb 2024 • Yihang Gao, Chuanyang Zheng, Enze Xie, Han Shi, Tianyang Hu, Yu Li, Michael K. Ng, Zhenguo Li, Zhaoqiang Liu

Previous works attempt to explain this from the perspective of expressive power, showing that standard transformers are capable of performing some algorithms.

no code implementations • 16 Sep 2023 • Junren Chen, Shuai Huang, Michael K. Ng, Zhaoqiang Liu

The problem of recovering a signal $\boldsymbol{x} \in \mathbb{R}^n$ from a quadratic system $\{y_i=\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x},\ i=1,\ldots, m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in applications such as unassigned distance geometry and sub-wavelength imaging.
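As a hedged illustration of this measurement model (not the paper's recovery method), generating a quadratic system with generic Gaussian matrices, which are full-rank almost surely, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 20
x = rng.standard_normal(n)
A = rng.standard_normal((m, n, n))     # generic Gaussian matrices are full-rank a.s.
y = np.einsum('i,kij,j->k', x, A, x)   # y_k = x^T A_k x

# The measurements are invariant under x -> -x,
# so x is identifiable at best up to a global sign.
assert np.allclose(np.einsum('i,kij,j->k', -x, A, -x), y)
```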

1 code implementation • ICCV 2023 • Enze Xie, Lewei Yao, Han Shi, Zhili Liu, Daquan Zhou, Zhaoqiang Liu, Jiawei Li, Zhenguo Li

This paper proposes DiffFit, a parameter-efficient strategy for fine-tuning large pre-trained diffusion models that enables fast adaptation to new domains.

1 code implementation • ICCV 2023 • Yuanfeng Ji, Zhe Chen, Enze Xie, Lanqing Hong, Xihui Liu, Zhaoqiang Liu, Tong Lu, Zhenguo Li, Ping Luo

We propose a simple, efficient, yet powerful framework for dense visual predictions based on the conditional diffusion pipeline.

Ranked #3 on Monocular Depth Estimation on SUN-RGBD

1 code implementation • 11 Oct 2022 • Zhaoqiang Liu, Xinshao Wang, Jiulong Liu

In this paper, we study phase retrieval under model misspecification and generative priors.

no code implementations • 21 Sep 2022 • Zhaoqiang Liu, Jun Han

We show that when there is no representation error and the sensing vectors are Gaussian, roughly $O(k \log L)$ samples suffice to ensure that a PGD algorithm converges linearly to a point achieving the optimal statistical rate using arbitrary initialization.
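A generic PGD loop can be sketched as follows. Projecting onto the range of a generative model requires an inner optimization, so this sketch substitutes a simple hard-thresholding projection (i.e., sparse recovery) as a stand-in; it is not the paper's algorithm:

```python
import numpy as np

def pgd_sparse(A, y, k, step, iters=500):
    # Projected gradient descent for min ||Ax - y||^2. The hard-thresholding
    # projection below is a simple stand-in for the paper's projection onto
    # the range of a generative model.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))  # gradient step on the LS loss
        small = np.argsort(np.abs(x))[:-k]  # all but the k largest entries
        x[small] = 0.0                      # project onto k-sparse vectors
    return x
```

With Gaussian measurements and enough samples, this iteration converges linearly from an arbitrary (here, zero) initialization, mirroring the guarantee described above.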

no code implementations • CVPR 2022 • Jiulong Liu, Zhaoqiang Liu

In this paper, we aim to estimate the direction of an underlying signal from its nonlinear observations following the semi-parametric single index model (SIM).
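Under a Gaussian design, a classical first-moment (Brillinger-type) estimator already recovers the direction of the signal; the sketch below uses `tanh` as an example link function and is meant only to illustrate the SIM setting, not this paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 20000
x = rng.standard_normal(n)
x /= np.linalg.norm(x)      # only the direction of x is identifiable in a SIM
A = rng.standard_normal((m, n))
y = np.tanh(A @ x)          # f(t) = tanh(t) is just an example nonlinearity
xh = A.T @ y / m            # first-moment (Brillinger-type) estimate
xh /= np.linalg.norm(xh)    # normalize: we only estimate the direction
```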

1 code implementation • ICLR 2022 • Zhaoqiang Liu, Jiulong Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

We perform experiments on various image datasets for spiked matrix and phase retrieval models, and illustrate the performance gains of our method over the classic power method and the truncated power method devised for sparse principal component analysis.

no code implementations • 8 Aug 2021 • Zhaoqiang Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett

In 1-bit compressive sensing, each measurement is quantized to a single bit, namely the sign of a linear function of an unknown vector, and the goal is to accurately recover the vector.
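The 1-bit measurement model can be sketched in a few lines. The thresholded back-projection below is a simple illustrative estimator under these Gaussian assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 100, 500, 5
x = np.zeros(n)
x[:s] = rng.uniform(0.5, 1.0, s)   # an s-sparse signal (support chosen for clarity)
x /= np.linalg.norm(x)             # the norm of x is lost under 1-bit quantization
A = rng.standard_normal((m, n))
y = np.sign(A @ x)                 # each measurement keeps only one bit: the sign

# A simple estimator: back-project and keep the s largest entries.
bp = A.T @ y
keep = np.argsort(np.abs(bp))[-s:]
est = np.zeros(n)
est[keep] = bp[keep]
est /= np.linalg.norm(est)
```

Because the signs discard all magnitude information, accuracy is measured by the correlation between the estimated and true directions.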

1 code implementation • NeurIPS 2021 • Zhaoqiang Liu, Subhroshekhar Ghosh, Jonathan Scarlett

We also adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional, matching an information-theoretic lower bound.

no code implementations • NeurIPS 2020 • Zhaoqiang Liu, Jonathan Scarlett

We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models, such as linear, logistic, 1-bit, and other quantized models.

1 code implementation • ICML 2020 • Zhaoqiang Liu, Selwyn Gomes, Avtansh Tiwari, Jonathan Scarlett

The goal of standard 1-bit compressive sensing is to accurately recover an unknown sparse vector from binary-valued measurements, each indicating the sign of a linear function of the vector.

no code implementations • NeurIPS Workshop Deep_Invers 2019 • Zhaoqiang Liu, Jonathan Scarlett

The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis.
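One standard recovery routine under the sparsity assumption is orthogonal matching pursuit; the sketch below is a generic textbook baseline for this setting, not the method of the paper:

```python
import numpy as np

def omp(A, y, s):
    # Orthogonal matching pursuit: greedily add the column most correlated
    # with the residual, then refit by least squares on the chosen support.
    r, support = y.astype(float).copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

With noiseless Gaussian measurements and enough samples relative to the sparsity level, OMP recovers the signal exactly.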

no code implementations • 28 Aug 2019 • Zhaoqiang Liu, Jonathan Scarlett

It has recently been shown that for compressive sensing, significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably chosen generative model.

no code implementations • 23 Oct 2018 • Zhaoqiang Liu

Nonnegative matrix factorization (NMF) has been widely used in machine learning and signal processing because of its non-subtractive, parts-based property, which enhances interpretability.

no code implementations • 30 Mar 2017 • Zhaoqiang Liu, Vincent Y. F. Tan

These results provide intuition for the informativeness of $k$-means (with and without dimensionality reduction) as an algorithm for learning mixture models.

1 code implementation • 27 Dec 2016 • Zhaoqiang Liu, Vincent Y. F. Tan

We propose a geometric assumption on nonnegative data matrices such that under this assumption, we are able to provide upper bounds (both deterministic and probabilistic) on the relative error of nonnegative matrix factorization (NMF).
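For reference, the quantity being bounded is the relative Frobenius error of an NMF fit. The sketch below uses the standard Lee-Seung multiplicative updates on a synthetic low-rank nonnegative matrix; it illustrates the error metric, not the paper's geometric assumption:

```python
import numpy as np

def nmf_mu(V, k, iters=500, eps=1e-9, seed=0):
    # Lee-Seung multiplicative updates for V ~ W H with W, H >= 0;
    # each update keeps the factors nonnegative and does not increase ||V - WH||_F.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# An exactly rank-5 nonnegative matrix, so near-zero relative error is achievable.
rng = np.random.default_rng(1)
V = rng.random((20, 5)) @ rng.random((5, 30))
W, H = nmf_mu(V, 5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```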

Papers With Code is a free resource with all data licensed under CC-BY-SA.