Search Results for author: Qing Qu

Found 40 papers, 20 papers with code

Sim2Real in Reconstructive Spectroscopy: Deep Learning with Augmented Device-Informed Data Simulation

1 code implementation19 Mar 2024 Jiyi Chen, Pengyu Li, Yutong Wang, Pei-Cheng Ku, Qing Qu

This work proposes a deep learning (DL)-based framework, namely Sim2Real, for spectral signal reconstruction in reconstructive spectroscopy, focusing on efficient data sampling and fast inference.

Data Augmentation

Decoupled Data Consistency with Diffusion Purification for Image Restoration

no code implementations 10 Mar 2024 Xiang Li, Soo Min Kwon, Ismail R. Alkhouri, Saiprasad Ravishankar, Qing Qu

To solve image restoration problems, many existing techniques achieve data consistency by incorporating additional likelihood gradient steps into the reverse sampling process of diffusion models.

Deblurring, Image Denoising +2

Analysis of Deep Image Prior and Exploiting Self-Guidance for Image Reconstruction

no code implementations 6 Feb 2024 Shijun Liang, Evan Bell, Qing Qu, Rongrong Wang, Saiprasad Ravishankar

In this work, we first provide an analysis of how DIP recovers information from undersampled imaging measurements by analyzing the training dynamics of the underlying networks in the kernel regime for different architectures.

Image Inpainting, Image Reconstruction +1

Improving Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architectures

no code implementations 14 Dec 2023 Huijie Zhang, Yifu Lu, Ismail Alkhouri, Saiprasad Ravishankar, Dogyoon Song, Qing Qu

This is due to the necessity of tracking extensive forward and reverse diffusion trajectories, and employing a large model with numerous parameters across multiple timesteps (i.e., noise levels).

Understanding Deep Representation Learning via Layerwise Feature Compression and Discrimination

1 code implementation 6 Nov 2023 Peng Wang, Xiao Li, Can Yaras, Zhihui Zhu, Laura Balzano, Wei Hu, Qing Qu

To the best of our knowledge, this is the first quantitative characterization of feature evolution in hierarchical representations of deep linear networks.

Feature Compression, Multi-class Classification +2

Neural Collapse in Multi-label Learning with Pick-all-label Loss

1 code implementation 24 Oct 2023 Pengyu Li, Yutong Wang, Xiao Li, Qing Qu

We study deep neural networks for the multi-label classification (MLab) task through the lens of neural collapse (NC).

Multi-class Classification, Multi-Label Classification +2

Generalized Neural Collapse for a Large Number of Classes

no code implementations 9 Oct 2023 Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin Mixon, Chong You, Zhihui Zhu

However, most existing empirical and theoretical studies of neural collapse focus on the case where the number of classes is small relative to the dimension of the feature space.

Face Recognition, Retrieval

The Emergence of Reproducibility and Consistency in Diffusion Models

no code implementations 8 Oct 2023 Huijie Zhang, Jinfan Zhou, Yifu Lu, Minzhe Guo, Peng Wang, Liyue Shen, Qing Qu

In this work, we investigate an intriguing and prevalent phenomenon of diffusion models, which we term "consistent model reproducibility": given the same starting noise input and a deterministic sampler, different diffusion models often yield remarkably similar outputs.

Image Generation, Memorization

Investigating the Catastrophic Forgetting in Multimodal Large Language Models

no code implementations 19 Sep 2023 Yuexiang Zhai, Shengbang Tong, Xiao Li, Mu Cai, Qing Qu, Yong Jae Lee, Yi Ma

However, catastrophic forgetting, a notorious phenomenon in which a fine-tuned model fails to retain performance comparable to the pre-trained model, remains an inherent problem in multimodal LLMs (MLLMs).

Image Classification, Language Modelling +1

Robust Physics-based Deep MRI Reconstruction Via Diffusion Purification

1 code implementation 11 Sep 2023 Ismail Alkhouri, Shijun Liang, Rongrong Wang, Qing Qu, Saiprasad Ravishankar

In particular, we present a robustification strategy that improves the resilience of DL-based MRI reconstruction methods by utilizing pretrained diffusion models as noise purifiers.

Adversarial Defense, MRI Reconstruction

Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency

1 code implementation 16 Jul 2023 Bowen Song, Soo Min Kwon, Zecheng Zhang, Xinyu Hu, Qing Qu, Liyue Shen

However, training diffusion models in the pixel space is both data-intensive and computationally demanding, which restricts their applicability as priors for high-dimensional real-world data such as medical images.

The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks

1 code implementation 1 Jun 2023 Can Yaras, Peng Wang, Wei Hu, Zhihui Zhu, Laura Balzano, Qing Qu

Second, it allows us to better understand deep representation learning by elucidating the linear progressive separation and concentration of representations from shallow to deep layers.

Representation Learning

Principled and Efficient Transfer Learning of Deep Models via Neural Collapse

no code implementations 23 Dec 2022 Xiao Li, Sheng Liu, Jinxin Zhou, Xinyu Lu, Carlos Fernandez-Granda, Zhihui Zhu, Qing Qu

As model size continues to grow and access to labeled training data remains limited, transfer learning has become a popular approach in many scientific and engineering fields.

Data Augmentation, Self-Supervised Learning +1

Are All Losses Created Equal: A Neural Collapse Perspective

no code implementations 4 Oct 2022 Jinxin Zhou, Chong You, Xiao Li, Kangning Liu, Sheng Liu, Qing Qu, Zhihui Zhu

We extend such results and show through global solution and landscape analyses that a broad family of loss functions including commonly used label smoothing (LS) and focal loss (FL) exhibits Neural Collapse.

Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

1 code implementation 19 Sep 2022 Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, Qing Qu

When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon.

Multi-class Classification, Representation Learning +1

On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features

no code implementations 2 Mar 2022 Jinxin Zhou, Xiao Li, Tianyu Ding, Chong You, Qing Qu, Zhihui Zhu

When training deep neural networks for classification tasks, an intriguing empirical phenomenon has been widely observed in the last-layer classifiers and features, where (i) the class means and the last-layer classifiers all collapse to the vertices of a Simplex Equiangular Tight Frame (ETF) up to scaling, and (ii) cross-example within-class variability of last-layer activations collapses to zero.
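
For readers unfamiliar with the Simplex ETF geometry referenced above, the following is a minimal NumPy sketch (dimensions are illustrative assumptions, not values from the paper) that constructs a $K$-class Simplex ETF and checks its two defining properties: unit-norm vectors and equal, maximally negative pairwise inner products.

    import numpy as np

    # Illustrative dimensions (assumptions): K classes, feature dimension d >= K.
    K, d = 10, 64
    rng = np.random.default_rng(0)

    # Random partial orthogonal matrix P (d x K) with P^T P = I_K.
    P, _ = np.linalg.qr(rng.standard_normal((d, K)))

    # Simplex Equiangular Tight Frame: K unit-norm vectors with equal pairwise angles.
    M = np.sqrt(K / (K - 1)) * P @ (np.eye(K) - np.ones((K, K)) / K)

    G = M.T @ M                                   # Gram matrix of the K class vectors
    print(np.allclose(np.diag(G), 1.0))           # all vectors have unit norm
    off_diag = G[~np.eye(K, dtype=bool)]
    print(np.allclose(off_diag, -1.0 / (K - 1)))  # pairwise inner products equal -1/(K-1)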

Robust Training under Label Noise by Over-parameterization

1 code implementation 28 Feb 2022 Sheng Liu, Zhihui Zhu, Qing Qu, Chong You

In this work, we propose a principled approach for robust training of over-parameterized deep networks in classification tasks where a proportion of training labels are corrupted.

Learning with noisy labels

Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery

no code implementations NeurIPS 2021 Lijun Ding, Liwei Jiang, Yudong Chen, Qing Qu, Zhihui Zhu

We study the robust recovery of a low-rank matrix from sparsely and grossly corrupted Gaussian measurements, with no prior knowledge on the intrinsic rank.

A Geometric Analysis of Neural Collapse with Unconstrained Features

1 code implementation NeurIPS 2021 Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu

In contrast to existing landscape analyses for deep neural networks, which are often disconnected from practice, our analysis of the simplified model not only explains what kind of features are learned in the last layer, but also shows why they can be efficiently optimized in the simplified settings, matching the empirical observations in practical deep network architectures.

Convolutional Normalization: Improving Deep Convolutional Network Robustness and Training

1 code implementation NeurIPS 2021 Sheng Liu, Xiao Li, Yuexiang Zhai, Chong You, Zhihui Zhu, Carlos Fernandez-Granda, Qing Qu

Furthermore, we show that our ConvNorm can reduce the layerwise spectral norm of the weight matrices and hence improve the Lipschitzness of the network, leading to easier training and improved robustness for deep ConvNets.

Generative Adversarial Network

From Symmetry to Geometry: Tractable Nonconvex Problems

no code implementations 14 Jul 2020 Yuqian Zhang, Qing Qu, John Wright

We highlight the key role of symmetry in shaping the objective landscape and discuss the different roles of rotational and discrete symmetries.

Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization

1 code implementation NeurIPS 2020 Chong You, Zhihui Zhu, Qing Qu, Yi Ma

This paper shows that with a double over-parameterization for both the low-rank matrix and the sparse corruption, gradient descent with discrepant learning rates provably recovers the underlying matrix even without prior knowledge of either the rank of the matrix or the sparsity of the corruption.

Short and Sparse Deconvolution --- A Geometric Approach

1 code implementation ICLR 2020 Yenson Lau, Qing Qu, Han-Wen Kuo, Pengcheng Zhou, Yuqian Zhang, John Wright

Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure.

Deblurring, Image Deblurring +1

Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications

no code implementations 20 Jan 2020 Qing Qu, Zhihui Zhu, Xiao Li, Manolis C. Tsakiris, John Wright, René Vidal

The problem of finding the sparsest vector (direction) in a low dimensional subspace can be considered as a homogeneous variant of the sparse recovery problem, which finds applications in robust subspace recovery, dictionary learning, sparse blind deconvolution, and many other problems in signal processing and machine learning.

Dictionary Learning, Representation Learning

Analysis of the Optimization Landscapes for Overcomplete Representation Learning

no code implementations 5 Dec 2019 Qing Qu, Yuexiang Zhai, Xiao Li, Yuqian Zhang, Zhihui Zhu

In this work, we show these problems can be formulated as $\ell^4$-norm optimization problems with spherical constraint, and study the geometric properties of their nonconvex optimization landscapes.
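
For reference, the generic form of such an $\ell^4$-norm optimization over the sphere (the notation below is a common convention assumed here, not a verbatim quote from the paper) is

    $\max_{\mathbf q} \; \|\mathbf Y^\top \mathbf q\|_4^4 \quad \text{subject to} \quad \|\mathbf q\|_2 = 1,$

where $\mathbf Y$ collects the observed data and the spherical constraint removes the scale ambiguity.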

Representation Learning

Weakly Convex Optimization over Stiefel Manifold Using Riemannian Subgradient-Type Methods

1 code implementation 12 Nov 2019 Xiao Li, Shixiang Chen, Zengde Deng, Qing Qu, Zhihui Zhu, Anthony Man-Cho So

To the best of our knowledge, these are the first convergence guarantees for using Riemannian subgradient-type methods to optimize a class of nonconvex nonsmooth functions over the Stiefel manifold.

Dictionary Learning, Vocal Bursts Type Prediction

Short-and-Sparse Deconvolution -- A Geometric Approach

1 code implementation 28 Aug 2019 Yenson Lau, Qing Qu, Han-Wen Kuo, Pengcheng Zhou, Yuqian Zhang, John Wright

This paper is motivated by recent theoretical advances, which characterize the optimization landscape of a particular nonconvex formulation of SaSD.

Deblurring, Image Deblurring +1

A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution

1 code implementation NeurIPS 2019 Qing Qu, Xiao Li, Zhihui Zhu

We study the multi-channel sparse blind deconvolution (MCS-BD) problem, whose task is to simultaneously recover a kernel $\mathbf a$ and multiple sparse inputs $\{\mathbf x_i\}_{i=1}^p$ from their circulant convolution $\mathbf y_i = \mathbf a \circledast \mathbf x_i $ ($i=1,\cdots, p$).
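
A minimal simulation of this measurement model (a NumPy sketch; the dimensions, sparsity level, and Gaussian kernel are illustrative assumptions, not settings from the paper) generates the circulant convolutions via the FFT:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, k = 256, 32, 5              # signal length, number of channels, nonzeros per channel (assumed)

    a = rng.standard_normal(n)
    a /= np.linalg.norm(a)            # kernel a, normalized to the unit sphere

    X = np.zeros((p, n))
    for i in range(p):
        support = rng.choice(n, size=k, replace=False)
        X[i, support] = rng.standard_normal(k)    # sparse input x_i

    # Circulant convolution y_i = a * x_i (cyclic) for each channel, computed via the FFT.
    Y = np.real(np.fft.ifft(np.fft.fft(a)[None, :] * np.fft.fft(X, axis=1), axis=1))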

Computational Efficiency

Convolutional Phase Retrieval via Gradient Descent

no code implementations 3 Dec 2017 Qing Qu, Yuqian Zhang, Yonina C. Eldar, John Wright

We study the convolutional phase retrieval problem of recovering an unknown signal $\mathbf x \in \mathbb C^n$ from $m$ measurements consisting of the magnitude of its cyclic convolution with a given kernel $\mathbf a \in \mathbb C^m$.
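
The measurement model can be simulated in a few lines (a NumPy sketch; the Gaussian kernel and the zero-padding used to embed the length-$n$ signal into the length-$m$ cyclic convolution are assumptions made here for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64, 512                    # signal length and number of measurements (illustrative)

    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)     # unknown signal in C^n
    a = rng.standard_normal(m) + 1j * rng.standard_normal(m)     # given kernel in C^m

    x_pad = np.concatenate([x, np.zeros(m - n, dtype=complex)])  # embed x into C^m
    # Phaseless measurements: magnitude of the cyclic convolution of a and x.
    y = np.abs(np.fft.ifft(np.fft.fft(a) * np.fft.fft(x_pad)))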

Retrieval

Convolutional Phase Retrieval

no code implementations NeurIPS 2017 Qing Qu, Yuqian Zhang, Yonina Eldar, John Wright

We study the convolutional phase retrieval problem, which asks us to recover an unknown signal ${\mathbf x} $ of length $n$ from $m$ measurements consisting of the magnitude of its cyclic convolution with a known kernel $\mathbf a$ of length $m$.

Retrieval

A Geometric Analysis of Phase Retrieval

1 code implementation 22 Feb 2016 Ju Sun, Qing Qu, John Wright

When the measurement vectors are generic (e.g., complex Gaussian) and the number of measurements is large enough ($m \ge C n \log^3 n$), with high probability, a natural least-squares formulation for GPR has the following benign geometric structure: (1) there are no spurious local minimizers, and all global minimizers are equal to the target signal $\mathbf x$, up to a global phase; and (2) the objective function has a negative curvature around each saddle point.
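
The natural least-squares formulation for generalized phase retrieval (GPR) mentioned here is typically written as follows (the exact normalization is an assumption, not quoted from the paper):

    $\min_{\mathbf z \in \mathbb C^n} \ f(\mathbf z) = \frac{1}{2m} \sum_{k=1}^{m} \left( y_k^2 - |\mathbf a_k^* \mathbf z|^2 \right)^2,$

where $y_k = |\mathbf a_k^* \mathbf x|$ are the phaseless measurements and the $\mathbf a_k$'s are the measurement vectors.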

GPR, Retrieval

Complete Dictionary Recovery over the Sphere II: Recovery by Riemannian Trust-region Method

no code implementations 15 Nov 2015 Ju Sun, Qing Qu, John Wright

We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb{R}^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.
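
A tiny synthetic instance of this data model (a NumPy sketch; the dimensions and the Bernoulli-Gaussian coefficient model are assumptions for illustration) looks like:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, theta = 20, 2000, 0.1       # dictionary size, number of samples, sparsity level (assumed)

    A0 = rng.standard_normal((n, n))  # complete (square, generically invertible) dictionary
    X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)  # sparse coefficients
    Y = A0 @ X0                       # observed data Y = A0 X0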

Dictionary Learning

Complete Dictionary Recovery over the Sphere I: Overview and the Geometric Picture

no code implementations 11 Nov 2015 Ju Sun, Qing Qu, John Wright

We give the first efficient algorithm that provably recovers $\mathbf A_0$ when $\mathbf X_0$ has $O(n)$ nonzeros per column, under a suitable probability model for $\mathbf X_0$.

Dictionary Learning

When Are Nonconvex Problems Not Scary?

3 code implementations 21 Oct 2015 Ju Sun, Qing Qu, John Wright

In this note, we focus on smooth nonconvex optimization problems that obey: (1) all local minimizers are also global; and (2) around any saddle point or local maximizer, the objective has a negative directional curvature.

Dictionary Learning, Retrieval +1

Complete Dictionary Recovery over the Sphere

1 code implementation 26 Apr 2015 Ju Sun, Qing Qu, John Wright

We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb R^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse.

Dictionary Learning

Finding a sparse vector in a subspace: Linear sparsity using alternating directions

1 code implementation NeurIPS 2014 Qing Qu, Ju Sun, John Wright

In this paper, we focus on a planted sparse model for the subspace: the target sparse vector is embedded in an otherwise random subspace.
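
A small instance of the planted sparse model (a NumPy sketch; the dimensions, sparsity, and the choice to hand the algorithm a generic orthonormal basis of the subspace are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    p, d, k = 1000, 10, 30            # ambient dimension, subspace dimension, planted nonzeros (assumed)

    x0 = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)
    x0[support] = rng.standard_normal(k)
    x0 /= np.linalg.norm(x0)          # target sparse direction

    # Subspace spanned by x0 and d-1 random directions; the algorithm only sees a generic basis Q.
    B = np.column_stack([x0, rng.standard_normal((p, d - 1))])
    Q, _ = np.linalg.qr(B)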

Dictionary Learning

Structured Priors for Sparse-Representation-Based Hyperspectral Image Classification

no code implementations 16 Jan 2014 Xiaoxia Sun, Qing Qu, Nasser M. Nasrabadi, Trac D. Tran

Pixel-wise classification, where each pixel is assigned to a predefined class, is one of the most important procedures in hyperspectral image (HSI) analysis.

Classification, General Classification +1
