no code implementations • 6 Jun 2024 • Tam Thuc Do, Parham Eftekhar, Seyed Alireza Hosseini, Gene Cheung, Philip Chou

We build interpretable and lightweight transformer-like neural networks by unrolling iterative optimization algorithms that minimize graph smoothness priors -- the quadratic graph Laplacian regularizer (GLR) and the $\ell_1$-norm graph total variation (GTV) -- subject to an interpolation constraint.
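As a toy illustration of the interpolation-constrained GLR minimization that such networks unroll (the graph, edge weights, and sample values below are invented for this sketch, not taken from the paper), minimizing $\mathbf{x}^\top \mathbf{L} \mathbf{x}$ subject to fixing the known samples reduces to a small linear system in the unknown entries:

```python
import numpy as np

# Path graph on 4 nodes: Laplacian L = D - W (toy example)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
L = np.diag(W.sum(1)) - W

# Known samples at nodes 0 and 3 (the interpolation constraint x_S = y_S)
S, U = [0, 3], [1, 2]
y_S = np.array([0.0, 3.0])

# Minimizing the GLR x^T L x subject to x_S = y_S yields the linear system
# L_UU x_U = -L_US y_S for the unknown entries x_U
x_U = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, S)] @ y_S)
print(x_U)  # linear interpolation on a path graph: [1. 2.]
```

Unrolling turns each iteration of a solver for such a system into a neural layer with learnable parameters.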

no code implementations • 3 Jan 2024 • Yasaman Parhizkar, Gene Cheung, Andrew W. Eckford

To extract knowledge from the cell firings, in this paper we learn an interpretable graph-based classifier from data to predict the firings of ganglion cells in response to visual stimuli.

no code implementations • 22 Nov 2023 • Tam Thuc Do, Philip A. Chou, Gene Cheung

We study 3D point cloud attribute compression via a volumetric approach: assuming point cloud geometry is known at both encoder and decoder, parameters $\theta$ of a continuous attribute function $f: \mathbb{R}^3 \mapsto \mathbb{R}$ are quantized to $\hat{\theta}$ and encoded, so that discrete samples $f_{\hat{\theta}}(\mathbf{x}_i)$ can be recovered at known 3D points $\mathbf{x}_i \in \mathbb{R}^3$ at the decoder.

no code implementations • 22 Nov 2023 • Tam Thuc Do, Philip A. Chou, Gene Cheung

We extend a previous study on a 3D point cloud attribute compression scheme that uses a volumetric approach: given a target volumetric attribute function $f : \mathbb{R}^3 \mapsto \mathbb{R}$, we quantize and encode parameters $\theta$ that characterize $f$ at the encoder, for reconstruction $f_{\hat{\theta}}(\mathbf{x})$ at known 3D points $\mathbf{x}$ at the decoder.

1 code implementation • 18 Sep 2023 • Niruhan Viswarupan, Gene Cheung, Fengbo Lan, Michael Brown

A noise-corrupted image often requires interpolation.

no code implementations • 5 Jul 2023 • Yeganeh Gharedaghi, Gene Cheung, Xianming Liu

Images captured in poorly lit conditions are often corrupted by acquisition noise.

no code implementations • 4 Jul 2023 • Chinthaka Dinesh, Junfei Wang, Gene Cheung, Pirathayini Srikantha

To maintain stable grid operations, system monitoring and control processes require the computation of grid states (e.g., voltage magnitudes and angles) at high granularity.

no code implementations • 2 Jun 2023 • Saghar Bagheri, Gene Cheung, Tim Eadie

Specifically, we first show that greedily removing an edge at a time that induces the minimal change in the second eigenvalue leads to a sparse graph with good GCN performance.

no code implementations • 1 Apr 2023 • Tam Thuc Do, Philip A. Chou, Gene Cheung

We study 3D point cloud attribute compression using a volumetric approach: given a target volumetric attribute function $f : \mathbb{R}^3 \rightarrow \mathbb{R}$, we quantize and encode parameter vector $\theta$ that characterizes $f$ at the encoder, for reconstruction $f_{\hat{\theta}}(\mathbf{x})$ at known 3D points $\mathbf{x}$'s at the decoder.

no code implementations • 25 Oct 2022 • Yuejiang Li, Hong Vicky Zhao, Gene Cheung

To minimize worst-case reconstruction error of the linear system solution $\mathbf{x}^* = \mathbf{C}^{-1} \mathbf{H}^\top \mathbf{y}$ with symmetric coefficient matrix $\mathbf{C} = \mathbf{H}^\top \mathbf{H} + \mu \mathbf{L}_{rw}^\top \mathbf{L}_{rw}$, the sampling objective is to choose $\mathbf{H}$ to maximize the smallest eigenvalue $\lambda_{\min}(\mathbf{C})$ of $\mathbf{C}$.
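This sampling objective can be sketched on a toy graph (the graph weights, $\mu$, and sampling budget below are assumptions for illustration): enumerate candidate sampling sets, form $\mathbf{C}$ for each, and keep the set that maximizes $\lambda_{\min}(\mathbf{C})$:

```python
import numpy as np
from itertools import combinations

# Toy 4-node graph; W, mu, and the budget of 2 samples are illustrative only
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
L_rw = np.eye(4) - np.diag(1.0 / W.sum(1)) @ W   # random-walk Laplacian
mu = 0.1

def lam_min(S):
    H = np.eye(4)[list(S)]                        # sampling matrix: rows of I
    C = H.T @ H + mu * L_rw.T @ L_rw              # symmetric coefficient matrix
    return np.linalg.eigvalsh(C)[0]               # smallest eigenvalue

# Exhaustive E-optimal design over all 2-node sampling sets
best = max(combinations(range(4), 2), key=lam_min)
print(best, lam_min(best))
```

The brute-force enumeration here is exponential in the budget; avoiding exactly this cost at scale is the point of fast sampling schemes.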

no code implementations • 18 Aug 2022 • Chinthaka Dinesh, Gene Cheung, Saghar Bagheri, Ivan V. Bajic

Experimental results show that our signed graph sampling method noticeably outperformed existing fast sampling schemes on various datasets.

no code implementations • 4 Aug 2022 • Saghar Bagheri, Chinthaka Dinesh, Gene Cheung, Timothy Eadie

Prediction of annual crop yields at a county granularity is important for national food production and price stability.

no code implementations • 9 Jun 2022 • Fei Chen, Gene Cheung, Xue Zhang

In this paper, focusing on manifold graphs -- collections of uniform discrete samples on low-dimensional continuous manifolds -- we generalize GLR to gradient graph Laplacian regularizer (GGLR) that promotes planar / piecewise planar (PWP) signal reconstruction.

no code implementations • 2 Mar 2022 • Saghar Bagheri, Tam Thuc Do, Gene Cheung, Antonio Ortega

Transform coding to sparsify signal representations remains crucial in an image compression pipeline.

no code implementations • 28 Feb 2022 • Jin Zeng, Yang Liu, Gene Cheung, Wei Hu

Specifically, based on a spectral analysis of multilayer GCN output, we derive a spectrum prior for the graph Laplacian matrix $\mathbf{L}$ to robustify the model expressiveness against over-smoothing.

no code implementations • 15 Dec 2021 • Fei Chen, Gene Cheung, Xue Zhang

Experiments show that our embedding is among the fastest in the literature, while producing the best clustering performance for manifold graphs.

no code implementations • 9 Nov 2021 • Xue Zhang, Gene Cheung, Jiahao Pang, Yash Sanghvi, Abhiram Gnanasambandam, Stanley H. Chan

Specifically, we model depth formation as a combined process of signal-dependent noise addition and non-uniform log-based quantization.

no code implementations • 21 Oct 2021 • Sadid Sahami, Gene Cheung, Chia-Wen Lin

We prove that, after partitioning $\mathcal{G}$ into $Q$ sub-graphs $\{\mathcal{G}^q\}^Q_{q=1}$, the smallest Gershgorin circle theorem (GCT) lower bound of $Q$ corresponding coefficient matrices -- $\min_q \lambda^-_{\min}(\mathbf{B}^q)$ -- is a lower bound for $\lambda_{\min}(\mathbf{B})$.
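The GCT lower bound referenced above is cheap to evaluate; a minimal sketch (matrix values invented for illustration) compares it against the true smallest eigenvalue:

```python
import numpy as np

def gct_lower_bound(B):
    # Gershgorin circle theorem: every eigenvalue lies in some disc centered
    # at B_ii with radius sum_{j != i} |B_ij|, so the smallest left disc edge
    # lower-bounds lambda_min(B)
    radii = np.abs(B).sum(1) - np.abs(np.diag(B))
    return (np.diag(B) - radii).min()

B = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  5.0, -2.0],
              [ 0.0, -2.0,  6.0]])
lb = gct_lower_bound(B)
print(lb, np.linalg.eigvalsh(B)[0])  # the bound never exceeds lambda_min
```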

no code implementations • 6 Oct 2021 • Fen Wang, Gene Cheung, Taihao Li, Ying Du, Yu-Ping Ruan

Sensor placement for linear inverse problems is the selection of locations to assign sensors so that the entire physical signal can be well recovered from partial observations.

no code implementations • 10 Sep 2021 • Cheng Yang, Gene Cheung, Wai-tian Tan, Guangtao Zhai

Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.

no code implementations • NeurIPS 2021 • Cheng Yang, Gene Cheung, Guangtao Zhai

We repose the SDR dual for solution $\bar{\mathbf{H}}$, then replace the PSD cone constraint $\bar{\mathbf{H}} \succeq 0$ with linear constraints derived from GDPA -- sufficient conditions to ensure $\bar{\mathbf{H}}$ is PSD -- so that the optimization becomes an LP per iteration.

no code implementations • 10 Mar 2021 • Chinthaka Dinesh, Gene Cheung, Ivan Bajic

Specifically, to articulate a sampling objective, we first assume a super-resolution (SR) method based on feature graph Laplacian regularization (FGLR) that reconstructs the original high-resolution PC, given 3D points chosen by a sampling matrix $\mathbf{H}$.

no code implementations • 15 Feb 2021 • Yung-Hsuan Chao, Haoran Hong, Gene Cheung, Antonio Ortega

With a conventional Bayer pattern, the data captured at each pixel is a single color component (R, G, or B). The sensed data then undergoes demosaicking (interpolation of RGB components per pixel) and conversion to an array of sub-aperture images (SAIs).

no code implementations • 25 Jan 2021 • Fei Chen, Gene Cheung, Xue Zhang

In the graph signal processing (GSP) literature, it has been shown that signal-dependent graph Laplacian regularizer (GLR) can efficiently promote piecewise constant (PWC) signal reconstruction for various image restoration tasks.
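A minimal sketch of signal-dependent GLR denoising on a path graph (the signal, noise level, weight bandwidth $\sigma^2$, and $\mu$ below are illustrative choices, not the paper's): edge weights computed from the observed signal stay small across a discontinuity, so smoothing preserves the jump while attenuating noise on the flat segments.

```python
import numpy as np

# Piecewise-constant signal on a 20-node path graph, plus Gaussian noise
rng = np.random.default_rng(0)
x0 = np.concatenate([np.zeros(10), np.ones(10)])
y = x0 + 0.2 * rng.standard_normal(20)

# Signal-dependent edge weights w_ij = exp(-(y_i - y_j)^2 / sigma^2):
# small across the discontinuity, so the jump is preserved
d = np.diff(y)
w = np.exp(-d**2 / 0.2)
W = np.diag(w, 1) + np.diag(w, -1)
L = np.diag(W.sum(1)) - W

# MAP estimate with GLR prior: min_x ||y - x||^2 + mu * x^T L x
mu = 2.0
x_hat = np.linalg.solve(np.eye(20) + mu * L, y)
print(np.linalg.norm(y - x0), np.linalg.norm(x_hat - x0))
```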

no code implementations • 25 Oct 2020 • Saghar Bagheri, Gene Cheung, Antonio Ortega, Fen Wang

Learning a suitable graph is an important precursor to many graph signal processing (GSP) pipelines, such as graph spectral signal compression and denoising.

1 code implementation • 21 Oct 2020 • Huy Vu, Gene Cheung, Yonina C. Eldar

While deep learning (DL) architectures like convolutional neural networks (CNNs) have enabled effective solutions in image denoising, in general their implementations overly rely on training data, lack interpretability, and require tuning of a large parameter set.

1 code implementation • 15 Jun 2020 • Cheng Yang, Gene Cheung, Wei Hu

Given a convex and differentiable objective $Q(\mathbf{M})$ for a real symmetric matrix $\mathbf{M}$ in the positive definite (PD) cone -- used to compute Mahalanobis distances -- we propose a fast general metric learning framework that is entirely projection-free.

no code implementations • 9 Mar 2020 • Yuichi Tanaka, Yonina C. Eldar, Antonio Ortega, Gene Cheung

In this article, we review current progress on sampling over graphs focusing on theory and potential applications.

no code implementations • 28 Jan 2020 • Cheng Yang, Gene Cheung, Wei Hu

We propose a fast general projection-free metric learning framework, where the minimization objective $\min_{\textbf{M} \in \mathcal{S}} Q(\textbf{M})$ is a convex differentiable function of the metric matrix $\textbf{M}$, and $\textbf{M}$ resides in the set $\mathcal{S}$ of generalized graph Laplacian matrices for connected graphs with positive edge weights and node degrees.

no code implementations • 6 Dec 2019 • Minxiang Ye, Vladimir Stankovic, Lina Stankovic, Gene Cheung

In this paper, we propose a robust binary classifier, based on CNNs, to learn deep metric functions, which are then used to construct an optimal underlying graph structure used to clean noisy labels via graph Laplacian regularization (GLR).

no code implementations • 22 Jul 2019 • Wei Hu, Xiang Gao, Gene Cheung, Zongming Guo

In this work, we assume instead the availability of a relevant feature vector $\mathbf{f}_i$ per node $i$, from which we compute an optimal feature graph via optimization of a feature metric.

1 code implementation • 31 Jul 2018 • Jin Zeng, Jiahao Pang, Wenxiu Sun, Gene Cheung

In this work, we combine the robustness merit of model-based approaches and the learning power of data-driven approaches for real image denoising.

1 code implementation • 22 Jul 2018 • Chih-Chung Hsu, Chia-Wen Lin, Weng-Tai Su, Gene Cheung

Although generative adversarial networks (GANs) can hallucinate photo-realistic high-resolution (HR) faces from low-resolution (LR) faces, they cannot guarantee preservation of the identities of the hallucinated HR faces, making those faces poorly recognizable.

no code implementations • 20 Mar 2018 • Jin Zeng, Gene Cheung, Michael Ng, Jiahao Pang, Cheng Yang

Because the patches on the manifold are observed only at discrete points, we approximate the continuous-domain manifold dimension computation with a patch-based graph Laplacian regularizer, and propose a new noise-robust discrete patch distance measure to quantify the similarity between two same-sized surface patches for graph construction.

no code implementations • 22 Feb 2018 • Yuanchao Bai, Gene Cheung, Xian-Ming Liu, Wen Gao

We leverage the new graph spectral interpretation for RGTV to design an efficient algorithm that solves for the skeleton image and the blur kernel alternately.

no code implementations • 20 Feb 2018 • Qi Chang, Gene Cheung, Yao Zhao, Xiaolong Li, Rongrong Ni

If sufficiently smooth, we pose a maximum a posteriori (MAP) problem using either a quadratic Laplacian regularizer or a graph total variation (GTV) term as signal prior.

no code implementations • 24 Dec 2017 • Yuanchao Bai, Gene Cheung, Xian-Ming Liu, Wen Gao

The problem can be solved in two parts: i) estimate a blur kernel from the blurry image, and ii) given the estimated blur kernel, deconvolve the blurry input to restore the target image.

no code implementations • 30 Apr 2017 • Amin Zheng, Gene Cheung, Dinei Florencio

We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily.

no code implementations • 10 Feb 2017 • Weng-Tai Su, Gene Cheung, Chia-Wen Lin

The recent advent of graph signal processing (GSP) has led to the development of new graph-based transforms and wavelets for image / video coding, where the underlying graph describes inter-pixel correlations.

no code implementations • 15 Nov 2016 • Gene Cheung, Weng-Tai Su, Yu Mao, Chia-Wen Lin

In response, we derive an optimal perturbation matrix $\boldsymbol{\Delta}$ -- based on a fast lower-bound computation of the minimum eigenvalue of $\mathbf{L}$ via a novel application of the Haynsworth inertia additivity formula -- so that $\mathbf{L} + \boldsymbol{\Delta}$ is positive semi-definite, resulting in a stable signal prior.
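A minimal sketch of the perturbation idea (with a cheap Gershgorin-style bound standing in for the paper's Haynsworth-based computation, and an invented signed Laplacian): shift the diagonal by the negated lower bound so that the perturbed matrix is guaranteed PSD.

```python
import numpy as np

# A signed graph Laplacian with negative edge weights can be indefinite
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  0.5,  0.5],
              [ 0.0,  0.5, -0.5]])

# Cheap Gershgorin lower bound on lambda_min(L): smallest left disc edge
lb = (np.diag(L) - (np.abs(L).sum(1) - np.abs(np.diag(L)))).min()

# Diagonal perturbation Delta = -lb * I guarantees L + Delta is PSD,
# since every Gershgorin disc of L + Delta then lies in [0, inf)
Delta = max(0.0, -lb) * np.eye(3)
print(np.linalg.eigvalsh(L + Delta)[0] >= -1e-12)  # True
```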

no code implementations • 7 Jul 2016 • Xianming Liu, Gene Cheung, Xiaolin Wu, Debin Zhao

In this paper, we combine three image priors -- a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches -- to construct an efficient JPEG soft decoding algorithm.

no code implementations • 27 Apr 2016 • Jiahao Pang, Gene Cheung

Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain.

no code implementations • 25 Feb 2014 • Pengfei Wan, Gene Cheung, Philip A. Chou, Dinei Florencio, Cha Zhang, Oscar C. Au

In texture-plus-depth representation of a 3D scene, depth maps from different camera viewpoints are typically lossily compressed via the classical transform coding / coefficient quantization paradigm.

no code implementations • 18 Oct 2012 • Thomas Maugey, Ismael Daribo, Gene Cheung, Pascal Frossard

In this paper, we propose a novel multiview data representation that satisfies bandwidth and storage constraints in an interactive multiview streaming system.
