1 code implementation • 1 Jun 2024 • Elisha Dayag, Kevin Bui, Fredrick Park, Jack Xin
Based on the transformed $\ell_1$ regularization, the transformed total variation (TTV) achieves robust image recovery that is competitive with other nonconvex total variation (TV) regularizers, such as TV$^p$, $0<p<1$.
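The transformed $\ell_1$ penalty underlying TTV replaces the plain $|x|$ of the $\ell_1$ norm with $\rho_a(x) = \frac{(a+1)|x|}{a+|x|}$, which interpolates between $\ell_0$-like behavior (small $a$) and the $\ell_1$ norm (large $a$). A minimal sketch of the penalty itself (the function name and the default $a$ are illustrative, not the authors' code):

```python
import numpy as np

def transformed_l1(x, a=1.0):
    """Elementwise transformed l1 penalty rho_a(x) = (a+1)|x| / (a+|x|).

    As a -> infinity the penalty approaches |x| (the l1 norm);
    as a -> 0+ it approaches an l0-like count of nonzero entries.
    The penalty is bounded above by a+1, so large entries are not
    over-penalized the way they are under the l1 norm.
    """
    ax = np.abs(x)
    return (a + 1.0) * ax / (a + ax)

# TTV applies this penalty to image gradient magnitudes in place of
# the plain l1 norm used by standard anisotropic total variation.
x = np.array([0.0, 0.5, 2.0, 10.0])
print(transformed_l1(x, a=1.0))
```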
no code implementations • 2 Jul 2023 • Kevin Bui, Fanghui Xue, Fredrick Park, Yingyong Qi, Jack Xin
This time-consuming, three-step process is a result of using subgradient descent to train CNNs.
1 code implementation • 1 Jul 2023 • Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin
Poisson noise commonly occurs in images captured by photon-limited imaging systems, such as those used in astronomy and medicine.
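Photon-limited acquisition is typically simulated by scaling a clean image to a chosen peak photon count and drawing Poisson samples, so the noise is signal-dependent. A minimal sketch (the `peak` parameter and function name are illustrative):

```python
import numpy as np

def add_poisson_noise(img, peak=30.0, rng=None):
    """Simulate photon-limited acquisition of a clean image in [0, 1].

    Lower `peak` means fewer photons per pixel and hence stronger,
    signal-dependent Poisson noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(img * peak) / peak

clean = np.full((64, 64), 0.5)
noisy = add_poisson_noise(clean, peak=10.0, rng=np.random.default_rng(0))
```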
1 code implementation • 6 Jan 2023 • Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin
In this paper, we aim to segment an image degraded by blur and Poisson noise.
1 code implementation • 21 Feb 2022 • Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin
In this paper, we design an efficient, multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV).
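The AITV regularizer is the weighted difference $\|\nabla u\|_1 - \alpha \|\nabla u\|_{2,1}$, $\alpha \in [0,1]$, i.e., anisotropic TV minus a fraction of isotropic TV. A minimal sketch with forward differences (boundary handling and function name are illustrative choices, not the paper's implementation):

```python
import numpy as np

def aitv(u, alpha=0.5):
    """Weighted difference of anisotropic and isotropic total variation:
    AITV(u) = ||grad u||_1 - alpha * ||grad u||_{2,1}, alpha in [0, 1].

    Since |ux| + |uy| >= sqrt(ux^2 + uy^2) pointwise, AITV is
    nonnegative for alpha <= 1.
    """
    ux = np.diff(u, axis=1, append=u[:, -1:])  # horizontal forward differences
    uy = np.diff(u, axis=0, append=u[-1:, :])  # vertical forward differences
    anisotropic = np.abs(ux) + np.abs(uy)      # l1 norm of the gradient
    isotropic = np.sqrt(ux**2 + uy**2)         # l2,1 norm of the gradient
    return float(np.sum(anisotropic - alpha * isotropic))

u = np.zeros((8, 8))
u[:, 4:] = 1.0  # piecewise-constant image with one vertical edge
print(aitv(u, alpha=0.5))
```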
1 code implementation • 3 Oct 2020 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming: it compresses a CNN architecture into less memory storage while preserving model accuracy after channel pruning.
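In network slimming, a sparsity penalty is applied to per-channel scaling factors (e.g., batch-normalization $\gamma$ values), and channels whose factors shrink toward zero are pruned. A minimal numpy sketch of the pruning step with a global-percentile threshold (the threshold rule and names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def channel_mask(scales, prune_ratio=0.5):
    """Keep channels whose |scale| exceeds a global percentile threshold.

    `scales` holds the learned per-channel scaling factors; a sparsity
    penalty such as T-l1 drives many of them toward zero during
    training, so a magnitude threshold selects the channels to keep.
    """
    mags = np.abs(scales)
    thresh = np.quantile(mags, prune_ratio)
    return mags > thresh

gammas = np.array([0.9, 0.01, 0.5, 0.003, 0.7, 0.02])
mask = channel_mask(gammas, prune_ratio=0.5)  # True = keep the channel
```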
no code implementations • 5 Sep 2020 • Jacob Householder, Andrew Householder, John Paul Gomez-Reed, Fredrick Park, Shuai Zhang
While tests do exist for COVID-19, the goal of our research is to explore other methods of identifying infected individuals.
1 code implementation • 9 May 2020 • Kevin Bui, Fredrick Park, Yifei Lou, Jack Xin
In a class of piecewise-constant image segmentation models, we propose to incorporate a weighted difference of anisotropic and isotropic total variation (AITV) to regularize the partition boundaries in an image.
no code implementations • 17 Dec 2019 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.
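The parameter growth can be made concrete by counting a standard 2-D convolutional layer's trainable weights, $C_{\text{out}} \cdot C_{\text{in}} \cdot k \cdot k$ plus $C_{\text{out}}$ biases (a back-of-the-envelope sketch, not tied to any specific architecture in the paper):

```python
def conv2d_params(c_in, c_out, k, bias=True):
    """Trainable parameters in a standard 2-D convolution layer:
    one k x k kernel per (input channel, output channel) pair,
    plus an optional bias per output channel."""
    return c_out * (c_in * k * k) + (c_out if bias else 0)

# Widening: doubling both input and output channels roughly
# quadruples the weight count of a single layer.
narrow = conv2d_params(64, 64, 3)     # 64*64*9 + 64 = 36928
wide = conv2d_params(128, 128, 3)     # 128*128*9 + 128 = 147584
```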