Search Results for author: Junhong Lin

Found 18 papers, 2 papers with code

Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images

no code implementations • 2 Mar 2024 • Shufan Pei, Junhong Lin, Wenxi Liu, Tiesong Zhao, Chia-Wen Lin

In this way, we obtain an image free of both low-light degradation and light effects, which improves the performance of nighttime object detection.

Object Detection

Large Language Models for Forecasting and Anomaly Detection: A Systematic Literature Review

no code implementations • 15 Feb 2024 • Jing Su, Chufeng Jiang, Xin Jin, Yuxin Qiao, Tingsong Xiao, Hongda Ma, Rong Wei, Zhi Jing, Jiajun Xu, Junhong Lin

This systematic literature review comprehensively examines the application of Large Language Models (LLMs) in forecasting and anomaly detection, highlighting the current state of research, inherent challenges, and prospective future directions.

Anomaly Classification Anomaly Detection +3

PAC-Bayesian Adversarially Robust Generalization Bounds for Graph Neural Network

no code implementations • 6 Feb 2024 • Tan Sun, Junhong Lin

As corollaries, we derive better PAC-Bayesian robust generalization bounds for GCN in the standard setting, which improve the bounds in (Liao et al., 2020) by avoiding exponential dependence on the maximum node degree.

Generalization Bounds

On Convergence of Adam for Stochastic Optimization under Relaxed Assumptions

no code implementations • 6 Feb 2024 • Yusu Hong, Junhong Lin

The Adaptive Moment Estimation (Adam) algorithm is highly effective for training models across a wide range of deep learning tasks.

Stochastic Optimization
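
Since the two entries above both concern Adam, a minimal sketch of the standard Adam update rule (as introduced by Kingma & Ba) may help fix notation. This is an illustrative NumPy implementation, not the analysis of the paper; the function name, default hyper-parameters, and toy objective are assumptions.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of the standard Adam update (illustrative sketch only).

    theta : current parameters             m, v : first/second moment estimates
    grad  : stochastic gradient at theta   t    : 1-based iteration counter
    """
    m = beta1 * m + (1 - beta1) * grad            # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy usage: minimize f(x) = ||x||^2 / 2 with noisy gradients
theta, m, v = np.ones(5), np.zeros(5), np.zeros(5)
rng = np.random.default_rng(0)
for t in range(1, 201):
    grad = theta + 0.1 * rng.standard_normal(5)   # stochastic gradient of f
    theta, m, v = adam_step(theta, grad, m, v, t, lr=1e-2)
```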

High Probability Convergence of Adam Under Unbounded Gradients and Affine Variance Noise

no code implementations • 3 Nov 2023 • Yusu Hong, Junhong Lin

To overcome these limitations, we provide an in-depth analysis and show that Adam can converge to a stationary point in high probability with a rate of $\mathcal{O}\left({\rm poly}(\log T)/\sqrt{T}\right)$ under coordinate-wise "affine" variance noise, without requiring a bounded-gradient assumption or any problem-dependent knowledge in advance to tune hyper-parameters.
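
For context, "affine variance" noise usually means a noise bound whose scale grows affinely with the squared gradient. A coordinate-wise version of the standard condition is sketched below; the exact constants and probabilistic formulation used in the paper may differ.

```latex
% Coordinate-wise affine variance noise (standard form; stated as an
% assumption about the usual definition, not a quotation of the paper).
\mathbb{E}\left[\, \big(g_{t,i} - \nabla_i f(x_t)\big)^2 \,\middle|\, x_t \right]
\;\le\; \sigma_{0,i}^2 \;+\; \sigma_{1,i}^2 \,\big(\nabla_i f(x_t)\big)^2 ,
\qquad i = 1, \dots, d .
```

Setting every $\sigma_{1,i}=0$ recovers the usual bounded-variance assumption; allowing $\sigma_{1,i}>0$ lets the noise level grow with the gradient magnitude.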

Unsupervised detection of small hyperreflective features in ultrahigh resolution optical coherence tomography

no code implementations • 26 Mar 2023 • Marcel Reimann, Jungeun Won, Hiroyuki Takahashi, Antonio Yaghy, Yunchan Hwang, Stefan Ploner, Junhong Lin, Jessica Girgis, Kenneth Lam, Siyu Chen, Nadia K. Waheed, Andreas Maier, James G. Fujimoto

Recent advances in optical coherence tomography, such as the development of high-speed ultrahigh-resolution scanners and corresponding signal processing techniques, may reveal new potential biomarkers in retinal diseases.

LMQFormer: A Laplace-Prior-Guided Mask Query Transformer for Lightweight Snow Removal

1 code implementation • 10 Oct 2022 • Junhong Lin, Nanfeng Jiang, Zhentao Zhang, Weiling Chen, Tiesong Zhao

Secondly, we design a Mask Query Transformer (MQFormer) to remove snow with the coarse mask, where we use two parallel encoders and a hybrid decoder to learn extensive snow features while meeting lightweight requirements.

Snow Removal

Kernel Conjugate Gradient Methods with Random Projections

no code implementations • 5 Nov 2018 • Junhong Lin, Volkan Cevher

We propose and study kernel conjugate gradient methods (KCGM) with random projections for least-squares regression over a separable Hilbert space.

regression
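
The abstract above describes conjugate gradient iterations for kernel least squares combined with random projections. The snippet below is a rough sketch-and-solve illustration (a Gaussian sketch followed by a plain CG loop, with the iteration count acting as the regularization parameter); the projection scheme, stopping rule, and prediction formula are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian kernel matrix between the rows of X and the rows of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kcgm_random_projection(X, y, m=50, gamma=1.0, n_iter=20, seed=0):
    """Conjugate gradient on a randomly projected kernel least-squares system.

    A Gaussian sketch S compresses the n x n system K a = y to the m x m
    system (S K S^T) b = S y; early stopping of CG regularizes the solution."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection
    K = rbf_kernel(X, X, gamma)
    A = S @ K @ S.T
    b = S @ y
    beta = np.zeros(m)
    r = b - A @ beta
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):                        # plain CG iterations
        Ap = A @ p
        step = rs / (p @ Ap)
        beta += step * p
        r -= step * Ap
        rs_new = r @ r
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    # predictor uses the back-projected coefficients a = S^T beta
    return lambda Xt: rbf_kernel(Xt, X, gamma) @ (S.T @ beta)

# toy usage: fit a noisy sine curve
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
predict = kcgm_random_projection(X, y, m=40, gamma=0.5, n_iter=30)
```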

Optimal Distributed Learning with Multi-pass Stochastic Gradient Methods

no code implementations • ICML 2018 • Junhong Lin, Volkan Cevher

We study generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS).

regression

Optimal Rates of Sketched-regularized Algorithms for Least-Squares Regression over Hilbert Spaces

no code implementations • ICML 2018 • Junhong Lin, Volkan Cevher

We investigate regularized algorithms combined with projection for the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space.

regression

Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms

no code implementations • 22 Jan 2018 • Junhong Lin, Volkan Cevher

We then extend our results to spectral-regularization algorithms (SRA), including kernel ridge regression (KRR), kernel principal component analysis, and gradient methods.

regression

Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces

no code implementations • 20 Jan 2018 • Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher

In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space.

regression

Optimal Rates for Learning with Nyström Stochastic Gradient Methods

no code implementations • 21 Oct 2017 • Junhong Lin, Lorenzo Rosasco

In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches.

regression
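
As a rough illustration of the combination described above, the following sketch pairs Nyström subsampling (restricting the estimator to kernel functions centered at a random subset of points) with mini-batch SGD over multiple passes. The subsample size, step size, and batch size are placeholder choices, not the tuning analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    # kernel matrix between the rows of X and the rows of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_sgd(X, y, m=20, gamma=1.0, passes=5, batch=8, lr=0.1, seed=0):
    """Mini-batch SGD over multiple passes, with the estimator restricted to
    the span of kernel functions at m randomly subsampled (Nystrom) centers.
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, size=m, replace=False)]    # Nystrom subsample
    c = np.zeros(m)                                      # coefficients on the centers
    for _ in range(passes):                              # multiple passes over the data
        order = rng.permutation(n)
        for start in range(0, n, batch):
            idx = order[start:start + batch]
            Kb = gaussian_kernel(X[idx], centers, gamma) # (batch, m) feature block
            resid = Kb @ c - y[idx]
            c -= lr * (Kb.T @ resid) / len(idx)          # mini-batch least-squares gradient
    return lambda Xt: gaussian_kernel(Xt, centers, gamma) @ c
```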

Generalization Properties of Doubly Stochastic Learning Algorithms

no code implementations • 3 Jul 2017 • Junhong Lin, Lorenzo Rosasco

In this paper, we provide an in-depth theoretical analysis of different variants of doubly stochastic learning algorithms in the setting of nonparametric regression in a reproducing kernel Hilbert space with the square loss.

Optimal Learning for Multi-pass Stochastic Gradient Methods

no code implementations • NeurIPS 2016 • Junhong Lin, Lorenzo Rosasco

We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed.

Optimal Rates for Multi-pass Stochastic Gradient Methods

no code implementations • 28 May 2016 • Junhong Lin, Lorenzo Rosasco

As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).

Generalization Properties and Implicit Regularization for Multiple Passes SGM

1 code implementation • 26 May 2016 • Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.
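
To make "multiple passes as implicit regularization" concrete for the three multi-pass SGM entries above, here is a small hedged sketch: plain SGD on the logistic loss over a linear model, where the number of passes (selected on a validation split) plays the role of the regularization parameter. The loss, step-size schedule, and model-selection rule are illustrative choices, not the ones analyzed in the papers.

```python
import numpy as np

def multipass_sgm(X, y, X_val, y_val, max_passes=50, lr=0.5, seed=0):
    """SGD on the logistic loss over a linear model (labels in {-1, +1});
    the number of passes acts as the regularization parameter and is chosen
    on a validation split. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_val = w.copy(), np.inf
    for p in range(max_passes):
        for i in rng.permutation(n):                          # one pass over the data
            margin = y[i] * (X[i] @ w)
            sig = 1.0 / (1.0 + np.exp(np.clip(margin, -30, 30)))
            w -= lr / np.sqrt(p + 1) * (-y[i] * X[i] * sig)   # logistic-loss gradient step
        val = np.mean(np.logaddexp(0.0, -y_val * (X_val @ w)))
        if val < best_val:                                    # keep the best-performing pass
            best_val, best_w = val, w.copy()
    return best_w
```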

Iterative Regularization for Learning with Convex Loss Functions

no code implementations • 31 Mar 2015 • Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method.

BIG-bench Machine Learning
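
The entry above proposes iterative regularization via the subgradient method for convex losses. The sketch below illustrates the general idea on the non-smooth hinge loss: a full-batch subgradient iteration with no explicit penalty, where stopping after a fixed number of steps is the regularization mechanism. The loss, step-size schedule, and stopping rule are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

def subgradient_early_stopping(X, y, n_iter=100, lr=0.1):
    """Full-batch subgradient method on the (non-smooth) hinge loss of a
    linear model, labels assumed in {-1, +1}. Early stopping after n_iter
    iterations is the only regularization. Illustrative sketch only."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iter + 1):
        margins = y * (X @ w)
        active = margins < 1.0                               # points with positive hinge loss
        g = -(y[active, None] * X[active]).sum(axis=0) / n   # a subgradient of the mean hinge loss
        w -= (lr / np.sqrt(t)) * g                           # diminishing step size
    return w
```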
