no code implementations • 2 Mar 2024 • Shufan Pei, Junhong Lin, Wenxi Liu, Tiesong Zhao, Chia-Wen Lin
In this way, we obtain an image free of low-light degradation and light effects, which improves the performance of nighttime object detection.
no code implementations • 15 Feb 2024 • Jing Su, Chufeng Jiang, Xin Jin, Yuxin Qiao, Tingsong Xiao, Hongda Ma, Rong Wei, Zhi Jing, Jiajun Xu, Junhong Lin
This systematic literature review comprehensively examines the application of Large Language Models (LLMs) in forecasting and anomaly detection, highlighting the current state of research, inherent challenges, and prospective future directions.
no code implementations • 6 Feb 2024 • Tan Sun, Junhong Lin
As corollaries, we derive better PAC-Bayesian robust generalization bounds for GCN in the standard setting, which improve the bounds in (Liao et al., 2020) by avoiding exponential dependence on the maximum node degree.
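For context, the generic McAllester-style PAC-Bayesian template that such bounds instantiate reads: with probability at least $1-\delta$ over $n$ samples, for every posterior $Q$ over hypotheses and any fixed prior $P$,

$$\mathbb{E}_{h \sim Q}\, L(h) \;\le\; \mathbb{E}_{h \sim Q}\, \hat{L}_n(h) \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{n}{\delta}}{2(n-1)}},$$

and the contribution here lies in how the KL term is controlled for GCNs without exponential dependence on the maximum node degree.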
no code implementations • 6 Feb 2024 • Yusu Hong, Junhong Lin
The Adaptive Moment Estimation (Adam) algorithm is highly effective for training models on a wide range of deep learning tasks.
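For reference, the update these works analyze is standard Adam (Kingma & Ba, 2015); a minimal NumPy sketch of one step:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update on parameters theta at iteration t >= 1."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```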
no code implementations • 3 Nov 2023 • Yusu Hong, Junhong Lin
To overcome these limitations, we provide a deep analysis and show that Adam converges to a stationary point with high probability at a rate of $\mathcal{O}\left({\rm poly}(\log T)/\sqrt{T}\right)$ under coordinate-wise "affine" variance noise, without requiring any bounded-gradient assumption or any problem-dependent knowledge in advance for tuning hyper-parameters.
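One common formalization of coordinate-wise affine variance noise (the paper's exact conditions may differ) assumes the stochastic gradient $g_t$ satisfies, for each coordinate $i$,

$$\mathbb{E}\big[\,|g_{t,i} - \nabla_i f(x_t)|^2 \;\big|\; x_t\,\big] \;\le\; \sigma_{0,i}^2 + \sigma_{1,i}^2\,|\nabla_i f(x_t)|^2,$$

so the noise may grow affinely with the gradient itself; bounded-variance noise is the special case $\sigma_{1,i} = 0$.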
no code implementations • 26 Mar 2023 • Marcel Reimann, Jungeun Won, Hiroyuki Takahashi, Antonio Yaghy, Yunchan Hwang, Stefan Ploner, Junhong Lin, Jessica Girgis, Kenneth Lam, Siyu Chen, Nadia K. Waheed, Andreas Maier, James G. Fujimoto
Recent advances in optical coherence tomography, such as the development of high-speed ultrahigh-resolution scanners and corresponding signal processing techniques, may reveal new potential biomarkers in retinal diseases.
1 code implementation • 10 Oct 2022 • Junhong Lin, Nanfeng Jiang, Zhentao Zhang, Weiling Chen, Tiesong Zhao
Secondly, we design a Mask Query Transformer (MQFormer) to remove snow under the guidance of the coarse mask, using two parallel encoders and a hybrid decoder to learn rich snow features while keeping the model lightweight.
Ranked #1 on Snow Removal on SRRS
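A heavily simplified, hypothetical PyTorch sketch of the described two-encoder, mask-guided design; module names and layer choices are illustrative and not the paper's architecture:

```python
import torch
import torch.nn as nn

class MQFormerSketch(nn.Module):
    """Hypothetical stand-in: two parallel encoders (image and coarse snow
    mask) feed a hybrid decoder that predicts a snow residual."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.mask_encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, snowy: torch.Tensor, coarse_mask: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.image_encoder(snowy),
                           self.mask_encoder(coarse_mask)], dim=1)
        return snowy - self.decoder(feats)  # subtract the predicted snow residual
```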
no code implementations • 5 Nov 2018 • Junhong Lin, Volkan Cevher
We propose and study kernel conjugate gradient methods (KCGM) with random projections for least-squares regression over a separable Hilbert space.
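A minimal sketch, assuming Nyström-style column subsampling as the random projection (the paper treats more general projections) and early stopping via the CG iteration count:

```python
import numpy as np

def kcg_random_projection(K, y, m, iters, seed=0):
    """Minimal sketch: conjugate gradient for kernel least squares on a
    randomly subsampled column space; the iteration count acts as the
    implicit regularization parameter."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # random projection via columns
    Km = K[:, idx]                               # n x m projected system
    A, b = Km.T @ Km, Km.T @ y                   # m x m normal equations
    beta = np.zeros(m)
    r = b - A @ beta
    p = r.copy()
    for _ in range(iters):                       # plain CG iterations
        Ap = A @ p
        step = (r @ r) / (p @ Ap)
        beta += step * p
        r_new = r - step * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return idx, beta                             # predictor: k(x, X[idx]) @ beta
```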
no code implementations • ICML 2018 • Junhong Lin, Volkan Cevher
We study generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS).
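The canonical instance of such a distributed scheme is divide-and-conquer KRR: partition the data, solve locally, and average the local predictors. A minimal sketch, not necessarily the exact estimators analyzed:

```python
import numpy as np

def distributed_krr(X, y, n_splits, lam, kernel):
    """Minimal sketch of divide-and-conquer kernel ridge regression:
    solve KRR independently on disjoint data splits, then average
    the local predictors on test points."""
    parts = np.array_split(np.arange(len(y)), n_splits)
    local = []
    for idx in parts:
        K = kernel(X[idx], X[idx])
        alpha = np.linalg.solve(K + lam * len(idx) * np.eye(len(idx)), y[idx])
        local.append((idx, alpha))

    def predict(X_test):
        preds = [kernel(X_test, X[idx]) @ alpha for idx, alpha in local]
        return np.mean(preds, axis=0)            # uniform averaging

    return predict
```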
no code implementations • ICML 2018 • Junhong Lin, Volkan Cevher
We investigate regularized algorithms combined with projection for the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space.
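A minimal sketch of one such combination, plain Nyström KRR, where uniform column subsampling plays the role of the projection (`nystrom_krr` is a hypothetical helper name):

```python
import numpy as np

def nystrom_krr(X, y, m, lam, kernel, seed=0):
    """Minimal sketch of ridge regularization combined with a random
    projection: Nystrom KRR with m uniformly subsampled columns."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.choice(n, size=m, replace=False)
    Knm = kernel(X, X[idx])                       # n x m cross-kernel
    Kmm = kernel(X[idx], X[idx])                  # m x m landmark kernel
    # Solve min_beta ||Knm beta - y||^2 + n * lam * beta^T Kmm beta
    beta = np.linalg.solve(Knm.T @ Knm + n * lam * Kmm, Knm.T @ y)
    return idx, beta                              # f(x) = kernel(x, X[idx]) @ beta
```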
no code implementations • 22 Jan 2018 • Junhong Lin, Volkan Cevher
We then extend our results to spectral-regularization algorithms (SRA), including kernel ridge regression (KRR), kernel principal component analysis, and gradient methods.
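For context, spectral-regularization algorithms are commonly written in a unified filtering form: with kernel matrix $K$ and a filter function $\phi_\lambda$ applied to its spectrum, the coefficient vector is

$$\hat{\alpha} \;=\; \frac{1}{n}\,\phi_\lambda\!\Big(\tfrac{K}{n}\Big)\,\mathbf{y}, \qquad \phi_\lambda(\sigma) = \frac{1}{\sigma + \lambda} \ \text{for KRR},$$

while truncating the spectrum at $\lambda$ recovers kernel PCA projection and polynomial filters recover gradient methods.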
no code implementations • 20 Jan 2018 • Junhong Lin, Alessandro Rudi, Lorenzo Rosasco, Volkan Cevher
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space.
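The standard formulation of this setting is expected-risk minimization with the square loss: for a probability measure $\rho$ on $H \times \mathbb{R}$,

$$\min_{\omega \in H}\ \mathcal{E}(\omega), \qquad \mathcal{E}(\omega) = \int_{H \times \mathbb{R}} \big(\langle \omega, x \rangle_H - y\big)^2 \, d\rho(x, y),$$

which recovers kernel regression when $x$ is the feature map of a reproducing kernel.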
no code implementations • 21 Oct 2017 • Junhong Lin, Lorenzo Rosasco
In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches.
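A minimal sketch under simplifying assumptions (uniform landmark sampling, constant step size, square loss):

```python
import numpy as np

def sgd_nystrom(X, y, m, step, epochs, batch, kernel, seed=0):
    """Minimal sketch: multi-pass mini-batch SGD on Nystrom features."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.choice(n, size=m, replace=False)      # Nystrom landmark points
    Kmm = kernel(X[idx], X[idx])
    evals, V = np.linalg.eigh(Kmm)
    inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ V.T
    Z = kernel(X, X[idx]) @ inv_sqrt                # n x m feature matrix
    w = np.zeros(m)
    for _ in range(epochs):                         # multiple passes over the data
        for b in np.array_split(rng.permutation(n), max(n // batch, 1)):
            w -= step * Z[b].T @ (Z[b] @ w - y[b]) / len(b)   # square-loss gradient
    return idx, inv_sqrt, w                         # f(x) = kernel(x, X[idx]) @ inv_sqrt @ w
```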
no code implementations • 3 Jul 2017 • Junhong Lin, Lorenzo Rosasco
In this paper, we provide an in-depth theoretical analysis of different variants of doubly stochastic learning algorithms in the setting of nonparametric regression in a reproducing kernel Hilbert space with the square loss.
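A minimal sketch in the spirit of doubly stochastic gradients for a Gaussian kernel, where each iteration draws both a random data point and a fresh random Fourier feature (the analyzed variants, step sizes, and averaging may differ):

```python
import numpy as np

def doubly_stochastic_sgd(X, y, gamma, step, iters, seed=0):
    """Minimal sketch: each iteration samples a data point AND a fresh
    random Fourier feature, storing one coefficient per iteration."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Ws, bs, coefs = [], [], []

    def predict(x):
        if not Ws:
            return 0.0
        W, b, c = np.array(Ws), np.array(bs), np.array(coefs)
        return np.sqrt(2.0) * np.cos(W @ x + b) @ c  # O(t) per evaluation

    for t in range(iters):
        i = rng.integers(n)                          # random data point
        err = predict(X[i]) - y[i]                   # current residual
        w = rng.normal(scale=np.sqrt(2 * gamma), size=d)  # fresh RFF frequency
        b = rng.uniform(0, 2 * np.pi)
        Ws.append(w); bs.append(b)
        coefs.append(-step * err * np.sqrt(2.0) * np.cos(w @ X[i] + b))
    return predict
```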
no code implementations • NeurIPS 2016 • Junhong Lin, Lorenzo Rosasco
We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed.
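In the least-squares setting considered, the recursion takes the form (mini-batch $B_t$ of size $b$, step size $\eta_t$):

$$\omega_{t+1} \;=\; \omega_t \;-\; \frac{\eta_t}{b} \sum_{i \in B_t} \big(\langle \omega_t, x_i \rangle - y_i\big)\, x_i,$$

with multiple passes corresponding to revisiting the same $n$ samples over many iterations; step size, mini-batch size, and number of passes jointly play the role of the regularization.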
no code implementations • 28 May 2016 • Junhong Lin, Lorenzo Rosasco
As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).
1 code implementation • 26 May 2016 • Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco
We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions.
no code implementations • 31 Mar 2015 • Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou
We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method.
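A minimal sketch of the idea under an illustrative choice of convex loss (hinge): run subgradient descent and treat the number of iterations as the regularization parameter, e.g. selected on a validation set.

```python
import numpy as np

def subgradient_early_stop(X, y, step0, iters):
    """Minimal sketch: subgradient descent on the (nonsmooth) hinge loss;
    early stopping at some t* acts as the regularization."""
    n, d = X.shape
    w = np.zeros(d)
    path = []
    for t in range(1, iters + 1):
        margins = y * (X @ w)
        g = -(X * y[:, None])[margins < 1].sum(axis=0) / n  # hinge subgradient
        w = w - (step0 / np.sqrt(t)) * g                    # decaying step size
        path.append(w.copy())            # early stopping picks t* by validation
    return path
```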