1 code implementation • 12 Mar 2024 • Peiyuan Liu, Hang Guo, Tao Dai, Naiqi Li, Jigang Bao, Xudong Ren, Yong Jiang, Shu-Tao Xia
Recently, with the surge of Large Language Models (LLMs), several works have attempted to introduce LLMs into time series forecasting.
1 code implementation • 23 Feb 2024 • Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, Shu-Tao Xia
In this way, our MambaIR takes advantage of the local pixel similarity and reduces the channel redundancy.
1 code implementation • 12 Dec 2023 • Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Shu-Tao Xia, Zexuan Zhu
Recently, Parameter Efficient Transfer Learning (PETL) offers an efficient alternative solution to full fine-tuning, yet still faces great challenges for pre-trained image restoration models, due to the diversity of different degradations.
1 code implementation • 5 Aug 2023 • Hang Guo, Tao Dai, Mingyan Zhu, Guanghao Meng, Bin Chen, Zhi Wang, Shu-Tao Xia
Current solutions for low-resolution text recognition (LTR) typically rely on a two-stage pipeline, with super-resolution as the first stage followed by recognition as the second.
1 code implementation • 19 Jul 2023 • Hang Guo, Tao Dai, Guanghao Meng, Shu-Tao Xia
Scene text image super-resolution (STISR), aiming to improve image quality while boosting downstream scene text recognition accuracy, has recently achieved great success.
1 code implementation • 22 Sep 2022 • Hang Guo, Zhengxi Hu, Jingtai Liu
Mutual gaze, i.e., people looking at each other, is ubiquitous in our daily interactions, and detecting it is of great significance for understanding human social scenes.
no code implementations • 29 Sep 2021 • Lu Chen, Renjie Chen, Hang Guo, Yuan Luo, Quanshi Zhang, Yisen Wang
Adversarial examples have attracted significant attention over the years, yet a sufficient understanding is still lacking, especially of their performance in combination with adversarial training.
no code implementations • 11 Dec 2019 • Hang Guo, Xun Fan, Anh Cao, Geoff Outhred, John Heidemann
We show that our models detect nearly all malicious flows for 2 of the 4 cloud IPs under attack (at least 99.99%) and detect most malicious flows (94.75% and 91.37%) for the remaining 2 IPs.