1 code implementation • 16 Apr 2024 • Jiapeng Su, Qi Fan, Guangming Lu, Fanglin Chen, Wenjie Pei
Instead, our key idea is to employ a small adapter to rectify diverse target domain styles to the source domain.
no code implementations • 20 Mar 2024 • Xinyu Geng, JiaMing Wang, Jiawei Gong, Yuerong Xue, Jun Xu, Fanglin Chen, Xiaolin Huang
Redundancy is a persistent challenge in Capsule Networks (CapsNet), leading to high computational costs and parameter counts.
1 code implementation • 16 Dec 2023 • Wenjie Pei, Tongqi Xia, Fanglin Chen, Jinsong Li, Jiandong Tian, Guangming Lu
Typical methods for visual prompt tuning follow the sequential modeling paradigm stemming from NLP: an input image is represented as a flattened sequence of token embeddings, and a set of unordered parameterized tokens, prefixed to the sequence representation, is then learned as the visual prompts for task adaptation of large vision models.
1 code implementation • Neural Computing and Applications 2023 • Le Zhang, Yao Lu, Jinxing Li, Fanglin Chen, Guangming Lu, David Zhang
Image hiding safeguards information security in multimedia communication.
1 code implementation • 19 Dec 2022 • Feng Lin, Wenze Hu, YaoWei Wang, Yonghong Tian, Guangming Lu, Fanglin Chen, Yong Xu, Xiaoyu Wang
In this study, our focus is on a specific challenge: the large-scale, multi-domain universal object detection problem, which contributes to the broader goal of achieving a universal vision system.
no code implementations • 27 Nov 2022 • Jiatong Zhang, Zengwei Yao, Fanglin Chen, Guangming Lu, Wenjie Pei
Second, instead of only performing local self-attention within local windows as Swin Transformer does, the proposed SALG performs both 1) local intra-region self-attention for learning fine-grained features within each region and 2) global inter-region feature propagation for modeling global dependencies among all regions.
Ranked #858 on Image Classification on oi
no code implementations • 25 Jul 2022 • Wenjie Pei, Shuang Wu, Dianwen Mei, Fanglin Chen, Jiandong Tian, Guangming Lu
In this work we design a novel knowledge distillation framework to guide the learning of the object detector and thereby restrain the overfitting in both the pre-training stage on base classes and the fine-tuning stage on novel classes.
no code implementations • 25 Jul 2022 • Fengjun Li, Xin Feng, Fanglin Chen, Guangming Lu, Wenjie Pei
Real-world degradations can lie beyond the simulation scope of handcrafted degradations; such cases are referred to as novel degradations.
1 code implementation • 22 Jul 2022 • Shuang Wu, Wenjie Pei, Dianwen Mei, Fanglin Chen, Jiandong Tian, Guangming Lu
Most existing methods for few-shot object detection follow the fine-tuning paradigm, which potentially assumes that the class-agnostic generalizable knowledge can be learned and transferred implicitly from base classes with abundant samples to novel classes with limited samples via such a two-stage training strategy.
1 code implementation • 16 Jul 2022 • Xin Feng, Haobo Ji, Wenjie Pei, Fanglin Chen, Guangming Lu
While the research on image background restoration from regular size of degraded images has achieved remarkable progress, restoring ultra high-resolution (e.g., 4K) images remains an extremely challenging task due to the explosion of computational complexity and memory usage, as well as the deficiency of annotated data.
no code implementations • 16 Jul 2022 • Fanglin Chen, Xiao Liu, Bo Tang, Feiyu Xiong, Serim Hwang, Guomian Zhuang
During deployment, we combine the offline RL model with the LP model to generate a robust policy under the budget constraints.
no code implementations • 25 Apr 2022 • Lai Wei, Qinyang Li, Yuqi Song, Stanislav Stefanov, Edirisuriya M. D. Siriwardane, Fanglin Chen, Jianjun Hu
Here we propose BLMM Crystal Transformer, a neural-network-based probabilistic generative model for generative and tinkering design of inorganic materials.
no code implementations • 17 Nov 2021 • Zebin Lin, Wenjie Pei, Fanglin Chen, David Zhang, Guangming Lu
Instead of learning each of these diverse pedestrian appearance features individually, as most existing methods do, we propose to perform contrastive learning to guide the feature learning so that the semantic distance between pedestrians with different appearances in the learned feature space is minimized, eliminating the appearance diversities, while the distance between pedestrians and background is maximized.
Ranked #1 on Pedestrian Detection on TJU-Ped-campus
no code implementations • 10 Oct 2021 • Zengwei Yao, Wenjie Pei, Fanglin Chen, Guangming Lu, David Zhang
Existing methods for speech separation either transform the speech signals into the frequency domain to perform separation or seek to learn a separable embedding space by constructing a latent domain based on convolutional filters.
Ranked #7 on Speech Separation on WHAMR!
no code implementations • 1 Oct 2021 • Xin Feng, Wenjie Pei, Fengjun Li, Fanglin Chen, David Zhang, Guangming Lu
Most existing methods for image inpainting focus on learning the intra-image priors from the known regions of the current input image to infer the content of the corrupted regions in the same image.
1 code implementation • 9 Oct 2020 • Xin Feng, Wenjie Pei, Zihui Jia, Fanglin Chen, David Zhang, Guangming Lu
In this work we present the Deep-Masking Generative Network (DMGN), which is a unified framework for background restoration from the superimposed images and is able to cope with different types of noise.
no code implementations • 21 May 2020 • Fanglin Chen, Xiao Liu, Davide Proserpio, Isamar Troncoso, Feiyu Xiong
We show that, compared with state-of-the-art models, our approach is faster, and can produce more accurate demand forecasts and price elasticities.