1 code implementation • 21 Sep 2024 • Haoran Zhang, Shuanghao Bai, Wanqi Zhou, Jingwen Fu, Badong Chen
In this work, we propose the Prompt-Driven Text Adapter (PromptTA) method, which is designed to better capture the distribution of style features and employs resampling to ensure thorough coverage of domain knowledge.
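As a rough, hypothetical sketch (not the authors' implementation), one way to capture a style-feature distribution and resample from it is to keep a learnable per-domain Gaussian over style vectors; the names StyleBank and resample below are illustrative assumptions.

import torch
import torch.nn as nn

class StyleBank(nn.Module):
    """Hypothetical sketch: model style features per source domain as a
    Gaussian (mean, log-variance) and resample new style vectors from it."""

    def __init__(self, num_domains: int, feat_dim: int):
        super().__init__()
        # Learnable per-domain mean and log-variance of style features.
        self.mu = nn.Parameter(torch.zeros(num_domains, feat_dim))
        self.log_var = nn.Parameter(torch.zeros(num_domains, feat_dim))

    def resample(self, domain_idx: torch.Tensor) -> torch.Tensor:
        # Draw style features with the reparameterization trick so the
        # distribution parameters stay differentiable.
        mu = self.mu[domain_idx]
        std = (0.5 * self.log_var[domain_idx]).exp()
        return mu + std * torch.randn_like(std)

# Usage: resample one fresh style vector per domain id in a batch.
bank = StyleBank(num_domains=4, feat_dim=512)
styles = bank.resample(torch.tensor([0, 1, 2, 3]))  # shape (4, 512)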
1 code implementation • 18 Jul 2024 • YuHan Liu, Qianxin Huang, Siqi Hui, Jingwen Fu, Sanping Zhou, Kangyi Wu, Pengna Li, Jinjun Wang
In our work, we seek another way to use semantic information, namely a semantic-aware feature representation learning framework. Based on this, we propose SRMatcher, a new detector-free feature matching method, which encourages the network to learn integrated semantic feature representations. Specifically, to capture precise and rich semantics, we leverage the capabilities of recently popularized vision foundation models (VFMs) trained on extensive datasets.
no code implementations • 20 May 2024 • Jingwen Fu, Zhizheng Zhang, Yan Lu, Nanning Zheng
Compositional Generalization (CG) embodies the ability to comprehend novel combinations of familiar concepts, representing a significant cognitive leap in human intellectual advancement.
no code implementations • 27 Oct 2023 • Bohan Wang, Jingwen Fu, Huishuai Zhang, Nanning Zheng, Wei Chen
Recently, Arjevani et al. [1] established a lower bound on the iteration complexity of first-order optimization under an $L$-smooth condition and a bounded noise variance assumption.
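For reference, the two assumptions in their standard form (textbook definitions, not quoted from the paper) are $L$-smoothness of the objective $f$ and bounded variance of the stochastic gradient $g$:
$$\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \forall x, y, \qquad \mathbb{E}\big[\|g(x) - \nabla f(x)\|^2\big] \le \sigma^2 .$$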
no code implementations • 12 Sep 2023 • Jingwen Fu, Tao Yang, Yuwang Wang, Yan Lu, Nanning Zheng
To study the mechanism behind the learning plateaus, we conceptually separate a component within the model's internal representation that is exclusively affected by the model's weights.
no code implementations • 10 Sep 2023 • Jingwen Fu, Nanning Zheng
This paper explores the generalization characteristics of iterative learning algorithms with bounded updates for non-convex loss functions, employing information-theoretic techniques.
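As background, a standard information-theoretic generalization bound (in the style of Xu and Raginsky, not necessarily the bound derived in this paper) states that if the loss is $\sigma$-sub-Gaussian, an algorithm that outputs hypothesis $W$ from a training sample $S$ of size $n$ satisfies
$$\big|\mathbb{E}\big[\mathcal{L}(W) - \hat{\mathcal{L}}_S(W)\big]\big| \le \sqrt{\tfrac{2\sigma^2}{n}\, I(W; S)},$$
where $I(W;S)$ is the mutual information between the learned weights and the training data.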
no code implementations • 15 Jun 2023 • Jingwen Fu, Bohan Wang, Huishuai Zhang, Zhizheng Zhang, Wei Chen, Nanning Zheng
In the comparison of SGDM and SGD with the same effective learning rate and the same batch size, we observe a consistent pattern: when $\eta_{ef}$ is small, SGDM and SGD experience almost the same empirical training losses; when $\eta_{ef}$ surpasses a certain threshold, SGDM begins to perform better than SGD.
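Here $\eta_{ef}$ commonly denotes the momentum-rescaled step size (the paper's exact definition may differ): for SGDM with learning rate $\eta$ and momentum coefficient $\beta$,
$$\eta_{ef} = \frac{\eta}{1-\beta},$$
which reduces to the plain SGD learning rate when $\beta = 0$.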
no code implementations • CVPR 2023 • Yanqing Shen, Sanping Zhou, Jingwen Fu, Ruotong Wang, Shitao Chen, Nanning Zheng
In this paper, we propose StructVPR, a novel training architecture for VPR, to enhance structural knowledge in RGB global features and thus improve feature stability in a constantly changing environment.
no code implementations • 25 May 2021 • Jingwen Fu, Xiaoyi Zhang, Yuwang Wang, Wenjun Zeng, Sam Yang, Grayson Hilliard
A dataset, RICO-PW, of screenshots with Pixel-Words annotations is built on top of the public RICO dataset and will be released to help address the lack of high-quality training data in this area.
no code implementations • 7 Nov 2019 • Jingwen Fu, Licheng Zong, Yinbing Li, Ke Li, Bingqian Yang, Xibei Liu
Object detection for robot guidance is a crucial task for autonomous robots and has attracted extensive attention from researchers.
no code implementations • 7 Sep 2019 • Jingwen Fu, Xiaoyan Zhu, Yingbin Li
Automatic defect recognition is one of the research hotspots in steel production, but most current methods extract features manually and use machine learning classifiers to recognize defects; as a result, they cannot handle situations where little training data is available and remain confined to a specific scene.