1 code implementation • 23 Feb 2025 • Feng Lu, Tong Jin, Xiangyuan Lan, Lijun Zhang, Yunpeng Liu, YaoWei Wang, Chun Yuan
In our previous work, we proposed a novel method, SelaVPR, to realize seamless adaptation of foundation models to VPR.
no code implementations • 13 Feb 2025 • Yunpeng Liu, Matthew Niedoba, William Harvey, Adam Scibior, Berend Zwartsenberg, Frank Wood
Recent developments in diffusion-based scenario generation focus on creating diverse and realistic traffic scenarios by jointly modelling the motion of all the agents in the scene.
1 code implementation • 15 Dec 2024 • Chuang Yu, Yunpeng Liu, Jinmiao Zhao, Xiangyu Yue
To the best of our knowledge, this is the first time that hard negative sample mining has been applied to metric networks, and it brings significant performance gains.
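As a rough illustration of the hard-negative-sample mining described above, the following Python sketch mines the hardest positive and negative for each anchor within a batch and feeds them to a triplet-style loss; the mining rule, margin, and loss form are illustrative assumptions rather than the paper's exact procedure.

    import torch
    import torch.nn.functional as F

    def hard_negative_triplet_loss(embeddings, labels, margin=0.3):
        # embeddings: (N, D) feature vectors; labels: (N,) integer class ids.
        dists = torch.cdist(embeddings, embeddings)            # (N, N) pairwise distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)      # mask of same-class pairs
        eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

        # Hardest positive: the farthest sample with the same label (excluding self).
        pos_d = dists.masked_fill(~same | eye, float('-inf')).max(dim=1).values
        # Hardest negative: the closest sample with a different label.
        neg_d = dists.masked_fill(same, float('inf')).min(dim=1).values

        return F.relu(pos_d - neg_d + margin).mean()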
1 code implementation • 15 Dec 2024 • Chuang Yu, Jinmiao Zhao, Yunpeng Liu, Sicheng Zhao, Xiangyu Yue
Specifically, inspired by how organisms gradually adapt to their environment and continuously accumulate knowledge, we propose an innovative progressive active learning idea: the network progressively and actively recognizes and learns harder samples to achieve continuous performance improvement.
no code implementations • 9 Dec 2024 • Yunpeng Liu, Boxiao Liu, Yi Zhang, Xingzhong Hou, Guanglu Song, Yu Liu, Haihang You
Specifically, we regard the distillation process at each timestep as a curriculum and introduce a metric based on Peak Signal-to-Noise Ratio (PSNR) to quantify its learning complexity; we then keep the learning complexity consistent across timesteps by having the teacher model iterate more steps when the noise intensity is low.
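The sketch below illustrates this PSNR-based curriculum idea in a hedged form: the PSNR between a clean image and its noised version serves as a learning-complexity signal, and the teacher is allotted more iterations when the noise intensity is low. The forward-noising formula follows the standard DDPM parameterization, and the mapping from PSNR to step count is an assumption.

    import torch

    def psnr(x, y, max_val=1.0):
        mse = torch.mean((x - y) ** 2)
        return 10.0 * torch.log10(max_val ** 2 / mse)

    def teacher_steps_for_timestep(x0, alpha_bar_t, base_steps=1, max_steps=8):
        # Standard DDPM forward noising of the clean image x0 at this timestep.
        a = torch.as_tensor(alpha_bar_t, dtype=x0.dtype)
        noise = torch.randn_like(x0)
        x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise
        p = psnr(x0.clamp(0, 1), x_t.clamp(0, 1))
        # Higher PSNR (low noise) -> allot more teacher iterations, within a budget.
        scale = torch.clamp((p - 10.0) / 10.0, 0.0, 1.0)  # assumed normalization of PSNR
        return int(base_steps + scale * (max_steps - base_steps))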
no code implementations • 1 Dec 2024 • Tong Jin, Feng Lu, Shuyu Hu, Chun Yuan, Yunpeng Liu
To obtain a global representation of each place image, most approaches focus on aggregating deep features extracted from a backbone using current prominent architectures (e.g., CNNs, MLPs, pooling layers, and transformer encoders), giving little attention to the transformer decoder.
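The following sketch shows one plausible way to aggregate backbone feature tokens into a global place descriptor with a transformer decoder, where a few learned queries cross-attend to the feature map; the dimensions, query count, and normalization are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DecoderAggregator(nn.Module):
        def __init__(self, dim=256, num_queries=4, nhead=8, num_layers=2):
            super().__init__()
            self.queries = nn.Parameter(torch.randn(num_queries, dim))
            layer = nn.TransformerDecoderLayer(d_model=dim, nhead=nhead, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

        def forward(self, tokens):                      # tokens: (B, HW, dim) backbone features
            q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
            out = self.decoder(q, tokens)               # queries cross-attend to feature tokens
            return F.normalize(out.flatten(1), dim=-1)  # (B, num_queries * dim) descriptor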
no code implementations • 6 Sep 2024 • Xiyuan Zhao, Xinhao Deng, Qi Li, Yunpeng Liu, Zhuotao Liu, Kun Sun, Ke Xu
In particular, Oscar combines proxy-based and sample-based metric learning losses to extract webpage features from obfuscated traffic and identify multiple webpages.
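A hedged sketch of combining a proxy-based and a sample-based metric learning loss is given below; the specific loss forms (a proxy-NCA-style term and a simple contrastive term) and their weighting are illustrative choices, not necessarily those used by Oscar.

    import torch
    import torch.nn.functional as F

    def proxy_nca_loss(embeddings, labels, proxies):
        # proxies: (C, D) learnable class proxies; negative distances act as logits.
        logits = -torch.cdist(F.normalize(embeddings), F.normalize(proxies))
        return F.cross_entropy(logits, labels)

    def sample_contrastive_loss(embeddings, labels, margin=0.5):
        d = torch.cdist(embeddings, embeddings)
        same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
        eye = torch.eye(len(labels), device=labels.device)
        pos = (same - eye).clamp(min=0)                  # same-class pairs, excluding self
        neg = 1.0 - same                                 # different-class pairs
        return (pos * d.pow(2) + neg * F.relu(margin - d).pow(2)).mean()

    def combined_metric_loss(embeddings, labels, proxies, alpha=1.0, beta=1.0):
        return alpha * proxy_nca_loss(embeddings, labels, proxies) + \
               beta * sample_contrastive_loss(embeddings, labels)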
no code implementations • 5 Aug 2024 • Chuang Yu, Yunpeng Liu, Jinmiao Zhao, Zelin Shi
Specifically, to ensure both a lightweight design and robustness, on the one hand, we construct a lightweight feature extraction attention (LFEA) module, which fully extracts target features and strengthens information interaction across channels.
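As a hedged illustration of lightweight cross-channel interaction, the sketch below applies an ECA-style channel attention (global pooling followed by a 1-D convolution over channels); it is a generic stand-in, not the paper's LFEA module.

    import torch
    import torch.nn as nn

    class LightChannelAttention(nn.Module):
        def __init__(self, kernel_size=3):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

        def forward(self, x):                             # x: (B, C, H, W)
            w = self.pool(x).squeeze(-1).transpose(1, 2)  # (B, 1, C): pooled channel statistics
            w = torch.sigmoid(self.conv(w)).transpose(1, 2).unsqueeze(-1)
            return x * w                                  # reweight channels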
no code implementations • 5 Aug 2024 • Jinmiao Zhao, Zelin Shi, Chuang Yu, Yunpeng Liu
In addition, to further improve performance and explore the characteristics of this task, on the one hand, we construct a multi-stage loss and find that it is helpful for fine-grained detection.
no code implementations • 29 Jul 2024 • Jinmiao Zhao, Zelin Shi, Chuang Yu, Yunpeng Liu
Therefore, we propose a refined infrared small target detection scheme based on an adjustable sensitivity (AS) strategy and multi-scale fusion.
no code implementations • 28 Jul 2024 • Qiao Li, Kanlun Tan, Qiao Liu, Di Yuan, Xin Li, Yunpeng Liu
However, tracking methods trained on RGB datasets suffer a significant performance drop on TIR data due to the domain shift issue.
no code implementations • 4 Jun 2024 • Jinmiao Zhao, Zelin Shi, Chuang Yu, Yunpeng Liu
Specifically, an innovative multi-directional feature awareness (MDFA) module is constructed, which fully utilizes the prior knowledge of targets and emphasizes the focus on high-frequency directional features.
no code implementations • 7 May 2024 • Jonathan Wilder Lavington, Ke Zhang, Vasileios Lioutas, Matthew Niedoba, Yunpeng Liu, Dylan Green, Saeid Naderiparizi, Xiaoxuan Liang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Moreover, because of the high variability between the problems presented by different autonomous systems, these simulators need to be easy to use and easy to modify.
no code implementations • 30 Apr 2024 • Dylan Green, William Harvey, Saeid Naderiparizi, Matthew Niedoba, Yunpeng Liu, Xiaoxuan Liang, Jonathan Lavington, Ke Zhang, Vasileios Lioutas, Setareh Dabiri, Adam Scibior, Berend Zwartsenberg, Frank Wood
Current state-of-the-art methods for video inpainting typically rely on optical flow or attention-based approaches to inpaint masked regions by propagating visual information across frames.
1 code implementation • 18 Mar 2024 • Chuang Yu, Yunpeng Liu, Jinmiao Zhao, Dou Quan, Zelin Shi, Xiangyu Yue
Therefore, we propose an innovative relational representation learning idea that simultaneously focuses on sufficiently mining the intrinsic features of individual image patches and the relations between image patch features.
1 code implementation • 14 Feb 2024 • Jason Yoo, Yunpeng Liu, Frank Wood, Geoff Pleiss
Our solution, Layerwise Proximal Replay (LPR), balances learning from new and replay data while only allowing gradual changes in the hidden activations of past data.
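A hedged approximation of this idea is sketched below: drift in the hidden activations of replay data is limited by an explicit proximal penalty against stored activations. This is only a simplified stand-in for LPR's actual optimization, and the `model.features` / `model.head` attributes are assumptions about the model structure.

    import torch
    import torch.nn.functional as F

    def proximal_replay_loss(model, new_batch, replay_batch, old_replay_hidden, lam=1.0):
        x_new, y_new = new_batch
        x_rep, y_rep = replay_batch

        logits_new = model(x_new)
        hidden_rep = model.features(x_rep)        # assumed feature-extractor attribute
        logits_rep = model.head(hidden_rep)       # assumed classifier-head attribute

        # Standard replay: fit both new and replayed data.
        task = F.cross_entropy(logits_new, y_new) + F.cross_entropy(logits_rep, y_rep)
        # Proximal term: keep replay activations close to their previous values.
        proximal = F.mse_loss(hidden_rep, old_replay_hidden.detach())
        return task + lam * proximal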
1 code implementation • 12 Feb 2024 • Matthew Niedoba, Dylan Green, Saeid Naderiparizi, Vasileios Lioutas, Jonathan Wilder Lavington, Xiaoxuan Liang, Yunpeng Liu, Ke Zhang, Setareh Dabiri, Adam Ścibior, Berend Zwartsenberg, Frank Wood
Score function estimation is the cornerstone of both training and sampling from diffusion generative models.
no code implementations • 11 Jan 2024 • Tianyu Cui, Yanling Wang, Chuanpu Fu, Yong Xiao, Sijia Li, Xinhao Deng, Yunpeng Liu, Qinglin Zhang, Ziyi Qiu, Peiyang Li, Zhixing Tan, Junwu Xiong, Xinyu Kong, Zujie Wen, Ke Xu, Qi Li
Based on this, we propose a comprehensive taxonomy, which systematically analyzes potential risks associated with each module of an LLM system and discusses the corresponding mitigation strategies.
no code implementations • 17 Oct 2023 • Jiawang Dan, Ruofan Wu, Yunpeng Liu, Baokun Wang, Changhua Meng, Tengfei Liu, Tianyi Zhang, Ningtao Wang, Xing Fu, Qi Li, Weiqiang Wang
Recently, the idea of designing neural models on graphs using the theory of graph kernels has emerged as a more transparent as well as sometimes more expressive alternative to MPNNs known as kernel graph neural networks (KGNNs).
1 code implementation • 24 May 2023 • Setareh Dabiri, Vasileios Lioutas, Berend Zwartsenberg, Yunpeng Liu, Matthew Niedoba, Xiaoxuan Liang, Dylan Green, Justice Sefas, Jonathan Wilder Lavington, Frank Wood, Adam Scibior
When training object detection models on synthetic data, it is important to make the distribution of synthetic data as close as possible to the distribution of real data.
no code implementations • 19 May 2023 • Yunpeng Liu, Vasileios Lioutas, Jonathan Wilder Lavington, Matthew Niedoba, Justice Sefas, Setareh Dabiri, Dylan Green, Xiaoxuan Liang, Berend Zwartsenberg, Adam Ścibior, Frank Wood
The development of algorithms that learn multi-agent behavioral models using human demonstrations has led to increasingly realistic simulations in the field of autonomous driving.
1 code implementation • 27 Sep 2022 • Haoning Lin, Changhao Sun, Yunpeng Liu
To address these problems, this paper proposes OBBStacking, an ensemble method that is compatible with OBBs and combines detection results in a learned fashion.
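The stacking idea can be sketched in a hedged form as follows: confidence scores of matched oriented-box detections from several detectors are stacked into a feature vector, and a small learned combiner (here, a scikit-learn logistic regression) outputs the fused confidence. The matching step and any fusion of box coordinates are omitted and are assumptions about the pipeline, not the paper's exact method.

    from sklearn.linear_model import LogisticRegression

    def fit_stacker(score_matrix, is_true_positive):
        # score_matrix: (N, M) scores from M detectors for N matched detections
        # (0 where a detector missed); is_true_positive: (N,) binary labels.
        stacker = LogisticRegression()
        stacker.fit(score_matrix, is_true_positive)
        return stacker

    def fuse_scores(stacker, score_matrix):
        # Probability that a matched detection group is a true object = fused confidence.
        return stacker.predict_proba(score_matrix)[:, 1]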
no code implementations • 9 Aug 2022 • Yunpeng Liu, Jonathan Wilder Lavington, Adam Scibior, Frank Wood
We develop a generic mechanism for generating vehicle-type specific sequences of waypoints from a probabilistic foundation model of driving behavior.
no code implementations • 17 Jun 2022 • Berend Zwartsenberg, Adam Ścibior, Matthew Niedoba, Vasileios Lioutas, Yunpeng Liu, Justice Sefas, Setareh Dabiri, Jonathan Wilder Lavington, Trevor Campbell, Frank Wood
We present a novel, conditional generative probabilistic model of set-valued data with a tractable log density.
no code implementations • 30 May 2022 • Vasileios Lioutas, Jonathan Wilder Lavington, Justice Sefas, Matthew Niedoba, Yunpeng Liu, Berend Zwartsenberg, Setareh Dabiri, Frank Wood, Adam Scibior
We introduce CriticSMC, a new algorithm for planning as inference built from a composition of sequential Monte Carlo with learned Soft-Q function heuristic factors.
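The sketch below illustrates the general shape of planning as inference with sequential Monte Carlo, where particle weights are tilted by a learned soft-Q heuristic before resampling; the proposal, environment step, prior, and soft-Q functions are placeholders, and the update is a simplification rather than the CriticSMC algorithm itself.

    import torch

    def smc_plan_step(states, log_weights, propose_action, step_env, soft_q, log_prior):
        actions = propose_action(states)                  # sample candidate actions per particle
        next_states, log_lik = step_env(states, actions)  # transition plus observation term
        # Heuristic tilt: learned soft-Q estimates act as look-ahead factors in the weights.
        log_weights = log_weights + log_lik + soft_q(states, actions) + log_prior(actions)
        # Multinomial resampling of particles according to the tilted weights.
        probs = torch.softmax(log_weights, dim=0)
        idx = torch.multinomial(probs, num_samples=len(states), replacement=True)
        return next_states[idx], torch.zeros_like(log_weights)  # reset weights after resampling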