no code implementations • 10 Mar 2025 • Sihao Lin, Daqi Liu, Ruochong Fu, Dongrui Liu, Andy Song, Hongwei Xie, Zhihui Li, Bing Wang, Xiaojun Chang
Thus, we adapt the relative depth derived from vision foundation models (VFMs) into metric depth by optimising a scale and an offset under a temporal-consistency objective (realised as novel view synthesis), without access to ground-truth metric depth.
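For illustration, a minimal sketch of the affine (scale and offset) alignment step. The per-pixel pseudo-metric target used here is a hypothetical stand-in; the paper instead derives its supervision from temporal consistency via novel view synthesis.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): align relative depth from a
# VFM to metric scale with a global scale s and offset t, i.e.
# d_metric = s * d_rel + t. The paper optimises s and t through a
# temporal-consistency / novel-view-synthesis objective; here we assume a
# hypothetical per-pixel pseudo-metric signal purely to show the
# closed-form least-squares fit.

def fit_scale_offset(d_rel, d_pseudo, mask=None):
    """Solve min_{s,t} || s * d_rel + t - d_pseudo ||^2 over valid pixels."""
    if mask is None:
        mask = np.isfinite(d_pseudo)
    x = d_rel[mask].ravel()
    y = d_pseudo[mask].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)      # design matrix [d_rel, 1]
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)  # closed-form solution
    return s, t

# Toy usage: a relative depth map and a noisy pseudo-metric signal.
rng = np.random.default_rng(0)
d_rel = rng.uniform(0.1, 1.0, size=(4, 5))
d_pseudo = 8.0 * d_rel + 1.5 + 0.01 * rng.standard_normal((4, 5))
s, t = fit_scale_offset(d_rel, d_pseudo)
print(f"estimated scale={s:.2f}, offset={t:.2f}")  # roughly 8.0 and 1.5
```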
no code implementations • 10 Mar 2025 • Weize Li, Yunhao Du, Qixiang Yin, Zhicheng Zhao, Fei Su, Daqi Liu
Referring Multi-Object Tracking (RMOT) aims to localize target trajectories specified by natural language expressions in videos.
no code implementations • 24 Mar 2024 • Dongrui Liu, Daqi Liu, Xueqian Li, Sihao Lin, Hongwei Xie, Bing Wang, Xiaojun Chang, Lei Chu
Neural Scene Flow Prior (NSFP) and Fast Neural Scene Flow (FNSF) have shown remarkable adaptability in the context of large-scale, out-of-distribution autonomous driving scenes.
no code implementations • CVPR 2024 • Lizhe Liu, Bohua Wang, Hongwei Xie, Daqi Liu, Li Liu, Zhiqiang Tian, Kuiyuan Yang, Bing Wang
Vision-centric 3D environment understanding is both vital and challenging for autonomous driving systems.
1 code implementation • 27 Sep 2022 • Sofia McLeod, Gabriele Meoni, Dario Izzo, Anne Mergy, Daqi Liu, Yasir Latif, Ian Reid, Tat-Jun Chin
This is achieved by estimating divergence (inverse time-to-contact, TTC), which is the rate of radial optic flow, from the event stream generated during landing.
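A minimal sketch of the underlying geometry, assuming hypothetical pre-matched image points rather than the paper's event-based estimator.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): during a ventral landing the
# image of the surface expands radially about the focus of expansion (FOE).
# If matched point positions p1 (at time t1) and p2 (at time t2), expressed
# relative to the FOE, are related by a radial scaling s = ||p2|| / ||p1||,
# then the divergence D ~ (s - 1) / (t2 - t1) and TTC ~ 1 / D. The paper
# estimates divergence directly from the raw event stream; here we assume
# hypothetical pre-matched points purely to show the relationship.

def divergence_from_matches(p1, p2, dt):
    """Estimate divergence (inverse TTC) from radially expanding matches."""
    r1 = np.linalg.norm(p1, axis=1)
    r2 = np.linalg.norm(p2, axis=1)
    scale = np.median(r2 / r1)          # robust estimate of radial expansion
    divergence = (scale - 1.0) / dt     # rate of radial optic flow
    ttc = 1.0 / divergence              # time to contact with the surface
    return divergence, ttc

# Toy usage: points expand by 2% over 0.1 s -> D = 0.2 1/s, TTC = 5 s.
rng = np.random.default_rng(1)
p1 = rng.uniform(-100, 100, size=(50, 2))
p2 = 1.02 * p1
D, ttc = divergence_from_matches(p1, p2, dt=0.1)
print(f"divergence ~ {D:.3f} 1/s, TTC ~ {ttc:.1f} s")
```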
no code implementations • 22 Jun 2022 • Daqi Liu, Miroslaw Bober, Josef Kittler
As a structured prediction task, scene graph generation aims to explicitly model the objects in an input image and their relationships by constructing a visually-grounded scene graph.
no code implementations • 14 May 2022 • Daqi Liu, Miroslaw Bober, Josef Kittler
Scene graph generation is a structured prediction task aiming to explicitly model objects and their relationships via constructing a visually-grounded scene graph for an input image.
no code implementations • 2 Mar 2022 • Daqi Liu, Alvaro Parra, Yasir Latif, Bo Chen, Tat-Jun Chin, Ian Reid
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
no code implementations • 27 Jan 2022 • Daqi Liu, Miroslaw Bober, Josef Kittler
As a structured prediction task, scene graph generation aims to build a visually-grounded scene graph to explicitly model objects and their relationships in an input image.
no code implementations • 10 Dec 2021 • Daqi Liu, Miroslaw Bober, Josef Kittler
Scene graph generation aims to interpret an input image by explicitly modelling the potential objects and their relationships, a task that previous methods predominantly address with message passing neural network models.
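For context, a minimal sketch of the kind of message passing such previous methods rely on, with illustrative feature dimensions, update functions and iteration count (not any specific published model). Node and edge features would typically come from an object detector's region features, and edge_index lists candidate subject-object pairs.

```python
import torch
import torch.nn as nn

# Minimal sketch of message passing over a scene graph: object (node) features
# are iteratively refined with messages aggregated from relationship (edge)
# features, and vice versa. All design choices here are illustrative.

class SceneGraphMessagePassing(nn.Module):
    def __init__(self, dim=256, steps=2):
        super().__init__()
        self.steps = steps
        self.node_update = nn.GRUCell(dim, dim)       # refine object features
        self.edge_update = nn.GRUCell(2 * dim, dim)   # refine relation features
        self.msg = nn.Linear(dim, dim)

    def forward(self, node_feats, edge_feats, edge_index):
        # node_feats: (N, dim), edge_feats: (E, dim)
        # edge_index: (E, 2) long tensor of (subject, object) node indices
        src, dst = edge_index[:, 0], edge_index[:, 1]
        for _ in range(self.steps):
            # each edge receives the concatenated features of its endpoints
            edge_in = torch.cat([node_feats[src], node_feats[dst]], dim=-1)
            edge_feats = self.edge_update(edge_in, edge_feats)
            # each node aggregates messages from its incident edges
            msgs = torch.zeros_like(node_feats)
            msgs.index_add_(0, dst, self.msg(edge_feats))
            node_feats = self.node_update(msgs, node_feats)
        return node_feats, edge_feats
```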
2 code implementations • CVPR 2021 • Daqi Liu, Alvaro Parra, Tat-Jun Chin
The state-of-the-art method of contrast maximisation recovers the motion from a batch of events by maximising the contrast of the image of warped events.
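A minimal sketch of the contrast maximisation principle, using a coarse local grid search purely for illustration rather than the paper's globally optimal solver.

```python
import numpy as np

# Minimal sketch of contrast maximisation: warp each event (x, y, t) back to a
# reference time with a candidate 2D image velocity v, accumulate the warped
# events into an image, and score the candidate by the variance (contrast) of
# that image. Well-aligned (sharp) images have higher contrast.

def image_of_warped_events(events, v, shape):
    """Accumulate events warped by velocity v = (vx, vy) into a count image."""
    x = events[:, 0] - v[0] * events[:, 2]   # x' = x - vx * t
    y = events[:, 1] - v[1] * events[:, 2]   # y' = y - vy * t
    xi = np.clip(np.round(x).astype(int), 0, shape[1] - 1)
    yi = np.clip(np.round(y).astype(int), 0, shape[0] - 1)
    iwe = np.zeros(shape)
    np.add.at(iwe, (yi, xi), 1.0)
    return iwe

def contrast(iwe):
    return np.var(iwe)

def estimate_velocity(events, shape, v_range=np.linspace(-50, 50, 41)):
    """Grid search for the velocity that maximises the contrast."""
    best_v, best_c = (0.0, 0.0), -np.inf
    for vx in v_range:
        for vy in v_range:
            c = contrast(image_of_warped_events(events, (vx, vy), shape))
            if c > best_c:
                best_v, best_c = (vx, vy), c
    return best_v
```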
no code implementations • 21 Mar 2020 • Daqi Liu, Bo Chen, Tat-Jun Chin, Mark Rutten
In this paper, we propose a novel multi-target detection technique based on topological sweep to find GEO objects from a short sequence of optical images.
1 code implementation • CVPR 2020 • Daqi Liu, Álvaro Parra, Tat-Jun Chin
To alleviate this weakness, we propose a new globally optimal event-based motion estimation algorithm.
no code implementations • 13 Mar 2019 • Daqi Liu, Miroslaw Bober, Josef Kittler
Because it enhances both the accuracy and the consistency of the resulting interpretation, visual context reasoning is often combined with visual perception in current deep end-to-end visual semantic information pursuit methods.