no code implementations • 16 Apr 2025 • Ayca Takmaz, Cristiano Saltori, Neehar Peri, Tim Meinhardt, Riccardo de Lutio, Laura Leal-Taixé, Aljoša Ošep
However, contemporary methods can only complete and recognize objects from a closed vocabulary labeled in existing Lidar datasets.
no code implementations • CVPR 2025 • Yushan Zhang, Aljoša Ošep, Laura Leal-Taixé, Tim Meinhardt
Zero-shot 4D segmentation and recognition of arbitrary objects in Lidar is crucial for embodied navigation, with applications ranging from streaming perception to semantic mapping and localization.
no code implementations • 1 Dec 2024 • Yizhou Wang, Tim Meinhardt, Orcun Cetintas, Cheng-Yen Yang, Sameer Satish Pusegaonkar, Benjamin Missaoui, Sujit Biswas, Zheng Tang, Laura Leal-Taixé
Object perception from multi-view cameras is crucial for intelligent systems, particularly in indoor environments, e.g., warehouses, retail stores, and hospitals.
Ranked #1 on Multi-Object Tracking on Wildtrack (using extra training data)
no code implementations • 17 Apr 2024 • Orcun Cetintas, Tim Meinhardt, Guillem Brasó, Laura Leal-Taixé
Increasing the annotation efficiency of trajectory annotations from videos has the potential to enable the next generation of data-hungry tracking algorithms to thrive on large-scale datasets.
1 code implementation • 19 Mar 2024 • Aljoša Ošep, Tim Meinhardt, Francesco Ferroni, Neehar Peri, Deva Ramanan, Laura Leal-Taixé
We propose the SAL (Segment Anything in Lidar) method consisting of a text-promptable zero-shot model for segmenting and classifying any object in Lidar, and a pseudo-labeling engine that facilitates model training without manual supervision.
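Conceptually, the zero-shot recognition step can be pictured as matching class-agnostic segment features against text-prompt embeddings in a shared space. Below is a minimal sketch of that matching step; the shapes, the random stand-in features, and the prompt list are illustrative assumptions, not the actual SAL model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend a class-agnostic segmenter produced K Lidar segments, each
# with a feature distilled into the same space as a text encoder
# (e.g., CLIP-style embeddings); random vectors stand in for both.
K, D = 5, 512
segment_feats = rng.normal(size=(K, D))
prompts = ["a car", "a pedestrian", "a traffic sign"]
text_embeds = rng.normal(size=(len(prompts), D))  # stand-in text encoder

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Cosine similarity between segments and prompts; each segment gets
# the best-matching prompt as its zero-shot label.
sims = l2_normalize(segment_feats) @ l2_normalize(text_embeds).T
for k, j in enumerate(sims.argmax(axis=1)):
    print(f"segment {k}: {prompts[j]} (score {sims[k, j]:+.2f})")
```

Because the vocabulary lives entirely in the prompts, swapping in a different prompt list changes the label space without retraining, which is the point of the text-promptable design.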
no code implementations • 29 Aug 2023 • Tim Meinhardt, Matt Feiszli, Yuchen Fan, Laura Leal-Taixé, Rakesh Ranjan
Until recently, the Video Instance Segmentation (VIS) community operated under the common belief that offline methods are generally superior to frame-by-frame online processing.
Ranked #10 on Video Instance Segmentation on YouTube-VIS 2021 (using extra training data)
no code implementations • 20 Jun 2023 • Maxim Maximov, Tim Meinhardt, Ismail Elezi, Zoe Papakipos, Caner Hazirbas, Cristian Canton Ferrer, Laura Leal-Taixé
To highlight the importance of privacy issues and motivate future research, we introduce the Pedestrian Dataset De-Identification (PDI) task.
1 code implementation • 22 Jul 2022 • Adrià Caelles, Tim Meinhardt, Guillem Brasó, Laura Leal-Taixé
To reason about all VIS subtasks jointly over multiple frames, we present temporal multi-scale deformable attention with instance-aware object queries.
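As a rough illustration of deformable attention applied over time, the sketch below lets each object query sample a few learned locations from the feature maps of several frames and mix the sampled values with predicted attention weights. It uses a single feature scale and a single head for brevity; all module names and dimensions are hypothetical stand-ins, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalDeformableAttention(nn.Module):
    def __init__(self, dim=64, n_frames=3, n_points=4):
        super().__init__()
        self.n_frames, self.n_points = n_frames, n_points
        # Per query: 2D sampling offsets and a weight for every
        # (frame, point) pair, plus value/output projections.
        self.offsets = nn.Linear(dim, n_frames * n_points * 2)
        self.weights = nn.Linear(dim, n_frames * n_points)
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, queries, ref_points, feats):
        # queries: (B, Q, C); ref_points: (B, Q, 2) in [-1, 1];
        # feats: (B, T, C, H, W) per-frame feature maps.
        B, Q, _ = queries.shape
        T, P = self.n_frames, self.n_points
        offs = 0.1 * self.offsets(queries).view(B, Q, T, P, 2).tanh()
        locs = ref_points[:, :, None, None, :] + offs        # (B, Q, T, P, 2)
        w = self.weights(queries).view(B, Q, T * P).softmax(-1)
        sampled = []
        for t in range(T):
            v = self.value_proj(feats[:, t].flatten(2).mT)   # (B, HW, C)
            v = v.mT.view(B, -1, *feats.shape[-2:])          # (B, C, H, W)
            s = F.grid_sample(v, locs[:, :, t], align_corners=False)
            sampled.append(s)                                # (B, C, Q, P)
        s = torch.cat(sampled, dim=-1)                       # (B, C, Q, T*P)
        return self.out_proj((s * w[:, None]).sum(-1).mT)    # (B, Q, C)

# Toy usage: 8 instance queries attend across 3 frames of features.
attn = TemporalDeformableAttention(dim=64)
out = attn(torch.randn(1, 8, 64), torch.zeros(1, 8, 2),
           torch.randn(1, 3, 64, 16, 16))
print(out.shape)  # torch.Size([1, 8, 64])
```

Sampling only a handful of points per frame is what keeps attention over multiple frames and scales tractable compared with dense attention over all pixels.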
2 code implementations • CVPR 2022 • Tim Meinhardt, Alexander Kirillov, Laura Leal-Taixé, Christoph Feichtenhofer
The challenging task of multi-object tracking (MOT) requires simultaneous reasoning about track initialization, identity, and spatio-temporal trajectories (see the sketch after the benchmark entry below).
Ranked #1 on Multi-Object Tracking on MOT17 (e2e-MOT metric)
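The tracking-by-attention idea behind this entry can be sketched compactly: queries that detect an object in one frame are carried over as track queries into the next frame, so identity is preserved by the query itself rather than by post-hoc matching. Everything below (the DummyDecoder, the threshold, the feature shapes) is a hypothetical stand-in, not the TrackFormer implementation.

```python
import torch
import torch.nn as nn

class DummyDecoder(nn.Module):
    """Stand-in decoder: refines queries against frame features."""
    def __init__(self, dim=32):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.score = nn.Linear(dim, 1)  # objectness per query
        self.box = nn.Linear(dim, 4)    # box regression per query

    def forward(self, queries, frame_feats):
        out, _ = self.attn(queries, frame_feats, frame_feats)
        return out, self.score(out).sigmoid(), self.box(out)

def track(frames, decoder, object_queries, keep_thresh=0.5):
    """Frame loop: surviving queries become next frame's track queries."""
    dim = object_queries.shape[-1]
    track_queries = torch.empty(1, 0, dim)
    results = []
    for feats in frames:  # feats: (1, N, C) flattened frame features
        queries = torch.cat([track_queries, object_queries], dim=1)
        out, scores, boxes = decoder(queries, feats)
        keep = scores[0, :, 0] > keep_thresh
        # Queries above the threshold (old tracks and new detections)
        # carry their identity into the next frame as track queries.
        track_queries = out[:, keep].detach()
        results.append(boxes[0, keep])
    return results

decoder = DummyDecoder()
object_queries = torch.randn(1, 10, 32)   # learned in a real model
frames = [torch.randn(1, 64, 32) for _ in range(3)]
for t, boxes in enumerate(track(frames, decoder, object_queries)):
    print(f"frame {t}: {boxes.shape[0]} active tracks")
```

Initialization, identity, and trajectory reasoning thus collapse into one mechanism: new objects are born from the static object queries, and existing tracks persist as long as their track query keeps scoring above the threshold.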
4 code implementations • NeurIPS 2020 • Tim Meinhardt, Laura Leal-Taixé
In the semi-supervised setting, the first mask of each object is provided at test time (see the protocol sketch below).
Ranked #43 on Semi-Supervised Video Object Segmentation on YouTube-VOS 2018 (using extra training data)
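The semi-supervised protocol mentioned above is easy to make concrete: the ground-truth mask of each object in the first frame is given, and the method must segment those objects in all remaining frames. The sketch below shows only the protocol; segment is a hypothetical stand-in for an actual mask-propagation model.

```python
import numpy as np

def segment(frame, prev_mask):
    """Stand-in propagation step: a real method would predict the
    object's mask in `frame` conditioned on the previous mask."""
    return prev_mask  # identity propagation, for illustration only

def run_semi_supervised_vos(frames, first_frame_mask):
    masks = [first_frame_mask]        # frame 0: ground truth, given
    for frame in frames[1:]:          # frames 1..T-1: predicted
        masks.append(segment(frame, masks[-1]))
    return masks

video = [np.zeros((4, 4)) for _ in range(5)]   # toy 5-frame video
init_mask = np.eye(4)                          # toy first-frame mask
preds = run_semi_supervised_vos(video, init_mask)
print(len(preds), "masks for", len(video), "frames")
```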
13 code implementations • ICCV 2019 • Philipp Bergmann, Tim Meinhardt, Laura Leal-Taixé
Therefore, we motivate our approach as a new tracking paradigm and point out promising future research directions.
Ranked #1 on Online Multi-Object Tracking on 2D MOT 2015
1 code implementation • ECCV 2018 • Peter Ochs, Tim Meinhardt, Laura Leal-Taixé, Michael Moeller
A lifting layer increases the dimensionality of the input, naturally yields a linear spline when combined with a fully connected layer, and therefore closes the gap between low and high dimensional approximation problems.
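The construction is concrete enough to sketch. Assuming fixed knots t_0 < ... < t_{L-1}, lifting maps a scalar to its barycentric coordinates with respect to the two surrounding knots; a fully connected layer with weights theta applied to the lifted vector then evaluates the linear spline interpolating the points (t_i, theta_i). The knot placement and weights below are illustrative choices, not values from the paper.

```python
import numpy as np

def lift(x, knots):
    """Lift scalar x (clipped to the knot range) into R^L: the two
    entries for the surrounding knots hold barycentric coordinates."""
    x = np.clip(x, knots[0], knots[-1])
    i = int(np.searchsorted(knots, x, side="right")) - 1
    i = min(i, len(knots) - 2)             # handle x == knots[-1]
    lam = (x - knots[i]) / (knots[i + 1] - knots[i])
    v = np.zeros(len(knots))
    v[i], v[i + 1] = 1.0 - lam, lam
    return v

knots = np.linspace(-1.0, 1.0, 5)              # L = 5 fixed knots
theta = np.array([1.0, 0.0, 0.5, 0.0, 1.0])    # learnable in practice

# theta @ lift(x) evaluates the linear spline through (t_i, theta_i):
for x in (-1.0, -0.3, 0.25, 1.0):
    print(f"f({x:+.2f}) = {theta @ lift(x, knots):.3f}")
```

Because the spline values at the knots are just the linear layer's weights, the nonlinearity itself becomes learnable while the lifting stays fixed and parameter-free.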
1 code implementation • ICCV 2017 • Tim Meinhardt, Michael Moeller, Caner Hazirbas, Daniel Cremers
While variational methods have been among the most powerful tools for solving linear inverse problems in imaging, deep (convolutional) neural networks have recently taken the lead in many challenging benchmarks.
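In the spirit of this combination, one common construction is to run a variational solver for min_x 0.5*||Ax - b||^2 + R(x) but replace the proximal step of the regularizer R with a denoiser. The sketch below does this inside a proximal-gradient loop; the moving-average denoise function is a crude stand-in for a learned denoising network, and the toy problem is invented for illustration.

```python
import numpy as np

def denoise(x):
    """Stand-in denoiser: a trained network in a real method; here a
    simple moving average, purely for illustration."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

def plug_and_play(A, b, steps=200):
    """Proximal gradient on 0.5*||Ax - b||^2 + R(x), with the prox of
    R replaced by the denoiser above."""
    tau = 1.0 / np.linalg.norm(A, ord=2) ** 2  # step size 1/L
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = denoise(x - tau * A.T @ (A @ x - b))  # gradient + "prox"
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = np.repeat([0.0, 1.0], 10)          # piecewise-constant signal
b = A @ x_true + 0.05 * rng.normal(size=40)
print(np.round(plug_and_play(A, b), 2))
```

The appeal of this hybrid is that the data term ||Ax - b||^2 keeps the interpretability and flexibility of the variational formulation, while the learned component supplies the image prior.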