no code implementations • CVPR 2024 • Ben Agro, Quinlan Sykora, Sergio Casas, Thomas Gilles, Raquel Urtasun
Perceiving the world and forecasting its future state is a critical task for self-driving.
no code implementations • 6 Jun 2024 • Sergio Casas, Ben Agro, Jiageng Mao, Thomas Gilles, Alexander Cui, Thomas Li, Raquel Urtasun
The tasks of object detection and trajectory forecasting play a crucial role in understanding the scene for autonomous driving.
1 code implementation • 16 Sep 2023 • Yi Yang, Qingwen Zhang, Thomas Gilles, Nazre Batool, John Folkesson
Although pretraining techniques are growing in popularity, little work has been done on pretrained learning-based motion prediction methods in autonomous driving.
no code implementations • 14 May 2023 • Yunong Wu, Thomas Gilles, Bogdan Stanciulescu, Fabien Moutarde
We also propose a Hierarchical Lane Transformer to capture interactions between agents and the road network: it filters the surrounding road network, keeping only the lane segments most likely to influence the future behavior of the target agent.
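Below is a minimal, hypothetical sketch of that lane-filtering idea: lane segments beyond a distance threshold are masked out before the target agent cross-attends to the remaining ones. The class name, distance heuristic, and shapes are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: keep only nearby lane segments, then let the
# target agent cross-attend to them. Assumes at least one segment
# falls within max_dist for each sample.
import torch
import torch.nn as nn

class FilteredLaneAttention(nn.Module):
    def __init__(self, d_model=64, n_heads=4, max_dist=30.0):
        super().__init__()
        self.max_dist = max_dist
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, agent_feat, agent_pos, lane_feats, lane_centers):
        # agent_feat: (B, 1, D), agent_pos: (B, 2)
        # lane_feats: (B, L, D), lane_centers: (B, L, 2)
        dist = torch.norm(lane_centers - agent_pos[:, None, :], dim=-1)  # (B, L)
        mask = dist > self.max_dist  # True = ignore this lane segment
        out, _ = self.attn(agent_feat, lane_feats, lane_feats,
                           key_padding_mask=mask)
        return out  # agent feature updated with relevant lane context
```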
no code implementations • 10 Oct 2022 • Caio Azevedo, Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou
Inspired by recent developments regarding the application of self-supervised learning (SSL), we devise an auxiliary task for trajectory prediction that takes advantage of map-only information such as graph connectivity with the intent of improving map comprehension and generalization.
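A hedged sketch of what such a map-only auxiliary task could look like: a link-prediction loss that asks the map encoder's node embeddings to recover lane-graph connectivity, trained alongside the main trajectory loss. The function name and loss weighting below are assumptions for illustration.

```python
# Illustrative auxiliary task: predict lane-graph connectivity (link
# prediction) from node embeddings produced by the map encoder.
import torch
import torch.nn.functional as F

def connectivity_aux_loss(node_emb, adj):
    # node_emb: (N, D) lane-node embeddings from the map encoder
    # adj: (N, N) binary adjacency matrix of the lane graph
    logits = node_emb @ node_emb.t()  # pairwise connection scores
    return F.binary_cross_entropy_with_logits(logits, adj.float())

# Combined objective (lambda_aux is a tunable weight, assumed here):
# total_loss = trajectory_loss + lambda_aux * connectivity_aux_loss(emb, adj)
```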
no code implementations • 15 May 2022 • Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, Fabien Moutarde
While much work has been carried out on developing trajectory prediction methods, and various datasets have been proposed for benchmarking the task, little study has been done so far on the generalizability and transferability of these methods across datasets.
no code implementations • 5 May 2022 • Arthur Moreau, Thomas Gilles, Nathan Piasco, Dzmitry Tsishkou, Bogdan Stanciulescu, Arnaud de La Fortelle
We propose a novel learning-based formulation for visual localization of vehicles that can operate in real-time in city-scale environments.
no code implementations • 4 Feb 2022 • Nelson Fernandez Pinto, Thomas Gilles
The explainability study suggests that the benefits obtained are associated with a higher relevance of non-drivable areas in the agent's decisions compared to classical behavioral cloning.
no code implementations • 7 Nov 2021 • Ismail Oussaid, William Vanhuffel, Pirashanth Ratnamogan, Mhamed Hajaiej, Alexis Mathey, Thomas Gilles
Information extraction (IE) from documents is an intensive area of research with a large set of industrial applications.
no code implementations • ICLR 2022 • Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, Fabien Moutarde
In this paper, we propose THOMAS, a joint multi-agent trajectory prediction framework allowing for an efficient and consistent prediction of multi-agent multi-modal trajectories.
Ranked #8 on Trajectory Prediction on nuScenes
no code implementations • 4 Sep 2021 • Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, Fabien Moutarde
In this paper, we propose GOHOME, a method leveraging graph representations of the High Definition Map and sparse projections to generate a heatmap output representing the future position probability distribution for a given agent in a traffic scene.
Ranked #1 on Trajectory Prediction on INTERACTION Dataset - Validation (minFDE6 metric)
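As a rough illustration of the output representation GOHOME describes (not the authors' code), the sketch below splats scores predicted on lane-graph nodes onto a sparse 2D grid to form a future-position probability heatmap; the grid size, cell resolution, and function name are assumed.

```python
# Illustrative sketch: project per-node scores from a lane graph
# sparsely onto an agent-centred 2D grid to form a heatmap.
import torch

def splat_heatmap(node_xy, node_scores, grid_size=128, cell=0.5):
    # node_xy: (N, 2) positions in metres, agent-centred
    # node_scores: (N,) unnormalised scores from the graph decoder
    heat = torch.zeros(grid_size, grid_size)
    ij = (node_xy / cell + grid_size // 2).long()              # metres -> cells
    valid = ((ij >= 0) & (ij < grid_size)).all(dim=1)          # keep in-grid nodes
    ij, s = ij[valid], node_scores[valid]
    heat.index_put_((ij[:, 1], ij[:, 0]), s, accumulate=True)  # sparse splat
    return heat / heat.sum().clamp_min(1e-8)                   # probability map
```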
1 code implementation • 23 May 2021 • Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, Fabien Moutarde
In this paper, we propose HOME, a framework tackling the motion forecasting problem with an image output representing the probability distribution of the agent's future location.
Ranked #32 on Motion Forecasting on Argoverse CVPR 2020
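A minimal sketch of how endpoints might be extracted from a heatmap output like HOME's, assuming a simple greedy argmax-with-suppression sampler as a stand-in for the paper's sampling strategy; the suppression radius and mode count are illustrative.

```python
# Illustrative sketch: pick K candidate endpoints from a probability
# heatmap by repeated argmax, suppressing a disc around each pick so
# the selected modes spread out.
import torch

def sample_endpoints(heat, k=6, radius=4):
    # heat: (H, W) probability map over future positions
    heat = heat.clone()
    H, W = heat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    picks = []
    for _ in range(k):
        y, x = divmod(int(heat.argmax()), W)
        picks.append((y, x))
        # zero out a disc around the chosen cell before the next pick
        heat[(ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2] = 0.0
    return picks
```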
no code implementations • 8 Oct 2019 • Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, Guillermo Pita Gil
This paper presents a novel vehicle motion forecasting method based on multi-head attention.
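Since the abstract names multi-head attention as the core mechanism, here is a hedged sketch of self-attention across per-vehicle features, so each agent's forecast can depend on the others; the shapes, encoder, and head count are assumptions, not the paper's configuration.

```python
# Illustrative sketch: multi-head self-attention across per-vehicle
# feature vectors to model inter-agent interactions.
import torch
import torch.nn as nn

B, N, D = 2, 5, 64                  # batch, vehicles in the scene, feature dim
agent_feats = torch.randn(B, N, D)  # e.g. encoded past trajectories (assumed)

attn = nn.MultiheadAttention(embed_dim=D, num_heads=8, batch_first=True)
ctx, weights = attn(agent_feats, agent_feats, agent_feats)
# ctx: (B, N, D) socially-aware features; weights: (B, N, N) interaction map
print(ctx.shape, weights.shape)
```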