no code implementations • 27 Jan 2023 • Wonho Bae, Mohamed Osama Ahmed, Frederick Tung, Gabriel L. Oliveira
In this work, we propose to train TPPs in a meta learning framework, where each sequence is treated as a different task, via a novel framing of TPPs as neural processes (NPs).
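A minimal sketch of that framing, under assumptions not stated in the abstract (a conditional-NP-style encoder/decoder; the module names and intensity head below are illustrative, not the paper's architecture): a context subset of a sequence's events is aggregated into a task representation, and predictions for the remaining events are conditioned on it.

```python
import torch
import torch.nn as nn

class SequenceAsTaskNP(nn.Module):
    """Illustrative neural-process-style model where each event sequence is its own task."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Encode each inter-event time into a latent vector.
        self.event_encoder = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Decode (task representation, query time) into a positive intensity value.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1),
            nn.Softplus(),
        )

    def forward(self, context_times: torch.Tensor, query_times: torch.Tensor):
        # context_times: (num_context, 1), query_times: (num_query, 1)
        r = self.event_encoder(context_times).mean(dim=0)         # permutation-invariant aggregation
        r = r.expand(query_times.size(0), -1)                     # broadcast task representation
        return self.decoder(torch.cat([r, query_times], dim=-1))  # intensity at the query times


# Toy usage: one sequence = one task, split into context and target events.
seq = torch.rand(20, 1)
model = SequenceAsTaskNP()
intensity = model(seq[:10], seq[10:])  # shape (10, 1)
```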
no code implementations • 29 Sep 2021 • Golara Javadi, Frederick Tung, Gabriel L. Oliveira
Parameter sharing approaches for deep multi-task learning share a common intuition: for a single network to perform multiple prediction tasks, the network needs to support multiple specialized execution paths.
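One common way to realize such specialized execution paths is task-conditioned gating over a shared trunk; the sketch below is a generic illustration of that idea (the module names and gating scheme are assumptions, not the paper's method).

```python
import torch
import torch.nn as nn

class GatedSharedNetwork(nn.Module):
    """Shared trunk whose hidden units are gated per task, giving each
    task its own (soft) execution path through the same parameters."""

    def __init__(self, in_dim: int, hidden: int, num_tasks: int, out_dim: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One learned gate vector per task selects which hidden units that task uses.
        self.gates = nn.Parameter(torch.zeros(num_tasks, hidden))
        self.heads = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(num_tasks)])

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        h = self.trunk(x)
        h = h * torch.sigmoid(self.gates[task_id])  # task-specific soft path
        return self.heads[task_id](h)


# Toy usage: the same input routed through two different task paths.
net = GatedSharedNetwork(in_dim=16, hidden=32, num_tasks=2, out_dim=1)
x = torch.randn(4, 16)
y0, y1 = net(x, task_id=0), net(x, task_id=1)
```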
1 code implementation • 20 Jun 2021 • Raquel Aoki, Frederick Tung, Gabriel L. Oliveira
In contrast to single-task learning, in which a separate model is trained for each target, multi-task learning (MTL) optimizes a single model to predict multiple related targets simultaneously.
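In code, the contrast amounts to training one network with several output heads on a weighted sum of per-target losses instead of fitting a separate model per target. A minimal sketch (the head types and loss weights are illustrative assumptions, not the paper's setup):

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Single model with one output head per target."""

    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, 1)   # e.g. a regression target
        self.head_b = nn.Linear(hidden, 3)   # e.g. a 3-class target

    def forward(self, x):
        h = self.shared(x)
        return self.head_a(h), self.head_b(h)


model = MultiTaskModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 16)
y_a, y_b = torch.randn(8, 1), torch.randint(0, 3, (8,))

# One optimization step over a weighted sum of the per-task losses.
pred_a, pred_b = model(x)
loss = nn.functional.mse_loss(pred_a, y_a) + 0.5 * nn.functional.cross_entropy(pred_b, y_b)
opt.zero_grad()
loss.backward()
opt.step()
```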
no code implementations • 19 Jul 2020 • Gabriel L. Oliveira, Senthil Yogamani, Wolfram Burgard, Thomas Brox
To further improve the architecture, we introduce a weighting function that re-balances classes, increasing the network's attention to under-represented objects.
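The abstract does not specify the exact weighting, but a common instantiation of such re-balancing is inverse-frequency class weights in the cross-entropy loss; the sketch below assumes that choice (toy class counts, not the paper's values).

```python
import torch
import torch.nn as nn

# Per-pixel class counts from a training set (toy numbers; class 2 is under-represented).
class_counts = torch.tensor([1_000_000.0, 500_000.0, 10_000.0])

# Inverse-frequency weights, normalized to average 1: rare classes contribute
# more to the loss, pushing the network to attend to them.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(4, 3, 8, 8)           # (batch, classes, H, W) segmentation logits
targets = torch.randint(0, 3, (4, 8, 8))   # ground-truth class per pixel
loss = criterion(logits, targets)
```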
no code implementations • 27 Jun 2017 • Gabriel L. Oliveira, Noha Radwan, Wolfram Burgard, Thomas Brox
Compared to LiDAR-based localization methods, which provide high accuracy but rely on expensive sensors, visual localization approaches require only a camera and are therefore more cost-effective, although their accuracy and reliability are typically inferior.
no code implementations • 26 Jun 2017 • Ayush Dewan, Gabriel L. Oliveira, Wolfram Burgard
To learn the distinction between movable and non-movable points in the environment, we introduce an approach based on a deep neural network; to detect dynamic points, we estimate pointwise motion.
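A bare-bones sketch of the classification half only (the motion-estimation part is omitted; the per-point features and the small per-point classifier are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Per-point binary classifier: movable (1) vs non-movable (0).
# Assumed input features per point, e.g. (x, y, z, intensity).
classifier = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

points = torch.randn(2048, 4)    # one LiDAR scan: 2048 points, 4 features each
logits = classifier(points)      # (2048, 2) class scores per point
labels = logits.argmax(dim=-1)   # 0 = non-movable, 1 = movable
```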
1 code implementation • ICCV 2017 • Mohammadreza Zolfaghari, Gabriel L. Oliveira, Nima Sedaghat, Thomas Brox
In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images.
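A toy sketch of the general idea, assuming a plain late fusion of the three cues (the paper's actual integration scheme may differ): separate encoders for pose, motion, and RGB features are concatenated before the action classifier.

```python
import torch
import torch.nn as nn

class ThreeStreamActionNet(nn.Module):
    """Toy late fusion of pose, motion (e.g. optical flow) and RGB cues."""

    def __init__(self, pose_dim=34, flow_dim=128, rgb_dim=512, hidden=256, num_actions=10):
        super().__init__()
        self.pose_net = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ReLU())
        self.flow_net = nn.Sequential(nn.Linear(flow_dim, hidden), nn.ReLU())
        self.rgb_net = nn.Sequential(nn.Linear(rgb_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(3 * hidden, num_actions)

    def forward(self, pose, flow, rgb):
        fused = torch.cat([self.pose_net(pose), self.flow_net(flow), self.rgb_net(rgb)], dim=-1)
        return self.classifier(fused)


net = ThreeStreamActionNet()
scores = net(torch.randn(2, 34), torch.randn(2, 128), torch.randn(2, 512))  # (2, 10) action scores
```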