Search Results for author: Gabriel L. Oliveira

Found 8 papers, 3 papers with code

Heterogeneous Multi-task Learning with Expert Diversity

1 code implementation • 20 Jun 2021 • Raquel Aoki, Frederick Tung, Gabriel L. Oliveira

In contrast to single-task learning, in which a separate model is trained for each target, multi-task learning (MTL) optimizes a single model to predict multiple related targets simultaneously.

Multi-Task Learning
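
As a rough illustration of the setup the abstract contrasts with single-task learning, here is a minimal PyTorch sketch of a shared trunk with one head per task. The layer sizes, tasks, and losses are placeholders for illustration only, not the paper's expert-based architecture.

```python
# Minimal sketch (assumption): a shared trunk with per-task heads,
# illustrating the general MTL setup described in the abstract.
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        # Parameters in the trunk are shared across all tasks.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One lightweight head per target/task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads]

# Joint training: sum (or weight) the per-task losses.
model = SharedTrunkMTL(in_dim=16, hidden_dim=64, task_out_dims=[1, 1, 3])
x = torch.randn(8, 16)
targets = [torch.randn(8, 1), torch.randn(8, 1), torch.randint(0, 3, (8,))]
preds = model(x)
loss = (nn.functional.mse_loss(preds[0], targets[0])
        + nn.functional.mse_loss(preds[1], targets[1])
        + nn.functional.cross_entropy(preds[2], targets[2]))
loss.backward()
```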

Topometric Localization with Deep Learning

no code implementations • 27 Jun 2017 • Gabriel L. Oliveira, Noha Radwan, Wolfram Burgard, Thomas Brox

Compared to LiDAR-based localization methods, which provide high accuracy but rely on expensive sensors, visual localization approaches require only a camera and are thus more cost-effective, although their accuracy and reliability are typically inferior to LiDAR-based methods.

Visual Localization · Visual Odometry

Deep Semantic Classification for 3D LiDAR Data

no code implementations • 26 Jun 2017 • Ayush Dewan, Gabriel L. Oliveira, Wolfram Burgard

To learn the distinction between movable and non-movable points in the environment, we introduce an approach based on a deep neural network, and to detect the dynamic points, we estimate pointwise motion.

Classification · General Classification
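
The abstract above describes two pieces: a per-point classifier for movable vs. non-movable points, and pointwise motion estimates for flagging dynamic points. The toy sketch below only illustrates how such outputs might be combined; the features, network, and motion threshold are assumptions, not the paper's pipeline.

```python
# Toy sketch (assumption): per-point movable/non-movable classification
# plus a pointwise-motion threshold for dynamic points. Feature extraction,
# network size, and the threshold are placeholders, not the paper's pipeline.
import torch
import torch.nn as nn

point_feats = torch.randn(1024, 8)        # per-point features for one LiDAR scan
pointwise_motion = torch.randn(1024, 3)   # estimated per-point motion vectors

classifier = nn.Sequential(               # shared MLP applied to every point
    nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2)
)

movable_logits = classifier(point_feats)          # movable vs. non-movable
is_movable = movable_logits.argmax(dim=1) == 1
is_dynamic = pointwise_motion.norm(dim=1) > 0.5   # moving right now

# A point can be movable (e.g. a parked car) without being dynamic.
print(is_movable.sum().item(), "movable points,", is_dynamic.sum().item(), "dynamic points")
```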

Beyond Single Stage Encoder-Decoder Networks: Deep Decoders for Semantic Image Segmentation

no code implementations • 19 Jul 2020 • Gabriel L. Oliveira, Senthil Yogamani, Wolfram Burgard, Thomas Brox

To further improve the architecture, we introduce a weight function that re-balances classes, increasing the networks' attention to under-represented objects.

Image Segmentation · Optical Flow Estimation +2
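
The snippet mentions a class re-balancing weight function but does not specify it. One common way to weight under-represented classes in a segmentation loss is inverse pixel frequency, sketched below as an assumption rather than the paper's actual function.

```python
# Hedged illustration: inverse-frequency class weights in a segmentation loss.
# The paper's actual weight function is not given in this snippet; this is only
# one common way to re-balance under-represented classes.
import torch
import torch.nn as nn

num_classes = 5
logits = torch.randn(2, num_classes, 64, 64)          # B x C x H x W predictions
labels = torch.randint(0, num_classes, (2, 64, 64))   # B x H x W ground truth

# Weight each class inversely to its pixel frequency in the batch.
counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
weights = counts.sum() / (num_classes * counts.clamp(min=1))

criterion = nn.CrossEntropyLoss(weight=weights)
loss = criterion(logits, labels)
```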

DynaShare: Dynamic Neural Networks for Multi-Task Learning

1 code implementation • 29 Sep 2021 • Golara Javadi, Frederick Tung, Gabriel L. Oliveira

Parameter sharing approaches for deep multi-task learning share a common intuition: for a single network to perform multiple prediction tasks, the network needs to support multiple specialized execution paths.

Multi-Task Learning
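
The "specialized execution paths" intuition can be illustrated with a toy task-conditioned gate that scales which shared blocks contribute to each task's forward pass. This is an assumed sketch, not DynaShare's actual gating mechanism.

```python
# Toy sketch (assumption): task-conditioned gates select which shared blocks
# contribute to each task's execution path. This illustrates the "specialized
# execution paths" intuition, not DynaShare's gating policy.
import torch
import torch.nn as nn

class GatedMTLNet(nn.Module):
    def __init__(self, dim, num_blocks, num_tasks):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
        )
        # One learnable gate logit per (task, block) pair.
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks, num_blocks))
        self.heads = nn.ModuleList([nn.Linear(dim, 1) for _ in range(num_tasks)])

    def forward(self, x, task_id):
        gates = torch.sigmoid(self.gate_logits[task_id])  # soft gates in [0, 1]
        for gate, block in zip(gates, self.blocks):
            x = x + gate * block(x)   # residual block scaled by its task gate
        return self.heads[task_id](x)

net = GatedMTLNet(dim=32, num_blocks=4, num_tasks=3)
out = net(torch.randn(8, 32), task_id=1)
```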

Meta Temporal Point Processes

no code implementations • 27 Jan 2023 • Wonho Bae, Mohamed Osama Ahmed, Frederick Tung, Gabriel L. Oliveira

In this work, we propose to train temporal point processes (TPPs) in a meta-learning framework, where each sequence is treated as a different task, via a novel framing of TPPs as neural processes (NPs).

Meta-Learning · Point Processes
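
A rough sketch of the sequence-as-task framing: each event sequence is split into context and target events, the context is summarized into a latent representation, and that summary conditions an intensity for the target window. The constant-intensity likelihood and encoder below are simplifying assumptions, not the paper's neural-process formulation.

```python
# Rough sketch (assumption): the "each sequence is a task" framing, with a
# context/target split per sequence and a latent summary of the context that
# conditions the intensity on the target window.
import torch
import torch.nn as nn

class ContextToIntensity(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        self.readout = nn.Linear(hidden, 1)

    def forward(self, context_times):
        # Permutation-invariant summary of the context events (mean pooling).
        z = self.encoder(context_times.unsqueeze(-1)).mean(dim=0)
        # Predict a single positive intensity for the target window.
        return nn.functional.softplus(self.readout(z))

model = ContextToIntensity()
times = torch.sort(torch.rand(20) * 10.0).values   # one event sequence on [0, 10]
context, target = times[:12], times[12:]           # split the sequence (one "task")

lam = model(context)
T = 10.0 - context[-1]                              # length of the target window
# Negative log-likelihood of a homogeneous Poisson process on the target window.
nll = -(len(target) * torch.log(lam) - lam * T)
nll.backward()
```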

AdaFlood: Adaptive Flood Regularization

no code implementations • 6 Nov 2023 • Wonho Bae, Yi Ren, Mohamed Osama Ahmed, Frederick Tung, Danica J. Sutherland, Gabriel L. Oliveira

Although neural networks are conventionally optimized towards zero training loss, it has recently been shown that targeting a non-zero training loss threshold, referred to as a flood level, often enables better test-time generalization.
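
The flood-level idea the abstract refers to is commonly implemented as the flooding objective |loss - b| + b, which pushes the training loss back up whenever it drops below the flood level b. The sketch below shows only this constant-b version; AdaFlood's adaptive, per-sample flood levels are not reproduced here.

```python
# Minimal sketch: the basic flooding objective, where the training loss is kept
# near a non-zero flood level b via |loss - b| + b. AdaFlood adapts the flood
# level per sample; that adaptive part is not shown here.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
flood_level = 0.1  # b: the target non-zero training loss

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = criterion(model(x), y)
flooded = (loss - flood_level).abs() + flood_level  # gradient flips once below b
flooded.backward()
optimizer.step()
```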
