Search Results for author: Vijay Badrinarayanan

Found 22 papers, 11 papers with code

Reimagining an autonomous vehicle

no code implementations · 12 Aug 2021 · Jeffrey Hawke, Haibo E, Vijay Badrinarayanan, Alex Kendall

The self-driving challenge in 2021 is this century's technological equivalent of the space race, and it is now entering its second major decade of development.

Autonomous Driving

MagicEyes: A Large Scale Eye Gaze Estimation Dataset for Mixed Reality

no code implementations · 18 Mar 2020 · Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew Rabinovich

With the emergence of Virtual and Mixed Reality (XR) devices, eye tracking has received significant attention in the computer vision community.

Eye Tracking · Gaze Estimation +1

Scan2Plan: Efficient Floorplan Generation from 3D Scans of Indoor Scenes

no code implementations · 16 Mar 2020 · Ameya Phalak, Vijay Badrinarayanan, Andrew Rabinovich

We introduce Scan2Plan, a novel approach for accurate estimation of a floorplan from a 3D scan of the structural elements of indoor environments.

EyeNet: A Multi-Task Network for Off-Axis Eye Gaze Estimation and User Understanding

no code implementations · 24 Aug 2019 · Zhengyang Wu, Srivignesh Rajendran, Tarrence van As, Joelle Zimmermann, Vijay Badrinarayanan, Andrew Rabinovich

Eye gaze estimation and simultaneous semantic understanding of a user through eye images is a crucial component in Virtual and Mixed Reality, enabling energy-efficient rendering, multi-focal displays, and effective interaction with 3D content.

Gaze Estimation · Mixed Reality

DeepPerimeter: Indoor Boundary Estimation from Posed Monocular Sequences

no code implementations · 25 Apr 2019 · Ameya Phalak, Zhao Chen, Darvin Yi, Khushi Gupta, Vijay Badrinarayanan, Andrew Rabinovich

We present DeepPerimeter, a deep-learning-based pipeline for inferring a full indoor perimeter (i.e. an exterior boundary map) from a sequence of posed RGB images.

Depth Estimation

Gradient Adversarial Training of Neural Networks

no code implementations · 21 Jun 2018 · Ayan Sinha, Zhao Chen, Vijay Badrinarayanan, Andrew Rabinovich

We demonstrate gradient adversarial training in three different scenarios: (1) as a defense against adversarial examples, we classify gradient tensors and tune them to be agnostic to the class of their corresponding example; (2) for knowledge distillation, we perform binary classification of gradient tensors derived from the student or teacher network and tune the student gradient tensor to mimic the teacher's; and (3) for multi-task learning, we classify the gradient tensors derived from different task loss functions and tune them to be statistically indistinguishable.

Knowledge Distillation · Multi-Task Learning
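No code is listed for this entry. As a rough illustration only, the adversarial "tune gradients to be indistinguishable" setups described above typically rest on a gradient-reversal primitive: identity in the forward pass, sign-flipped (and scaled) gradients in the backward pass. The class below is a hand-rolled sketch of that primitive, not the authors' implementation.

```python
class GradientReversal:
    """Identity in the forward pass; flips and scales gradients in the
    backward pass. An auxiliary network that classifies gradient tensors
    is trained normally, while the reversed gradient flowing back through
    this layer pushes the main network to make its gradient tensors
    indistinguishable to that classifier."""

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength

    def forward(self, x):
        return x  # identity: the forward computation is untouched

    def backward(self, grad_out):
        # negate and scale every incoming gradient component
        return [-self.lam * g for g in grad_out]


grl = GradientReversal(lam=0.5)
out = grl.forward([1.0, 2.0])          # unchanged activations
grads = grl.backward([0.2, -0.4])      # sign-flipped, scaled gradients
```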

Estimating Depth from RGB and Sparse Sensing

2 code implementations · ECCV 2018 · Zhao Chen, Vijay Badrinarayanan, Gilad Drozdov, Andrew Rabinovich

We present a deep model that can accurately produce dense depth maps given an RGB image with known depth at a very sparse set of pixels.

Monocular Depth Estimation

GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks

4 code implementations · ICML 2018 · Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, Andrew Rabinovich

Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly.
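GradNorm addresses this by learning per-task loss weights so that each task's gradient magnitude tracks a common scale, modulated by how fast that task is learning. The function below sketches one such weight update from the paper's published description (the constants and simplifications here are illustrative assumptions, not the official code).

```python
def gradnorm_step(weights, grad_norms, loss_ratios, alpha=1.5, lr=0.025):
    """One GradNorm-style loss-weight update (a sketch, not official code).

    weights:     current per-task loss weights w_i
    grad_norms:  ||gradient of w_i * L_i|| for each task
    loss_ratios: L_i(t) / L_i(0), a proxy for inverse training rate
    alpha:       strength of the restoring force toward balance
    """
    n = len(weights)
    g_avg = sum(grad_norms) / n                    # common gradient scale
    r_avg = sum(loss_ratios) / n
    # slower-training tasks (larger ratio) get a larger target norm
    targets = [g_avg * (r / r_avg) ** alpha for r in loss_ratios]
    new_w = []
    for w, g, t in zip(weights, grad_norms, targets):
        raw = g / w                                # per-task raw grad norm
        sign = 1.0 if g > t else -1.0              # d|G_i - target|/dG_i
        new_w.append(w - lr * sign * raw)          # descend the grad loss
    # renormalize so the weights keep summing to the number of tasks
    s = sum(new_w)
    return [w * n / s for w in new_w]


# a task with an oversized gradient norm gets its weight reduced
w = gradnorm_step([1.0, 1.0], grad_norms=[2.0, 0.5], loss_ratios=[1.0, 1.0])
```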

Deep Cuboid Detection: Beyond 2D Bounding Boxes

1 code implementation · 30 Nov 2016 · Debidatta Dwibedi, Tomasz Malisiewicz, Vijay Badrinarayanan, Andrew Rabinovich

We present a Deep Cuboid Detector which takes a consumer-quality RGB image of a cluttered scene and localizes all 3D cuboids (box-like objects).

Understanding Real World Indoor Scenes With Synthetic Data

no code implementations · CVPR 2016 · Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, Roberto Cipolla

Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments.

Scene Understanding

SceneNet: Understanding Real World Indoor Scenes With Synthetic Data

1 code implementation · 22 Nov 2015 · Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, Roberto Cipolla

Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments.

Scene Understanding

TemplateNet for Depth-Based Object Instance Recognition

no code implementations · 10 Nov 2015 · Ujwal Bonde, Vijay Badrinarayanan, Roberto Cipolla, Minh-Tri Pham

We present a novel deep architecture termed templateNet for depth based object instance recognition.

Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding

24 code implementations · 9 Nov 2015 · Alex Kendall, Vijay Badrinarayanan, Roberto Cipolla

Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making.

Decision Making · Scene Understanding +1
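Bayesian SegNet obtains its uncertainty estimate from Monte Carlo dropout: dropout stays active at test time, several stochastic forward passes are run, and the spread of the predictions measures model uncertainty. The toy sketch below shows that mechanism on a scalar "model" (an illustration of the technique, not the released segmentation code).

```python
import random

def mc_dropout_predict(model, x, n_samples=50, p_drop=0.5):
    """Monte Carlo dropout: run several stochastic forward passes with
    dropout enabled and report the mean prediction and its variance
    (the variance serves as the model-uncertainty estimate)."""
    outs = [model(x, p_drop) for _ in range(n_samples)]
    mean = sum(outs) / n_samples
    var = sum((o - mean) ** 2 for o in outs) / n_samples
    return mean, var

def toy_model(x, p_drop):
    """A stand-in 'network': three hidden contributions, each randomly
    dropped; inverted-dropout scaling keeps the expected output at 1.0*x."""
    units = [0.2 * x, 0.3 * x, 0.5 * x]
    kept = [u / (1 - p_drop) for u in units if random.random() > p_drop]
    return sum(kept)

random.seed(0)
mean, var = mc_dropout_predict(toy_model, 1.0, n_samples=200)
# mean hovers near 1.0; var > 0 quantifies the model's uncertainty
```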

Symmetry-invariant optimization in deep networks

no code implementations · 5 Nov 2015 · Vijay Badrinarayanan, Bamdev Mishra, Roberto Cipolla

Recent works have highlighted the scale invariance, or symmetry, present in the weight space of a typical deep network and the adverse effect it has on Euclidean-gradient-based stochastic gradient descent optimization.

Semantic Segmentation
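The scale symmetry the abstract refers to is easy to demonstrate numerically: for a ReLU network, scaling one layer's weights by c and the next layer's by 1/c leaves the function unchanged, yet the Euclidean gradient with respect to each layer changes. A minimal two-weight example:

```python
def net(w1, w2, x):
    """Two-layer scalar 'network': w2 * relu(w1 * x)."""
    return w2 * max(0.0, w1 * x)

x, w1, w2 = 1.0, 2.0, 3.0
c = 10.0

# Rescaling (w1, w2) -> (c*w1, w2/c) leaves the output unchanged...
assert abs(net(w1, w2, x) - net(c * w1, w2 / c, x)) < 1e-9

# ...but the Euclidean gradient is NOT invariant: for a positive
# pre-activation, d(out)/dw1 = w2 * x, which shrinks by 1/c after
# the rescaling even though the function is identical.
grad_w1_before = w2 * x        # 3.0
grad_w1_after = (w2 / c) * x   # 0.3
```

This is why Euclidean SGD treats functionally identical networks differently, motivating the symmetry-invariant updates the paper studies.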

Understanding symmetries in deep networks

no code implementations · 3 Nov 2015 · Vijay Badrinarayanan, Bamdev Mishra, Roberto Cipolla

Consequently, training the network boils down to using stochastic gradient descent updates on the unit-norm manifold.
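An SGD update on the unit-norm manifold can be sketched as follows: project the Euclidean gradient onto the tangent space at the current point, take a step, then retract back to the sphere by normalizing. This is a generic illustration of the idea in the snippet, not the authors' exact Riemannian update.

```python
import math

def sphere_sgd_step(w, grad, lr=0.1):
    """One SGD step constrained to the unit sphere:
    1) remove the radial component of the gradient (tangent projection),
    2) take a Euclidean step along the tangent direction,
    3) retract onto the sphere by renormalizing."""
    dot = sum(wi * gi for wi, gi in zip(w, grad))
    tangent = [gi - dot * wi for gi, wi in zip(grad, w)]
    stepped = [wi - lr * ti for wi, ti in zip(w, tangent)]
    norm = math.sqrt(sum(s * s for s in stepped))
    return [s / norm for s in stepped]


w_new = sphere_sgd_step([1.0, 0.0], grad=[0.5, -1.0])
# w_new stays on the unit circle: its norm is exactly 1
```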

SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

68 code implementations · 2 Nov 2015 · Vijay Badrinarayanan, Alex Kendall, Roberto Cipolla

We show that SegNet provides good performance with competitive inference time and is more memory-efficient at inference than other architectures.

Crowd Counting · General Classification +4
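SegNet's memory efficiency comes from its decoder design: the encoder stores only the argmax indices of each max-pooling step, and the decoder uses them to upsample sparsely instead of learning (and storing) full deconvolution feature maps. A simplified 1-D sketch of that index-passing trick (not the released implementation):

```python
def max_pool_with_indices(x, k=2):
    """1-D max pooling that also records the argmax position of each
    window: the cheap piece of state SegNet's encoder hands its decoder."""
    pooled, indices = [], []
    for i in range(0, len(x) - k + 1, k):
        window = x[i:i + k]
        j = max(range(k), key=lambda t: window[t])
        pooled.append(window[j])
        indices.append(i + j)
    return pooled, indices

def max_unpool(pooled, indices, size):
    """Decoder-side unpooling: place each pooled value back at its
    recorded position and leave everything else zero, producing a sparse
    upsampled map that later convolutions densify."""
    out = [0.0] * size
    for v, i in zip(pooled, indices):
        out[i] = v
    return out


x = [1.0, 3.0, 2.0, 0.5]
p, idx = max_pool_with_indices(x)   # p = [3.0, 2.0], idx = [1, 2]
up = max_unpool(p, idx, len(x))     # [0.0, 3.0, 2.0, 0.0]
```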
