Search Results for author: Nitin Bansal

Found 6 papers, 2 papers with code

Punjabi to Urdu Machine Translation System

no code implementations ICON 2020 Nitin Bansal, Ajit Kumar

For example, during the development of the Punjabi to Urdu MTS, many issues were identified while preparing lexical resources for both languages.

Machine Translation Translation

Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of Semantics and Depth

no code implementations21 Jun 2022 Nitin Bansal, Pan Ji, Junsong Yuan, Yi Xu

The multi-task learning (MTL) paradigm focuses on jointly learning two or more tasks, aiming for significant improvements w.r.t. the model's generalizability, performance, and training/inference memory footprint.

Data Augmentation Depth Estimation +2
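The joint objective described above is typically a weighted combination of the per-task losses. A minimal sketch of such a combined loss, with illustrative names and a hypothetical weighting (not taken from the paper):

```python
def joint_mtl_loss(loss_semantics: float, loss_depth: float,
                   weight_depth: float = 0.5) -> float:
    """Combined multi-task objective: L = L_sem + w * L_depth.

    loss_semantics / loss_depth are per-task scalar losses; weight_depth
    balances the depth task against semantics (value is illustrative).
    """
    return loss_semantics + weight_depth * loss_depth

# Example: semantics loss 1.0, depth loss 0.4, default weight 0.5
combined = joint_mtl_loss(1.0, 0.4)
print(combined)
```

In practice the weighting can be tuned or learned; this sketch only shows the basic form of a jointly optimized two-task objective.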

FisheyeDistill: Self-Supervised Monocular Depth Estimation with Ordinal Distillation for Fisheye Cameras

no code implementations5 May 2022 Qingan Yan, Pan Ji, Nitin Bansal, Yuxin Ma, Yuan Tian, Yi Xu

In this paper, we deal with the problem of monocular depth estimation for fisheye cameras in a self-supervised manner.

Monocular Depth Estimation

PlaneMVS: 3D Plane Reconstruction from Multi-View Stereo

no code implementations CVPR 2022 Jiachen Liu, Pan Ji, Nitin Bansal, Changjiang Cai, Qingan Yan, Xiaolei Huang, Yi Xu

The semantic plane detection branch is based on a single-view plane detection framework but with differences.

3D Reconstruction

Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?

1 code implementation NeurIPS 2018 Nitin Bansal, Xiaohan Chen, Zhangyang Wang

This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?
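One common way to enforce near-orthogonality of a weight matrix is a soft penalty on how far W^T W deviates from the identity. A minimal NumPy sketch of such a soft orthogonality regularizer (the function name is illustrative; the paper studies this family of penalties and stronger variants):

```python
import numpy as np

def soft_orthogonality_penalty(W: np.ndarray) -> float:
    """Soft orthogonality regularizer: ||W^T W - I||_F^2.

    Encourages the columns of W to be (near-)orthonormal; the penalty
    is zero exactly when W^T W equals the identity.
    """
    gram = W.T @ W                      # Gram matrix of the columns
    identity = np.eye(W.shape[1])
    return float(np.sum((gram - identity) ** 2))

# A matrix with orthonormal columns (from QR) incurs ~zero penalty.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 4)))
print(soft_orthogonality_penalty(Q))  # ≈ 0.0
```

In training, this penalty would be added (scaled by a coefficient) to the task loss for each layer's weight matrix.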
