Search Results for author: Nitin Bansal

Found 7 papers, 2 papers with code

Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?

1 code implementation NeurIPS 2018 Nitin Bansal, Xiaohan Chen, Zhangyang Wang

This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?

Can We Gain More from Orthogonality Regularizations in Training Deep Networks?

1 code implementation NeurIPS 2018 Nitin Bansal, Xiaohan Chen, Zhangyang Wang

This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?
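The regularizers studied in this paper penalize deviation of reshaped weight matrices from orthogonality. Below is a minimal sketch of the soft-orthogonality flavor of such a penalty, ||WᵀW − I||²_F, applied to convolutional and linear layers in PyTorch; the coefficient, layer selection, and function name are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

def soft_orthogonality_penalty(model: nn.Module, coeff: float = 1e-4) -> torch.Tensor:
    """Sum of ||W^T W - I||_F^2 over conv/linear weights, each reshaped to 2-D."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.reshape(module.weight.shape[0], -1)  # (out, in*k*k)
            # Use the smaller Gram matrix so the identity target is attainable.
            gram = w @ w.t() if w.shape[0] <= w.shape[1] else w.t() @ w
            eye = torch.eye(gram.shape[0], device=w.device)
            penalty = penalty + (gram - eye).pow(2).sum()
    return coeff * penalty

# Usage: add the penalty to the task loss at each training step, e.g.
# loss = criterion(model(x), y) + soft_orthogonality_penalty(model, coeff=1e-4)
```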

Punjabi to Urdu Machine Translation System

no code implementations ICON 2020 Nitin Bansal, Ajit Kumar

For example, during the development of the Punjabi to Urdu MTS, many issues were recognized while preparing lexical resources for both languages.

Machine Translation · Sentence · +1

PlaneMVS: 3D Plane Reconstruction from Multi-View Stereo

no code implementations CVPR 2022 Jiachen Liu, Pan Ji, Nitin Bansal, Changjiang Cai, Qingan Yan, Xiaolei Huang, Yi Xu

The semantic plane detection branch is based on a single-view plane detection framework but with differences.

3D Reconstruction

FisheyeDistill: Self-Supervised Monocular Depth Estimation with Ordinal Distillation for Fisheye Cameras

no code implementations 5 May 2022 Qingan Yan, Pan Ji, Nitin Bansal, Yuxin Ma, Yuan Tian, Yi Xu

In this paper, we deal with the problem of monocular depth estimation for fisheye cameras in a self-supervised manner.

Monocular Depth Estimation
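Ordinal distillation, as used in this line of work, transfers relative depth orderings from a teacher model to the self-supervised student. A generic, hedged sketch of a pairwise ordinal ranking loss is given below; the pair sampling, margin, and function names are assumptions made for illustration, not the paper's exact formulation.

```python
import torch

def ordinal_ranking_loss(student_depth: torch.Tensor,
                         teacher_depth: torch.Tensor,
                         num_pairs: int = 1024,
                         margin: float = 0.05) -> torch.Tensor:
    """Encourage the student to preserve the teacher's depth ordering on random pixel pairs.

    student_depth, teacher_depth: (B, 1, H, W) predicted depth maps.
    """
    b, _, h, w = student_depth.shape
    flat_s = student_depth.reshape(b, -1)
    flat_t = teacher_depth.reshape(b, -1)
    idx_a = torch.randint(0, h * w, (b, num_pairs), device=student_depth.device)
    idx_b = torch.randint(0, h * w, (b, num_pairs), device=student_depth.device)
    # Sign of the teacher's depth difference gives the target ordering (+1 / -1 / 0).
    target = torch.sign(flat_t.gather(1, idx_a) - flat_t.gather(1, idx_b))
    diff_s = flat_s.gather(1, idx_a) - flat_s.gather(1, idx_b)
    # Hinge-style penalty when the student violates the teacher's ordering.
    return torch.clamp(margin - target * diff_s, min=0.0).mean()
```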

Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of Semantics and Depth

no code implementations 21 Jun 2022 Nitin Bansal, Pan Ji, Junsong Yuan, Yi Xu

The multi-task learning (MTL) paradigm focuses on jointly learning two or more tasks, aiming for significant improvements in the model's generalizability, performance, and training/inference memory footprint.

Data Augmentation · Depth Estimation · +3
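For reference, a coupled semantics/depth setup is commonly built on hard parameter sharing: one encoder feeding per-task heads. The sketch below illustrates that generic pattern only; the module names, layer sizes, and class count are assumptions, not the architecture of this paper.

```python
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task."""

    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared representation
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)      # semantic logits
        self.depth_head = nn.Conv2d(64, 1, 1)              # per-pixel depth

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.depth_head(feats)

# Training minimizes a weighted sum of the per-task losses on the labeled
# and unlabeled splits: seg_logits, depth = SharedEncoderMTL()(images)
```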

CLIP-FLow: Contrastive Learning by semi-supervised Iterative Pseudo labeling for Optical Flow Estimation

no code implementations 25 Oct 2022 Zhiqi Zhang, Nitin Bansal, Changjiang Cai, Pan Ji, Qingan Yan, Xiangyu Xu, Yi Xu

To this end, we propose CLIP-FLow, a semi-supervised iterative pseudo-labeling framework to transfer the pretraining knowledge to the target real domain.

Contrastive Learning · Optical Flow Estimation · +1
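The core loop of semi-supervised iterative pseudo-labeling is a teacher labeling the unlabeled target-domain data and a student training on both splits, with the trained student promoted to teacher for the next round. A hedged, generic sketch follows; the confidence filtering, loss choice, and function names are assumptions for illustration, not CLIP-FLow's exact procedure.

```python
import copy
import torch
import torch.nn.functional as F

def pseudo_label_round(student, teacher, labeled_loader, unlabeled_loader,
                       optimizer, conf_threshold=0.9):
    """One round of pseudo-labeling; models return (flow, confidence), flow is (B, 2, H, W)."""
    student.train()
    teacher.eval()
    for (img1, img2, gt_flow), (tgt1, tgt2) in zip(labeled_loader, unlabeled_loader):
        with torch.no_grad():
            pseudo_flow, confidence = teacher(tgt1, tgt2)   # label the unlabeled real pairs
            mask = (confidence > conf_threshold).float()    # keep only confident pixels
        pred_syn, _ = student(img1, img2)
        pred_real, _ = student(tgt1, tgt2)
        sup_loss = F.l1_loss(pred_syn, gt_flow)                        # supervised (labeled) term
        pseudo_loss = (mask * (pred_real - pseudo_flow).abs()).mean()  # pseudo-labeled term
        loss = sup_loss + pseudo_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return copy.deepcopy(student)   # the trained student becomes the next round's teacher
```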
