Search Results for author: Shoubhik Debnath

Found 9 papers, 3 papers with code

ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders

10 code implementations · CVPR 2023 · Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie

This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation.

Tasks: Object Detection · Representation Learning · +2

Exploring Long-Sequence Masked Autoencoders

1 code implementation · 13 Oct 2022 · Ronghang Hu, Shoubhik Debnath, Saining Xie, Xinlei Chen

Masked Autoencoding (MAE) has emerged as an effective approach for pre-training representations across multiple domains.

Tasks: Object Detection · Segmentation · +1

RGB-D Local Implicit Function for Depth Completion of Transparent Objects

1 code implementation · CVPR 2021 · Luyang Zhu, Arsalan Mousavian, Yu Xiang, Hammad Mazhar, Jozef van Eenbergen, Shoubhik Debnath, Dieter Fox

Key to our approach is a local implicit neural representation built on ray-voxel pairs that allows our method to generalize to unseen objects and achieve fast inference speed.

Tasks: Depth Completion · Depth Estimation · +1

Self-Supervised Real-to-Sim Scene Generation

no code implementations · ICCV 2021 · Aayush Prakash, Shoubhik Debnath, Jean-Francois Lafleche, Eric Cameracci, Gavriel State, Stan Birchfield, Marc T. Law

Synthetic data is emerging as a promising solution to the scalability issue of supervised deep learning, especially when real data are difficult to acquire or hard to annotate.

Tasks: Graph Generation · Scene Generation · +3

Accelerating Goal-Directed Reinforcement Learning by Model Characterization

no code implementations · 4 Jan 2019 · Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu

Then, we leverage this approximate model, along with a notion of reachability based on Mean First Passage Times, to perform model-based reinforcement learning.

Tasks: Model-based Reinforcement Learning · Q-Learning · +2
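The reachability notion above is built on Mean First Passage Times (MFPT): the expected number of steps for a Markov chain to first reach a goal state. As a hedged illustration (the 3-state chain and all names below are toy examples, not the paper's model), MFPTs satisfy m_i = 1 + Σ_j P[i][j]·m_j with m_goal = 0 and can be found by fixed-point iteration:

```python
# Toy sketch: mean first passage times (MFPT) to an absorbing goal state
# in a small Markov chain, via Jacobi fixed-point iteration on
#   m_i = 1 + sum_j P[i][j] * m_j,   m_goal = 0.
# The transition matrix P is illustrative only.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]   # state 2 is the absorbing goal
goal = 2
m = [0.0, 0.0, 0.0]
for _ in range(200):     # error halves each sweep, so 200 sweeps is ample here
    m = [0.0 if i == goal else
         1.0 + sum(P[i][j] * m[j] for j in range(3))
         for i in range(3)]
print([round(x, 2) for x in m])   # [4.0, 2.0, 0.0]
```

States closer to the goal get smaller MFPTs, which is what makes the quantity usable as a reachability-style ranking of states.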

Reachability and Differential based Heuristics for Solving Markov Decision Processes

no code implementations · 3 Jan 2019 · Shoubhik Debnath, Lantao Liu, Gaurav Sukhatme

The solution convergence of Markov Decision Processes (MDPs) can be accelerated by prioritized sweeping of states ranked by their potential impacts to other states.
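Prioritized sweeping, as described above, orders value backups by how much each state's update would change things, rather than sweeping states uniformly. A minimal sketch on a toy deterministic chain MDP (the MDP, constants, and helper names are illustrative assumptions, not the paper's method) looks like:

```python
import heapq

# Minimal prioritized-sweeping value iteration on a toy deterministic MDP.
# States 0..3 form a chain; the single action moves right; reaching the
# terminal state 3 yields reward 1. All of this setup is illustrative.
N, GAMMA = 4, 0.9

def step(s):  # deterministic successor and reward for the single action
    return (s, 0.0) if s == N - 1 else (s + 1, 1.0 if s + 1 == N - 1 else 0.0)

preds = {s: [] for s in range(N)}   # predecessor table for backward propagation
for s in range(N - 1):
    preds[s + 1].append(s)

V = [0.0] * N
heap = [(-1.0, s) for s in range(N)]   # priority = -|Bellman error| (max-heap)
heapq.heapify(heap)
while heap:
    _, s = heapq.heappop(heap)
    ns, r = step(s)
    backup = 0.0 if s == N - 1 else r + GAMMA * V[ns]
    err = abs(backup - V[s])
    V[s] = backup
    if err > 1e-6:                     # only propagate meaningful changes
        for p in preds[s]:             # predecessors are now likely stale
            heapq.heappush(heap, (-err, p))
print([round(v, 3) for v in V])        # [0.81, 0.9, 1.0, 0.0]
```

The queue ensures states whose values just changed substantially are backed up before states that are already consistent, which is the source of the convergence speedup the abstract refers to.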
