Search Results for author: Shubhra Aich

Found 13 papers, 6 papers with code

Deep Bayesian Future Fusion for Self-Supervised, High-Resolution, Off-Road Mapping

no code implementations · 18 Mar 2024 · Shubhra Aich, Wenshan Wang, Parv Maheshwari, Matthew Sivaprakasam, Samuel Triest, Cherie Ho, Jason M. Gregory, John G. Rogers III, Sebastian Scherer

The limited sensing resolution of resource-constrained off-road vehicles poses significant challenges to reliable off-road autonomy.

Using Large Text-to-Image Models with Structured Prompts for Skin Disease Identification: A Case Study

no code implementations · 17 Jan 2023 · Sajith Rajapaksa, Jean Marie Uwabeza Vianney, Renell Castro, Farzad Khalvati, Shubhra Aich

This paper investigates the potential of large text-to-image (LTI) models for the automated diagnosis of a few skin conditions that are rare or severely lack annotated datasets.

Data-Free Class-Incremental Hand Gesture Recognition

1 code implementation · ICCV 2023 · Shubhra Aich, Jesus Ruiz-Santaquiteria, Zhenyu Lu, Prachi Garg, K J Joseph, Alvaro Fernandez Garcia, Vineeth N Balasubramanian, Kenrick Kin, Chengde Wan, Necati Cihan Camgoz, Shugao Ma, Fernando de la Torre

Our sampling scheme significantly outperforms SOTA methods on two 3D skeleton gesture datasets: the publicly available SHREC 2017, and EgoGesture3D, which we extract from a publicly available RGBD dataset.

Tasks: Class Incremental Learning, Hand Gesture Recognition (+3 more)

Bidirectional Attention Network for Monocular Depth Estimation

1 code implementation · 1 Sep 2020 · Shubhra Aich, Jean Marie Uwabeza Vianney, Md Amirul Islam, Mannat Kaur, Bingbing Liu

In this paper, we propose the Bidirectional Attention Network (BANet), an end-to-end framework for monocular depth estimation (MDE) that addresses the difficulty of effectively integrating local and global information in convolutional neural networks.

Tasks: Machine Translation, Monocular Depth Estimation (+1 more)

RefinedMPL: Refined Monocular PseudoLiDAR for 3D Object Detection in Autonomous Driving

no code implementations · 21 Nov 2019 · Jean Marie Uwabeza Vianney, Shubhra Aich, Bingbing Liu

In this paper, we strive to resolve the ambiguities arising from the astoundingly high density of raw PseudoLiDAR in monocular 3D object detection for autonomous driving.

Tasks: Autonomous Driving, Monocular 3D Object Detection (+2 more)

Global Sum Pooling: A Generalization Trick for Object Counting with Small Datasets of Large Images

no code implementations · 28 May 2018 · Shubhra Aich, Ian Stavness

This generalization capability allows GSP to avoid both patchwise cancellation and overfitting, by training on small patches and performing inference on full-resolution images as a whole.
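The key property described above is that a spatial sum is invariant to how an image is tiled: summing activations over a small patch and over the full image yields values on the same scale, which average pooling does not. A minimal sketch of global sum pooling (pure Python, hypothetical function name; the paper applies it to a CNN's final activation map):

```python
def global_sum_pool(feature_map):
    """Sum all spatial activations of one channel map.

    feature_map: H x W list of lists of floats.
    Because the result is a sum rather than an average, a model
    trained on small patches can be run on full-resolution images:
    the pooled value scales with content (e.g. object count),
    not with image size.
    """
    return sum(sum(row) for row in feature_map)

# The sum over the whole map equals the sum of patchwise sums,
# which is why patch training and full-image inference agree.
patch_a = [[1.0, 2.0]]
patch_b = [[3.0, 4.0]]
full = [[1.0, 2.0], [3.0, 4.0]]
```

Note that the additivity check (`global_sum_pool(full) == global_sum_pool(patch_a) + global_sum_pool(patch_b)`) would fail for global average pooling, which is the intuition behind the trick.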

Tasks: Crowd Counting, Object Counting

Semantic Binary Segmentation using Convolutional Networks without Decoders

1 code implementation · 1 May 2018 · Shubhra Aich, William van der Kamp, Ian Stavness

In this paper, we propose an efficient architecture for semantic image segmentation using the depth-to-space (D2S) operation.
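Depth-to-space rearranges blocks of channels into spatial blocks, upsampling the feature map without a learned decoder. A minimal pure-Python sketch, assuming the channel ordering used by common implementations such as PyTorch's PixelShuffle (channel index `c*r*r + i*r + j` maps to spatial offset `(i, j)`); the paper's actual layout may differ:

```python
def depth_to_space(x, r):
    """Rearrange C*r*r channel maps into C maps of size (H*r) x (W*r).

    x: list of C*r*r channel maps, each an H x W list of lists.
    r: upscaling factor.
    """
    assert len(x) % (r * r) == 0
    C = len(x) // (r * r)
    H, W = len(x[0]), len(x[0][0])
    out = []
    for c in range(C):
        plane = [[0.0] * (W * r) for _ in range(H * r)]
        for i in range(r):            # vertical offset within each r x r block
            for j in range(r):        # horizontal offset within each block
                src = x[c * r * r + i * r + j]
                for h in range(H):
                    for w in range(W):
                        plane[h * r + i][w * r + j] = src[h][w]
        out.append(plane)
    return out
```

For example, with `r=2`, four 1x1 channel maps holding 1, 2, 3, 4 become a single 2x2 map `[[1, 2], [3, 4]]` — the depth dimension is traded for spatial resolution, which is what lets the architecture skip a decoder.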

Tasks: Image Segmentation, Road Segmentation (+1 more)

Improving Object Counting with Heatmap Regulation

2 code implementations · 14 Mar 2018 · Shubhra Aich, Ian Stavness

Adding HR to a simple VGG front-end improves performance on all these benchmarks over a one-look baseline model and yields state-of-the-art performance for car counting.

Tasks: Crowd Counting, Object Counting (+2 more)

DeepWheat: Estimating Phenotypic Traits from Crop Images with Deep Learning

1 code implementation · 30 Sep 2017 · Shubhra Aich, Anique Josuttes, Ilya Ovsyannikov, Keegan Strueby, Imran Ahmed, Hema Sudhakar Duddu, Curtis Pozniak, Steve Shirtliffe, Ian Stavness

In this paper, we investigate estimating emergence and biomass traits from color images and elevation maps of wheat field plots.

Leaf Counting with Deep Convolutional and Deconvolutional Networks

1 code implementation · 24 Aug 2017 · Shubhra Aich, Ian Stavness

In this paper, we investigate the problem of counting rosette leaves from an RGB image, an important task in plant phenotyping.

Tasks: Data Augmentation, Plant Phenotyping
