Search Results for author: Suren Jayasuriya

Found 27 papers, 6 papers with code

PathFinder: Attention-Driven Dynamic Non-Line-of-Sight Tracking with a Mobile Robot

no code implementations 7 Apr 2024 Shenbagaraj Kannapiran, Sreenithy Chandran, Suren Jayasuriya, Spring Berman

The study of non-line-of-sight (NLOS) imaging is growing due to its many potential applications, including rescue operations and pedestrian detection by self-driving cars.

Pathfinder Pedestrian Detection +1

Z-Splat: Z-Axis Gaussian Splatting for Camera-Sonar Fusion

no code implementations 6 Apr 2024 Ziyuan Qu, Omkar Vengurlekar, Mohamad Qadri, Kevin Zhang, Michael Kaess, Christopher Metzler, Suren Jayasuriya, Adithya Pediredla

In this manuscript, we demonstrate that using transient data (from sonars) allows us to address the missing cone problem by sampling high-frequency data along the depth axis.

Autonomous Navigation Novel View Synthesis

Unsupervised Region-Growing Network for Object Segmentation in Atmospheric Turbulence

no code implementations 6 Nov 2023 Dehao Qin, Ripon Saha, Suren Jayasuriya, Jinwei Ye, Nianyi Li

In this paper, we present a two-stage unsupervised foreground object segmentation network tailored for dynamic scenes affected by atmospheric turbulence.

Object Optical Flow Estimation +2

Learning Repeatable Speech Embeddings Using An Intra-class Correlation Regularizer

1 code implementation NeurIPS 2023 Jianwei Zhang, Suren Jayasuriya, Visar Berisha

A good supervised embedding for a specific machine learning task is only sensitive to changes in the label of interest and is invariant to other confounding factors.

Speaker Verification
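
For context on the intra-class correlation (ICC) regularizer named in the title above, the following is a minimal sketch of a one-way random-effects ICC(1,1) penalty over a batch of embeddings. The balanced-batch requirement, the function name, and the exact loss form are illustrative assumptions and not necessarily the regularizer used in the paper.

```python
import torch

def icc_regularizer(embeddings, labels):
    """Loss term rewarding repeatable embeddings: high ICC means within-class
    variation is small relative to between-class spread.
    Assumes >= 2 classes in the batch, each with the same number k of samples."""
    classes = labels.unique()
    groups = torch.stack([embeddings[labels == c] for c in classes])  # (n, k, d)
    n, k, _ = groups.shape
    group_means = groups.mean(dim=1)                      # (n, d)
    grand_mean = group_means.mean(dim=0, keepdim=True)    # (1, d)
    # Between-group and within-group mean squares, per embedding dimension.
    msb = k * ((group_means - grand_mean) ** 2).sum(0) / (n - 1)
    msw = ((groups - group_means.unsqueeze(1)) ** 2).sum((0, 1)) / (n * (k - 1))
    icc = (msb - msw) / (msb + (k - 1) * msw + 1e-8)      # ICC(1,1) per dimension
    return 1.0 - icc.mean()                               # minimize => maximize ICC
```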

Neural Volumetric Reconstruction for Coherent Synthetic Aperture Sonar

no code implementations 16 Jun 2023 Albert W. Reed, Juhyeon Kim, Thomas Blanford, Adithya Pediredla, Daniel C. Brown, Suren Jayasuriya

However, image formation is typically under-constrained due to a limited number of measurements and bandlimited hardware, which limits the capabilities of existing reconstruction methods.

Image Reconstruction Neural Rendering

Eulerian Phase-based Motion Magnification for High-Fidelity Vital Sign Estimation with Radar in Clinical Settings

no code implementations 3 Dec 2022 Md Farhan Tasnim Oshim, Toral Surti, Stephanie Carreiro, Deepak Ganesan, Suren Jayasuriya, Tauhidur Rahman

Efficient and accurate detection of subtle motion generated by small objects in noisy environments, as needed for vital sign monitoring, is challenging, but can be substantially improved with magnification.

Motion Magnification

Towards Live 3D Reconstruction from Wearable Video: An Evaluation of V-SLAM, NeRF, and Videogrammetry Techniques

no code implementations 21 Nov 2022 David Ramirez, Suren Jayasuriya, Andreas Spanias

3D reconstruction algorithms should utilize the low cost and pervasiveness of video camera sensors, from both overhead and soldier-level perspectives.

3D Reconstruction Autonomous Driving +1

Robust Vocal Quality Feature Embeddings for Dysphonic Voice Detection

1 code implementation 17 Nov 2022 Jianwei Zhang, Julie Liss, Suren Jayasuriya, Visar Berisha

In this paper, we propose a deep learning framework for generating acoustic feature embeddings sensitive to vocal quality and robust across different corpora.

Cross-corpus

SINR: Deconvolving Circular SAS Images Using Implicit Neural Representations

1 code implementation 21 Apr 2022 Albert Reed, Thomas Blanford, Daniel C. Brown, Suren Jayasuriya

In theory, deconvolution overcomes bandwidth limitations by reversing the PSF-induced blur and recovering the scene's scattering distribution.
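
As a rough illustration of the PSF-inversion idea described above (the paper itself uses implicit neural representations rather than this classical filter), here is a minimal Wiener deconvolution sketch; the noise_power value and the circular-convolution assumption are simplifications.

```python
import numpy as np

def wiener_deconvolve(observed, psf, noise_power=1e-3):
    """Classical Wiener deconvolution: invert a PSF-induced blur in the
    frequency domain, regularized by an assumed noise-to-signal ratio."""
    # Zero-pad the PSF to the image size and shift its center to the origin
    # so that frequency-domain multiplication matches circular convolution.
    pad = np.zeros_like(observed, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    H = np.fft.fft2(pad)          # PSF transfer function
    Y = np.fft.fft2(observed)     # blurred, noisy observation
    # Wiener filter: conj(H) / (|H|^2 + NSR), attenuating bands where the
    # bandlimited PSF carries little energy instead of amplifying noise.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(W * Y))
```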

Adaptive Subsampling for ROI-based Visual Tracking: Algorithms and FPGA Implementation

no code implementations 17 Dec 2021 Odrika Iqbal, Victor Isaac Torres Muro, Sameeksha Katoch, Andreas Spanias, Suren Jayasuriya

Our adaptive subsampling algorithms comprise an object detector and an ROI predictor (Kalman filter) which operate in conjunction to optimize the energy efficiency of the vision pipeline with the end task being object tracking.

Object Tracking Visual Tracking
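
Below is a minimal sketch of the detector-plus-Kalman-filter ROI prediction loop described in the entry above; the constant-velocity state, noise values, and roi_window helper are illustrative assumptions rather than the paper's FPGA implementation.

```python
import numpy as np

class ROIKalmanPredictor:
    """Constant-velocity Kalman filter over an ROI center (cx, cy, vx, vy).
    Illustrative only: the real pipeline couples this with an object detector."""

    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(4)                                     # [cx, cy, vx, vy]
        self.P = np.eye(4)                                       # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0    # motion model
        self.H = np.eye(2, 4)                                    # measure (cx, cy) only
        self.Q = q * np.eye(4)                                   # process noise
        self.R = r * np.eye(2)                                   # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                        # predicted ROI center

    def update(self, detection_xy):
        z = np.asarray(detection_xy, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                 # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def roi_window(center, size, frame_shape):
    """Only this window of pixels would be read out, saving sensor energy."""
    cx, cy = np.clip(center, 0, [frame_shape[1] - 1, frame_shape[0] - 1])
    half = size // 2
    x0, y0 = int(max(cx - half, 0)), int(max(cy - half, 0))
    return x0, y0, x0 + size, y0 + size
```

In use, predict() supplies the next readout window even on frames where the detector is skipped, and update() corrects the state whenever a fresh detection is available.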

Implicit Neural Representations for Deconvolving SAS Images

no code implementations 16 Dec 2021 Albert Reed, Thomas Blanford, Daniel C. Brown, Suren Jayasuriya

This work is an important first step towards applying neural networks for SAS image deconvolution.

Image Deconvolution

Deep Camera Obscura: An Image Restoration Pipeline for Lensless Pinhole Photography

no code implementations 12 Aug 2021 Joshua D. Rego, Huaijin Chen, Shuai Li, Jinwei Gu, Suren Jayasuriya

The lensless pinhole camera is perhaps the earliest and simplest form of an imaging system using only a pinhole-sized aperture in place of a lens.

Image Restoration

Dynamic CT Reconstruction from Limited Views with Implicit Neural Representations and Parametric Motion Fields

no code implementations ICCV 2021 Albert W. Reed, Hyojin Kim, Rushil Anirudh, K. Aditya Mohan, Kyle Champley, Jingu Kang, Suren Jayasuriya

However, if the scene is moving too fast, then the sampling occurs along a limited view and is difficult to reconstruct due to spatiotemporal ambiguities.

Restoring degraded speech via a modified diffusion model

no code implementations 22 Apr 2021 Jianwei Zhang, Suren Jayasuriya, Visar Berisha

We replace the mel-spectrum upsampler in DiffWave with a deep CNN upsampler, which is trained to alter the degraded speech mel-spectrum to match that of the original speech.
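
A loose PyTorch sketch of a deep CNN that refines a degraded mel-spectrogram toward the clean one, in the spirit of the replacement upsampler described above; channel counts, kernel sizes, and the residual design are assumptions, and DiffWave's temporal upsampling of the conditioner is omitted for brevity.

```python
import torch
import torch.nn as nn

class MelRefinerCNN(nn.Module):
    """Maps a degraded mel-spectrogram (B, n_mels, T) toward a clean one of the
    same shape; a vocoder such as DiffWave would then synthesize the waveform."""

    def __init__(self, n_mels=80, hidden=256, num_layers=4):
        super().__init__()
        layers, ch_in = [], n_mels
        for _ in range(num_layers - 1):
            layers += [nn.Conv1d(ch_in, hidden, kernel_size=5, padding=2),
                       nn.BatchNorm1d(hidden),
                       nn.ReLU()]
            ch_in = hidden
        layers.append(nn.Conv1d(ch_in, n_mels, kernel_size=5, padding=2))
        self.net = nn.Sequential(*layers)

    def forward(self, degraded_mel):
        # Predict a residual correction so the network only has to learn the
        # difference between the degraded and clean spectra.
        return degraded_mel + self.net(degraded_mel)

# Training would minimize e.g. an L1 loss against the clean mel-spectrogram:
# loss = torch.nn.functional.l1_loss(model(degraded_mel), clean_mel)
```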

Coupling Rendering and Generative Adversarial Networks for Artificial SAS Image Generation

no code implementations 13 Sep 2019 Albert Reed, Isaac Gerg, John McKay, Daniel Brown, David Williams, Suren Jayasuriya

Acquisition of Synthetic Aperture Sonar (SAS) datasets is bottlenecked by the costly deployment of SAS imaging systems, and even when data acquisition is possible, the data is often skewed towards containing barren seafloor rather than objects of interest.

Generative Adversarial Network Image Generation

Adaptive Lighting for Data-Driven Non-Line-of-Sight 3D Localization and Object Identification

no code implementations 28 May 2019 Sreenithy Chandran, Suren Jayasuriya

We achieve an average object identification accuracy of 87.1% across four classes of objects, and localize the NLOS object's centroid with a mean-squared error (MSE) of 1.97 cm in the occluded region for real data taken from a hardware prototype.

Non-Parametric Priors For Generative Adversarial Networks

no code implementations 16 May 2019 Rajhans Singh, Pavan Turaga, Suren Jayasuriya, Ravi Garg, Martin W. Braun

The advent of generative adversarial networks (GAN) has enabled new capabilities in synthesis, interpolation, and data augmentation heretofore considered very challenging.

Data Augmentation Image Generation

CS-VQA: Visual Question Answering with Compressively Sensed Images

no code implementations 8 Jun 2018 Li-Chi Huang, Kuldeep Kulkarni, Anik Jha, Suhas Lohit, Suren Jayasuriya, Pavan Turaga

Visual Question Answering (VQA) is a complex semantic task requiring both natural language processing and visual recognition.

Question Answering Visual Question Answering

EVA$^2$: Exploiting Temporal Redundancy in Live Computer Vision

no code implementations 16 Mar 2018 Mark Buckler, Philip Bedoukian, Suren Jayasuriya, Adrian Sampson

Hardware support for deep convolutional neural networks (CNNs) is critical to advanced computer vision in mobile and embedded devices.

Motion Compensation Motion Estimation +1

Compressive Light Field Reconstructions using Deep Learning

no code implementations 5 Feb 2018 Mayank Gupta, Arjun Jauhari, Kuldeep Kulkarni, Suren Jayasuriya, Alyosha Molnar, Pavan Turaga

We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured from a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera.

Compressive Sensing

Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

no code implementations 2 Sep 2015 Suren Jayasuriya, Adithya Pediredla, Sriram Sivaramakrishnan, Alyosha Molnar, Ashok Veeraraghavan

In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor.
