no code implementations • 14 Oct 2024 • Md Farhan Tasnim Oshim, Albert Reed, Suren Jayasuriya, Tauhidur Rahman
Inverse Synthetic Aperture Radar (ISAR) imaging of small everyday objects is a formidable challenge due to their limited Radar Cross-Section (RCS) and the inherent resolution constraints of radar systems.
1 code implementation • 29 Aug 2024 • Ripon Kumar Saha, Esen Salcin, Jihoo Kim, Joseph Smith, Suren Jayasuriya
In this paper, we present a comparative analysis of classical image gradient methods for $C_n^2$ estimation and modern deep learning-based methods leveraging convolutional neural networks.
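As a rough illustration of the classical side of that comparison, the sketch below computes a gradient-based turbulence-strength proxy per frame; the function name and normalization are illustrative assumptions, not the paper's exact $C_n^2$ estimator.

```python
# Minimal sketch (not the paper's estimator): a classical gradient-based proxy
# for turbulence strength, computed per frame with NumPy.
import numpy as np

def gradient_turbulence_proxy(frames):
    """frames: iterable of 2-D grayscale arrays; returns one score per frame."""
    scores = []
    for f in frames:
        f = f.astype(np.float64)
        gy, gx = np.gradient(f)                  # finite-difference image gradients
        grad_energy = np.mean(gx**2 + gy**2)     # mean squared gradient magnitude
        scores.append(grad_energy / (f.var() + 1e-12))  # normalize by image contrast
    return np.array(scores)

# Usage: scores = gradient_turbulence_proxy(video_frames)
# A CNN-based method would instead regress Cn^2 directly from image patches.
```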
no code implementations • CVPR 2024 • Ripon Kumar Saha, Dehao Qin, Nianyi Li, Jinwei Ye, Suren Jayasuriya
Tackling image degradation due to atmospheric turbulence, particularly in dynamic environments, remains a challenge for long-range imaging systems.
no code implementations • 7 Apr 2024 • Shenbagaraj Kannapiran, Sreenithy Chandran, Suren Jayasuriya, Spring Berman
The study of non-line-of-sight (NLOS) imaging is growing due to its many potential applications, including rescue operations and pedestrian detection by self-driving cars.
1 code implementation • 6 Apr 2024 • Ziyuan Qu, Omkar Vengurlekar, Mohamad Qadri, Kevin Zhang, Michael Kaess, Christopher Metzler, Suren Jayasuriya, Adithya Pediredla
In this manuscript, we demonstrate that using transient data (from sonars) allows us to address the missing cone problem by sampling high-frequency data along the depth axis.
no code implementations • 6 Nov 2023 • Dehao Qin, Ripon Saha, Suren Jayasuriya, Jinwei Ye, Nianyi Li
In this paper, we present an unsupervised approach for segmenting moving objects in videos degraded by atmospheric turbulence.
1 code implementation • NeurIPS 2023 • Jianwei Zhang, Suren Jayasuriya, Visar Berisha
A good supervised embedding for a specific machine learning task is only sensitive to changes in the label of interest and is invariant to other confounding factors.
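One common way to encourage that invariance is adversarial confound removal with gradient reversal; the sketch below is an illustrative assumption, not necessarily the authors' training objective.

```python
# Sketch: train an encoder so a label head succeeds while an adversarial head
# (behind gradient reversal) fails to predict the confounding factor.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output            # flip the gradient flowing into the encoder

class InvariantEmbedding(nn.Module):
    def __init__(self, in_dim, emb_dim, n_labels, n_confounds):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.label_head = nn.Linear(emb_dim, n_labels)
        self.confound_head = nn.Linear(emb_dim, n_confounds)

    def forward(self, x):
        z = self.encoder(x)
        return self.label_head(z), self.confound_head(GradReverse.apply(z))

# Training minimizes cross-entropy on both heads; the reversed gradient pushes the
# encoder toward embeddings that carry the label but not the confounder.
```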
no code implementations • 16 Jun 2023 • Albert W. Reed, Juhyeon Kim, Thomas Blanford, Adithya Pediredla, Daniel C. Brown, Suren Jayasuriya
However, image formation is typically under-constrained due to a limited number of measurements and bandlimited hardware, which limits the capabilities of existing reconstruction methods.
no code implementations • 3 Dec 2022 • Md Farhan Tasnim Oshim, Toral Surti, Stephanie Carreiro, Deepak Ganesan, Suren Jayasuriya, Tauhidur Rahman
Efficient and accurate detection of subtle motion generated from small objects in noisy environments, as needed for vital sign monitoring, is challenging, but can be substantially improved with magnification.
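The core magnification idea can be sketched as temporal band-pass amplification around the expected vital-sign frequency; the data layout, band edges, and gain below are illustrative assumptions and not the paper's sensing pipeline.

```python
# Minimal sketch of motion magnification: band-pass filter each pixel's time
# series around the vital-sign band and amplify it before adding it back.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fs, low_hz=0.8, high_hz=2.0, gain=20.0):
    """frames: (time, H, W) array sampled at fs Hz; returns amplified frames."""
    video = frames.astype(np.float64)
    b, a = butter(2, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    band = filtfilt(b, a, video, axis=0)     # per-pixel temporal filtering
    return video + gain * band
```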
no code implementations • 21 Nov 2022 • David Ramirez, Suren Jayasuriya, Andreas Spanias
3D reconstruction algorithms should utilize the low cost and pervasiveness of video camera sensors, from both overhead and soldier-level perspectives.
1 code implementation • 17 Nov 2022 • Jianwei Zhang, Julie Liss, Suren Jayasuriya, Visar Berisha
In this paper, we propose a deep learning framework for generating acoustic feature embeddings sensitive to vocal quality and robust across different corpora.
1 code implementation • 21 Apr 2022 • Albert Reed, Thomas Blanford, Daniel C. Brown, Suren Jayasuriya
In theory, deconvolution overcomes bandwidth limitations by reversing the PSF-induced blur and recovering the scene's scattering distribution.
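For context, classical deconvolution of this kind is often done with a Wiener filter in the frequency domain; the sketch below is a generic illustration with an assumed noise-to-signal constant, not the paper's SAS-specific method.

```python
# Minimal sketch of frequency-domain Wiener deconvolution for a known PSF.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
    """blurred, psf: 2-D arrays of the same shape (PSF centered); returns estimate."""
    H = np.fft.fft2(np.fft.ifftshift(psf))              # PSF transfer function
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal) # regularized inverse filter
    return np.real(np.fft.ifft2(W * B))

# With a bandlimited PSF, |H| is near zero at high frequencies, so the regularizer
# controls how much of the scene's scattering distribution can be recovered.
```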
no code implementations • 17 Dec 2021 • Odrika Iqbal, Victor Isaac Torres Muro, Sameeksha Katoch, Andreas Spanias, Suren Jayasuriya
Our adaptive subsampling algorithms comprise an object detector and an ROI predictor (Kalman filter) which operate in conjunction to optimize the energy efficiency of the vision pipeline with the end task being object tracking.
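A minimal sketch of the ROI-prediction piece, assuming a constant-velocity state model and illustrative noise settings (not the exact filter used in the paper):

```python
# Kalman filter that predicts the next ROI window so only that region is read out.
import numpy as np

class ROIKalman:
    def __init__(self, cx, cy):
        self.x = np.array([cx, cy, 0.0, 0.0])            # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q, self.R = np.eye(4) * 0.01, np.eye(2) * 1.0

    def predict_roi(self, half=32):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        cx, cy = self.x[:2]
        return (int(cx - half), int(cy - half), int(cx + half), int(cy + half))

    def update(self, detection_xy):                      # detector runs on the ROI only
        z = np.asarray(detection_xy, float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```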
no code implementations • 16 Dec 2021 • Albert Reed, Thomas Blanford, Daniel C. Brown, Suren Jayasuriya
This work is an important first step towards applying neural networks for SAS image deconvolution.
no code implementations • 12 Aug 2021 • Joshua D. Rego, Huaijin Chen, Shuai Li, Jinwei Gu, Suren Jayasuriya
The lensless pinhole camera is perhaps the earliest and simplest form of an imaging system using only a pinhole-sized aperture in place of a lens.
no code implementations • ICCV 2021 • Albert W. Reed, Hyojin Kim, Rushil Anirudh, K. Aditya Mohan, Kyle Champley, Jingu Kang, Suren Jayasuriya
However, if the scene is moving too fast, then the sampling occurs along a limited view and is difficult to reconstruct due to spatiotemporal ambiguities.
no code implementations • 22 Apr 2021 • Jianwei Zhang, Suren Jayasuriya, Visar Berisha
We replace the mel-spectrum upsampler in DiffWave with a deep CNN upsampler, which is trained to alter the degraded speech mel-spectrum to match that of the original speech.
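As a toy stand-in for that component (channel counts and kernel sizes are assumptions, not the authors' architecture), a small CNN that maps a degraded mel-spectrogram toward a corrected one might look like:

```python
# Toy stand-in: a small CNN that predicts a corrected mel-spectrogram, which a
# vocoder such as DiffWave would then condition on.
import torch
import torch.nn as nn

class MelCorrector(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, mel):                  # mel: (batch, n_mels, frames)
        return self.net(mel.unsqueeze(1)).squeeze(1)

# Training would minimize e.g. MSE between MelCorrector(degraded_mel) and clean_mel.
```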
1 code implementation • ICCV 2021 • Nianyi Li, Simron Thapa, Cameron Whyte, Albert W. Reed, Suren Jayasuriya, Jinwei Ye
In this paper, we present a novel unsupervised network to recover the latent distortion-free image.
1 code implementation • ECCV 2020 • John Janiczek, Parth Thaker, Gautam Dasarathy, Christopher S. Edwards, Philip Christensen, Suren Jayasuriya
The dispersion model is introduced to simulate realistic spectral variation, and an efficient method to fit the parameters is presented.
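Fitting such a parametric spectral model typically reduces to nonlinear least squares; the sketch below uses a Gaussian-band stand-in, which is an illustrative assumption rather than the paper's physics-based dispersion model.

```python
# Minimal sketch: fit a parametric band model to a measured spectrum.
import numpy as np
from scipy.optimize import least_squares

def band_model(wavenumber, params):
    """Sum of Gaussian bands: params = [amp1, center1, width1, amp2, ...]."""
    spectrum = np.zeros_like(wavenumber, dtype=float)
    for amp, center, width in np.reshape(params, (-1, 3)):
        spectrum += amp * np.exp(-0.5 * ((wavenumber - center) / width) ** 2)
    return spectrum

def fit_spectrum(wavenumber, measured, init_params):
    residual = lambda p: band_model(wavenumber, p) - measured
    return least_squares(residual, init_params).x
```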
no code implementations • 13 Sep 2019 • Albert Reed, Isaac Gerg, John McKay, Daniel Brown, David Williams, Suren Jayasuriya
Acquisition of Synthetic Aperture Sonar (SAS) datasets is bottlenecked by the costly deployment of SAS imaging systems, and even when data acquisition is possible, the data is often skewed towards containing barren seafloor rather than objects of interest.
no code implementations • 28 May 2019 • Sreenithy Chandran, Suren Jayasuriya
We achieve an average object identification accuracy of 87.1% across four object classes, and localize the NLOS object's centroid with an average mean-squared error (MSE) of 1.97 cm in the occluded region on real data taken from a hardware prototype.
no code implementations • 16 May 2019 • Rajhans Singh, Pavan Turaga, Suren Jayasuriya, Ravi Garg, Martin W. Braun
The advent of generative adversarial networks (GAN) has enabled new capabilities in synthesis, interpolation, and data augmentation heretofore considered very challenging.
no code implementations • 8 Jun 2018 • Li-Chi Huang, Kuldeep Kulkarni, Anik Jha, Suhas Lohit, Suren Jayasuriya, Pavan Turaga
Visual Question Answering (VQA) is a complex semantic task requiring both natural language processing and visual recognition.
no code implementations • 16 Mar 2018 • Mark Buckler, Philip Bedoukian, Suren Jayasuriya, Adrian Sampson
Hardware support for deep convolutional neural networks (CNNs) is critical to advanced computer vision in mobile and embedded devices.
no code implementations • 5 Feb 2018 • Mayank Gupta, Arjun Jauhari, Kuldeep Kulkarni, Suren Jayasuriya, Alyosha Molnar, Pavan Turaga
We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured from a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera.
1 code implementation • ICCV 2017 • Mark Buckler, Suren Jayasuriya, Adrian Sampson
We propose a new image sensor design that can compensate for skipping these stages.
no code implementations • 3 Dec 2016 • Suren Jayasuriya, Orazio Gallo, Jinwei Gu, Jan Kautz
Power consumption is a critical factor for the deployment of embedded computer vision systems.
no code implementations • CVPR 2016 • Huaijin Chen, Suren Jayasuriya, Jiyue Yang, Judy Stephen, Sriram Sivaramakrishnan, Ashok Veeraraghavan, Alyosha Molnar
Deep learning using convolutional neural networks (CNNs) is quickly becoming the state-of-the-art for challenging computer vision applications.
no code implementations • 2 Sep 2015 • Suren Jayasuriya, Adithya Pediredla, Sriram Sivaramakrishnan, Alyosha Molnar, Ashok Veeraraghavan
In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor.
no code implementations • 5 Mar 2015 • Achuta Kadambi, Vage Taamazyan, Suren Jayasuriya, Ramesh Raskar
Time-of-flight cameras may emerge as the 3-D sensor of choice.