no code implementations • 11 Oct 2022 • Nicholas Meegan, Hansi Liu, Bryan Cao, Abrar Alali, Kristin Dana, Marco Gruteser, Shubham Jain, Ashwin Ashok
We introduce ViFiCon, a self-supervised contrastive learning scheme which uses synchronized information across vision and wireless modalities to perform cross-modal association.
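The cross-modal contrastive idea can be illustrated with a symmetric InfoNCE-style objective over time-synchronized vision/wireless embedding pairs. The sketch below is a minimal illustration under assumed shapes and a assumed temperature value, not the paper's exact loss or architecture:

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(vision_emb, wireless_emb, temperature=0.1):
    """Contrastive loss over a batch of time-synchronized embedding pairs.

    vision_emb, wireless_emb: (N, D) tensors; row i of each tensor is assumed to
    come from the same person in the same time window (positive pair), while all
    other rows serve as negatives.
    """
    v = F.normalize(vision_emb, dim=1)
    w = F.normalize(wireless_emb, dim=1)
    logits = v @ w.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Symmetric InfoNCE: match vision -> wireless and wireless -> vision.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```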
1 code implementation • IEEE International Conference on Sensing, Communication, and Networking 2022 • Bryan Bo Cao, Abrar Alali, Hansi Liu, Nicholas Meegan, Marco Gruteser, Kristin Dana, Ashwin Ashok, Shubham Jain
ViTag associates a sequence of vision tracker generated bounding boxes with Inertial Measurement Unit (IMU) data and Wi-Fi Fine Time Measurements (FTM) from smartphones.
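One common way to turn such pairwise affinities into an association is bipartite matching between camera tracklets and phone sequences. The following is a hedged sketch, assuming both modalities have already been summarized into a shared feature space; the function name and cosine-similarity cost are illustrative, not the paper's method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(vision_feats, phone_feats):
    """Assign each camera tracklet to at most one smartphone.

    vision_feats: (V, D) summary features of bounding-box sequences.
    phone_feats:  (P, D) summary features of IMU + Wi-Fi FTM sequences,
                  embedded in the same feature space.
    Returns a list of (vision_idx, phone_idx) pairs.
    """
    # Cost = negative cosine similarity between tracklet and phone embeddings.
    v = vision_feats / np.linalg.norm(vision_feats, axis=1, keepdims=True)
    p = phone_feats / np.linalg.norm(phone_feats, axis=1, keepdims=True)
    cost = -(v @ p.T)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```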
1 code implementation • ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN) 2022 • Hansi Liu, Abrar Alali, Mohamed Ibrahim, Bryan Bo Cao, Nicholas Meegan, Hongyu Li, Marco Gruteser, Shubham Jain, Kristin Dana, Ashwin Ashok, Bin Cheng, HongSheng Lu
In this paper, we present Vi-Fi, a multi-modal system that leverages a user's smartphone WiFi Fine Timing Measurements (FTM) and inertial measurement unit (IMU) sensor data to associate the user detected in camera footage with their corresponding smartphone identifier (e.g., WiFi MAC address).
1 code implementation • 23 Feb 2022 • Moinak Bhattacharya, Shubham Jain, Prateek Prasanna
RadioTransformer fills this critical gap by learning from radiologists' visual search patterns, encoded as 'human visual attention regions' in a cascaded global-focal transformer framework.
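One simple way to inject such human visual attention regions into a network is to re-weight intermediate features with a gaze-derived heatmap. The snippet below is only a hypothetical illustration of that idea, not the paper's cascaded global-focal mechanism; the function name, tensor shapes, and blending rule are all assumptions:

```python
import torch

def apply_gaze_attention(feature_map, gaze_heatmap, alpha=0.5):
    """Blend a feature map with a radiologist gaze heatmap (illustrative only).

    feature_map:  (B, C, H, W) features from a vision backbone.
    gaze_heatmap: (B, 1, H, W) normalized visual-attention regions in [0, 1].
    Features inside attended regions are emphasized; alpha controls how
    strongly the human prior amplifies the learned features.
    """
    weights = 1.0 + alpha * gaze_heatmap      # >= 1 inside attended regions
    return feature_map * weights
```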
no code implementations • 19 Jan 2022 • Junpeng Wang, Liang Wang, Yan Zheng, Chin-Chia Michael Yeh, Shubham Jain, Wei Zhang
With these metrics, one can easily identify the meta-features with the most complementary behaviors across the two classifiers and use them to build a stronger ensemble.
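A simple proxy for such complementarity is the fraction of samples where exactly one of the two classifiers is correct, computed per meta-feature bucket. The sketch below is a minimal illustration of that idea, assuming hypothetical per-sample correctness arrays; it is not the paper's metric:

```python
import numpy as np

def complementarity(correct_a, correct_b):
    """Fraction of samples where exactly one of the two classifiers is right.

    correct_a, correct_b: boolean arrays of per-sample correctness. A high value
    means the classifiers err on different samples, so an ensemble has room to
    improve over either one alone.
    """
    return np.mean(correct_a ^ correct_b)

def rank_meta_feature_values(meta_values, correct_a, correct_b):
    """Rank values of one meta-feature by how complementary the classifiers are."""
    scores = {}
    for value in np.unique(meta_values):
        mask = meta_values == value
        scores[value] = complementarity(correct_a[mask], correct_b[mask])
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```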
no code implementations • 25 Nov 2020 • Reena Elangovan, Shubham Jain, Anand Raghunathan
To efficiently support precision re-configurability in DNN accelerators, we introduce an approximate computing method wherein DNN computations are performed block-wise (a block is a group of bits) and re-configurability is supported at the granularity of blocks.
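The block-wise idea can be made concrete with a toy dot product that decomposes each weight into groups of bits and accumulates one partial product per block, so that low-significance blocks can be dropped to trade accuracy for work. This is a minimal sketch assuming unsigned 8-bit integer weights and a hypothetical block size, not the accelerator's actual datapath:

```python
import numpy as np

def blockwise_dot(weights, activations, block_size=2, skip_low_blocks=0):
    """Dot product computed over bit-blocks of non-negative integer weights.

    Each 8-bit weight is split into groups of `block_size` bits; the partial
    product of each block is scaled by its bit position and accumulated.
    Skipping `skip_low_blocks` low-significance blocks mimics block-level
    precision re-configuration.
    """
    total_bits = 8
    n_blocks = total_bits // block_size
    result = 0
    for b in range(skip_low_blocks, n_blocks):
        shift = b * block_size
        block = (weights >> shift) & ((1 << block_size) - 1)
        result += int((block * activations).sum()) << shift
    return result
```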
no code implementations • 25 Feb 2020 • Sourjya Roy, Shrihari Sridharan, Shubham Jain, Anand Raghunathan
To address this challenge, there is a need for tools that can model the functional impact of non-idealities on DNN training and inference.
no code implementations • 15 Sep 2019 • Shubham Jain, Sumeet Kumar Gupta, Anand Raghunathan
The use of lower precision has emerged as a popular technique to optimize the compute and storage requirements of complex Deep Neural Networks (DNNs).
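As a reference point for what "lower precision" means in practice, the sketch below performs uniform symmetric (fake) quantization of a tensor to a given bit width; it is a generic textbook scheme, not the specific method studied in the paper:

```python
import numpy as np

def quantize_symmetric(x, num_bits=4):
    """Uniform symmetric fake quantization of a tensor to `num_bits`.

    Values are mapped to integers in [-(2^(b-1) - 1), 2^(b-1) - 1] and back,
    so the returned array keeps the input's scale but has only 2^b - 1
    distinct levels.
    """
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale, scale
```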
no code implementations • 31 Aug 2018 • Shubham Jain, Abhronil Sengupta, Kaushik Roy, Anand Raghunathan
We present RxNN, a fast and accurate simulation framework to evaluate large-scale DNNs on resistive crossbar systems.
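The kind of behavior such a framework models can be sketched with a highly simplified crossbar matrix-vector product that maps weights to conductances and perturbs each device with random variation. The conductance range, variation model, and function name below are assumptions for illustration; RxNN's actual circuit models are far more detailed:

```python
import numpy as np

def crossbar_mvm(weights, inputs, g_min=1e-6, g_max=1e-4, sigma=0.05, rng=None):
    """Matrix-vector product on an idealized crossbar with device variation.

    Weights (assumed in [0, 1], shape (rows, cols)) are mapped linearly to
    conductances in [g_min, g_max]; each device gets multiplicative Gaussian
    variation of relative magnitude `sigma`. Inputs are applied as row voltages
    and column currents are read out, so non-idealities perturb the result
    relative to the ideal weights.T @ inputs.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = g_min + weights * (g_max - g_min)
    g_actual = g * (1.0 + sigma * rng.standard_normal(g.shape))
    currents = g_actual.T @ inputs            # column-wise current summation
    # Map currents back to the weight domain for comparison with weights.T @ inputs.
    return (currents - g_min * inputs.sum()) / (g_max - g_min)
```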
no code implementations • 7 Nov 2017 • Sanchari Sen, Shubham Jain, Swagath Venkataramani, Anand Raghunathan
SparCE consists of two key micro-architectural enhancements: a Sparsity Register File (SpRF) that tracks zero registers, and a Sparsity-aware Skip Address (SASA) table that indicates instructions to be skipped.
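A purely behavioral sketch of the effect of these structures is to skip multiply-accumulate work whenever a source register holds zero. The toy interpreter below only illustrates that outcome with a hypothetical instruction encoding; the actual SpRF/SASA mechanism operates on the fetch/decode pipeline, not at this level:

```python
def run_with_sparsity_skip(instructions, registers):
    """Behavioral sketch of skipping ineffectual work on zero operands.

    `instructions` is a list of ('mac', dst, src_a, src_b) tuples; when either
    source register is zero the multiply-accumulate cannot change the result,
    so it is skipped, mimicking the effect of tracking zero registers.
    """
    executed = skipped = 0
    for op, dst, a, b in instructions:
        if op == 'mac':
            if registers[a] == 0 or registers[b] == 0:
                skipped += 1                  # destination would be unchanged
                continue
            registers[dst] += registers[a] * registers[b]
            executed += 1
    return executed, skipped
```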
no code implementations • 1 Nov 2017 • Shubham Jain, Marco Gruteser
Second, we aim to identify when a distracted user is about to enter the street, which can be used to support safety functions such as warning the user to be cautious.
no code implementations • 31 Jul 2017 • Jay Patravali, Shubham Jain, Sasank Chilamkurthy
In this paper, we develop 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using Deep Convolutional Neural Networks (CNNs).
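For orientation, a 2D segmentation network of this kind reduces to an encoder-decoder that outputs per-pixel class logits. The tiny PyTorch model below is only a structural sketch with assumed channel counts and class count, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TinySegNet2D(nn.Module):
    """Minimal 2D encoder-decoder for per-pixel segmentation (illustrative only)."""
    def __init__(self, in_ch=1, num_classes=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),    # per-pixel class logits
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Usage: TinySegNet2D()(torch.randn(1, 1, 128, 128)) -> logits of shape (1, 4, 128, 128)
```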
3 code implementations • 22 Dec 2016 • Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time.
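The sample-at-a-time idea can be illustrated with a toy autoregressive loop over quantized audio values: embed the previous sample, update a recurrent state, and draw the next sample from the predicted distribution. This is a much-simplified sketch, not the paper's hierarchical SampleRNN model, and the class name and sizes are assumptions:

```python
import torch
import torch.nn as nn

class SampleLevelRNN(nn.Module):
    """Toy sample-level autoregressive model over 8-bit quantized audio."""
    def __init__(self, n_levels=256, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_levels, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_levels)

    @torch.no_grad()
    def generate(self, n_samples, device="cpu"):
        sample = torch.zeros(1, 1, dtype=torch.long, device=device)
        h, out = None, []
        for _ in range(n_samples):
            x = self.embed(sample)                  # (1, 1, hidden)
            y, h = self.rnn(x, h)
            probs = torch.softmax(self.out(y[:, -1]), dim=-1)
            sample = torch.multinomial(probs, 1)    # draw the next audio sample
            out.append(sample.item())
        return out
```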