Search Results for author: Shubham Jain

Found 23 papers, 7 papers with code

Inherent Challenges of Post-Hoc Membership Inference for Large Language Models

no code implementations25 Jun 2024 Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye

However, in this paper, we identify inherent challenges in post-hoc MIA evaluation due to potential distribution shifts between collected member and non-member datasets.

Experimental Design Memorization

A Lightweight Measure of Classification Difficulty from Application Dataset Characteristics

no code implementations9 Apr 2024 Bryan Bo Cao, Abhinav Sharma, Lawrence O'Gorman, Michael Coss, Shubham Jain

We show how this measure can help a practitioner select a computationally efficient model for a small dataset 6 to 29x faster than through repeated training and testing.

A Landmark-Aware Visual Navigation Dataset

no code implementations22 Feb 2024 Faith Johnson, Bryan Bo Cao, Kristin Dana, Shubham Jain, Ashwin Ashok

However, recent advances in visual navigation are hampered by the lack of real-world human datasets for efficient supervised representation learning of environments.

Representation Learning Visual Navigation

Feudal Networks for Visual Navigation

no code implementations19 Feb 2024 Faith Johnson, Bryan Bo Cao, Kristin Dana, Shubham Jain, Ashwin Ashok

We introduce a new approach to visual navigation using feudal learning, which employs a hierarchical structure consisting of a worker agent, a mid-level manager, and a high-level manager.

Navigate Visual Navigation

Has Your Pretrained Model Improved? A Multi-head Posterior Based Approach

no code implementations2 Jan 2024 Prince Aboagye, Yan Zheng, Junpeng Wang, Uday Singh Saini, Xin Dai, Michael Yeh, Yujie Fan, Zhongfang Zhuang, Shubham Jain, Liang Wang, Wei zhang

The emergence of pre-trained models has significantly impacted domains from Natural Language Processing (NLP) and Computer Vision to relational datasets.

Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models

1 code implementation23 Oct 2023 Matthieu Meeus, Shubham Jain, Marek Rei, Yves-Alexandre de Montjoye

First, we propose a procedure for the development and evaluation of document-level membership inference for LLMs by leveraging commonly used data sources for training and the model release date.

Misinformation Sentence
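The document-level membership inference idea above can be illustrated with a minimal sketch. This is not the paper's actual procedure: the threshold, the precomputed log-probabilities, and the mean-log-probability score are all illustrative stand-ins for querying a real LLM.

```python
def sequence_logprob(token_logprobs):
    """Mean per-token log-probability of a document under a model.
    Here token_logprobs is a precomputed toy list; a real pipeline
    would query the LLM for these values."""
    return sum(token_logprobs) / len(token_logprobs)

def infer_membership(token_logprobs, threshold=-3.0):
    """Toy document-level MIA: flag a document as a suspected training
    member when its mean log-probability exceeds a calibrated threshold."""
    return sequence_logprob(token_logprobs) > threshold

# A document the model "knows well" (high log-probs) vs. an unseen one.
seen = [-1.2, -0.8, -1.5, -0.9]
unseen = [-4.1, -5.3, -3.8, -4.6]
print(infer_membership(seen))    # True
print(infer_membership(unseen))  # False
```

In practice the threshold would be calibrated on documents known to fall before and after the model's release date, which is the data-source trick the abstract alludes to.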

ViFiT: Reconstructing Vision Trajectories from IMU and Wi-Fi Fine Time Measurements

1 code implementation MobiCom ISACom 2023 Bryan Bo Cao, Abrar Alali, Hansi Liu, Nicholas Meegan, Marco Gruteser, Kristin Dana, Ashwin Ashok, Shubham Jain

Tracking subjects in videos is one of the most widely used functions in camera-based IoT applications, such as security surveillance, smart-city traffic safety enhancement, and vehicle-to-pedestrian communication.

Data-Side Efficiencies for Lightweight Convolutional Neural Networks

no code implementations24 Aug 2023 Bryan Bo Cao, Lawrence O'Gorman, Michael Coss, Shubham Jain

We examine how the choice of data-side attributes for two important visual tasks of image classification and object detection can aid in the choice or design of lightweight convolutional neural networks.

Image Classification Metric Learning +3

PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs

no code implementations2 Jun 2023 Xin Dai, Yujie Fan, Zhongfang Zhuang, Shubham Jain, Chin-Chia Michael Yeh, Junpeng Wang, Liang Wang, Yan Zheng, Prince Osei Aboagye, Wei zhang

Pre-training large models has become prevalent, driven by the ever-growing volume of user-generated content across many machine learning application categories.

Contrastive Learning

ViFiCon: Vision and Wireless Association Via Self-Supervised Contrastive Learning

no code implementations11 Oct 2022 Nicholas Meegan, Hansi Liu, Bryan Cao, Abrar Alali, Kristin Dana, Marco Gruteser, Shubham Jain, Ashwin Ashok

We introduce ViFiCon, a self-supervised contrastive learning scheme which uses synchronized information across vision and wireless modalities to perform cross-modal association.

Contrastive Learning Region Proposal
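A minimal sketch of cross-modal contrastive learning in the spirit of ViFiCon: each vision embedding's positive is the time-synchronized wireless embedding at the same batch index. This is a one-directional InfoNCE objective in pure Python, not the paper's actual loss or architecture; the embeddings and temperature are illustrative.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(vision_emb, wireless_emb, temperature=0.1):
    """One-directional InfoNCE over a batch: pull each vision embedding
    toward its synchronized wireless counterpart (same index), push it
    away from all other wireless embeddings in the batch."""
    loss = 0.0
    for i, v in enumerate(vision_emb):
        logits = [dot(v, w) / temperature for w in wireless_emb]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / len(vision_emb)

# Correctly aligned pairs should score a lower loss than shuffled ones.
vision   = [[1.0, 0.0], [0.0, 1.0]]
aligned  = [[0.9, 0.1], [0.1, 0.9]]
shuffled = [aligned[1], aligned[0]]
assert info_nce(vision, aligned) < info_nce(vision, shuffled)
```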

Vi-Fi: Associating Moving Subjects across Vision and Wireless Sensors

1 code implementation ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN) 2022 Hansi Liu, Abrar Alali, Mohamed Ibrahim, Bryan Bo Cao, Nicholas Meegan, Hongyu Li, Marco Gruteser, Shubham Jain, Kristin Dana, Ashwin Ashok, Bin Cheng, HongSheng Lu

In this paper, we present Vi-Fi, a multi-modal system that leverages a user's smartphone WiFi Fine Timing Measurements (FTM) and inertial measurement unit (IMU) sensor data to associate the user detected in camera footage with their corresponding smartphone identifier (e.g., WiFi MAC address).

Graph Matching Multimodal Association
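The association task can be sketched as a toy matching problem: pair each camera-tracked subject with the phone whose FTM range best agrees with the subject's estimated distance. The paper formulates this as graph matching over multiple cues; the greedy nearest-cost assignment and all names below are simplifications for illustration.

```python
def associate(camera_depths, ftm_ranges):
    """Toy vision-wireless association: greedily match camera tracks to
    phones by the absolute gap between camera-estimated depth (meters)
    and the phone's FTM range (meters)."""
    pairs = sorted(
        (abs(d - r), ci, pi)
        for ci, d in camera_depths.items()
        for pi, r in ftm_ranges.items()
    )
    used_c, used_p, match = set(), set(), {}
    for _, ci, pi in pairs:
        if ci not in used_c and pi not in used_p:
            match[ci] = pi
            used_c.add(ci)
            used_p.add(pi)
    return match

camera = {"track_A": 3.1, "track_B": 7.8}
phones = {"mac_1": 7.5, "mac_2": 3.0}
print(associate(camera, phones))  # {'track_A': 'mac_2', 'track_B': 'mac_1'}
```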

RadioTransformer: A Cascaded Global-Focal Transformer for Visual Attention-guided Disease Classification

1 code implementation23 Feb 2022 Moinak Bhattacharya, Shubham Jain, Prateek Prasanna

RadioTransformer fills this critical gap by learning from radiologists' visual search patterns, encoded as 'human visual attention regions' in a cascaded global-focal transformer framework.

Learning-From-Disagreement: A Model Comparison and Visual Analytics Framework

no code implementations19 Jan 2022 Junpeng Wang, Liang Wang, Yan Zheng, Chin-Chia Michael Yeh, Shubham Jain, Wei zhang

With these metrics, one can easily identify meta-features with the most complementary behaviors in two classifiers, and use them to better ensemble the classifiers.

Binary Classification
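The starting point of learning-from-disagreement is the set of samples on which two classifiers predict differently. A minimal sketch of extracting that set, split by which classifier was right (the framework then ranks meta-features over these subsets, which is omitted here):

```python
def disagreement_set(preds_a, preds_b, labels):
    """Indices where two classifiers disagree, split by which one is
    correct. These subsets are the raw material for comparing the
    classifiers' complementary behaviors."""
    a_wins, b_wins = [], []
    for i, (a, b, y) in enumerate(zip(preds_a, preds_b, labels)):
        if a != b:
            (a_wins if a == y else b_wins).append(i)
    return a_wins, b_wins

preds_a = [1, 0, 1, 1, 0]
preds_b = [1, 1, 0, 1, 1]
labels  = [1, 0, 0, 1, 1]
print(disagreement_set(preds_a, preds_b, labels))  # ([1], [2, 4])
```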

Ax-BxP: Approximate Blocked Computation for Precision-Reconfigurable Deep Neural Network Acceleration

no code implementations25 Nov 2020 Reena Elangovan, Shubham Jain, Anand Raghunathan

To efficiently support precision re-configurability in DNN accelerators, we introduce an approximate computing method wherein DNN computations are performed block-wise (a block is a group of bits) and re-configurability is supported at the granularity of blocks.
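The blocked-computation idea can be sketched in software: split each operand into fixed-width bit blocks, form block-wise partial products, and approximate by computing only the most significant blocks. This toy integer multiply is an illustration of the general principle, not the Ax-BxP design itself; block width and the `keep` policy are arbitrary choices here.

```python
def to_blocks(x, block_bits=2, n_blocks=4):
    """Split an unsigned integer into little-endian bit blocks."""
    mask = (1 << block_bits) - 1
    return [(x >> (i * block_bits)) & mask for i in range(n_blocks)]

def approx_blocked_mul(a, b, block_bits=2, n_blocks=4, keep=3):
    """Toy blocked multiply: accumulate block-wise partial products,
    but use only the `keep` most significant blocks of each operand,
    trading a small error for fewer block multiplications."""
    A = to_blocks(a, block_bits, n_blocks)
    B = to_blocks(b, block_bits, n_blocks)
    lo = n_blocks - keep  # least significant blocks are dropped
    acc = 0
    for i in range(lo, n_blocks):
        for j in range(lo, n_blocks):
            acc += (A[i] * B[j]) << ((i + j) * block_bits)
    return acc

a, b = 201, 101
print(a * b, approx_blocked_mul(a, b))  # 20301 20000 -- ~1.5% error
```

With `keep=n_blocks` every partial product is computed and the result is exact, which is the reconfigurability knob: precision scales with how many blocks participate.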

TxSim: Modeling Training of Deep Neural Networks on Resistive Crossbar Systems

no code implementations25 Feb 2020 Sourjya Roy, Shrihari Sridharan, Shubham Jain, Anand Raghunathan

To address this challenge, there is a need for tools that can model the functional impact of non-idealities on DNN training and inference.

Computational Efficiency

TiM-DNN: Ternary in-Memory accelerator for Deep Neural Networks

no code implementations15 Sep 2019 Shubham Jain, Sumeet Kumar Gupta, Anand Raghunathan

The use of lower precision has emerged as a popular technique to optimize the compute and storage requirements of complex Deep Neural Networks (DNNs).

Image Classification Language Modelling
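Ternary precision, as exploited by TiM-DNN, restricts weights to three levels. A minimal sketch of one common threshold-based ternary quantizer (the threshold and scaling rule below are generic illustrations, not TiM-DNN's specific scheme):

```python
def ternarize(weights, threshold=0.05):
    """Map each weight to {-alpha, 0, +alpha}: weights below the
    threshold in magnitude are zeroed, and alpha is set to the mean
    magnitude of the surviving weights."""
    signs = [0 if abs(w) < threshold else (1 if w > 0 else -1) for w in weights]
    kept = [abs(w) for w, s in zip(weights, signs) if s != 0]
    alpha = sum(kept) / len(kept) if kept else 0.0
    return [s * alpha for s in signs], alpha

w = [0.30, -0.02, -0.25, 0.01, 0.35]
q, alpha = ternarize(w)
print(q, alpha)  # three levels: +alpha, 0, -alpha
```

Storing only a sign per weight plus one scale is what makes ternary networks attractive for in-memory accelerators: the multiply collapses to add/subtract/skip.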

RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars

no code implementations31 Aug 2018 Shubham Jain, Abhronil Sengupta, Kaushik Roy, Anand Raghunathan

We present RxNN, a fast and accurate simulation framework to evaluate large-scale DNNs on resistive crossbar systems.

SparCE: Sparsity aware General Purpose Core Extensions to Accelerate Deep Neural Networks

no code implementations7 Nov 2017 Sanchari Sen, Shubham Jain, Swagath Venkataramani, Anand Raghunathan

SparCE consists of two key micro-architectural enhancements: a Sparsity Register File (SpRF) that tracks zero registers, and a Sparsity-Aware Skip Address (SASA) table that indicates instructions to be skipped.
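A software analogue of the skipping behavior: issue a multiply-accumulate only when the activation operand is non-zero. SparCE does this in hardware by tracking zero registers and skipping the dependent instructions; the dot product and MAC counter below are purely illustrative.

```python
def sparse_dot(activations, weights):
    """Dot product that skips multiply-accumulates on zero activations,
    returning both the result and the number of MACs actually issued."""
    acc, macs = 0, 0
    for a, w in zip(activations, weights):
        if a != 0:  # zero operand detected -> dependent MAC skipped
            acc += a * w
            macs += 1
    return acc, macs

acts = [0, 3, 0, 0, 2, 0]  # ReLU outputs are often mostly zero
wts  = [5, 1, 7, 2, 4, 9]
print(sparse_dot(acts, wts))  # (11, 2) -- 2 MACs instead of 6
```

The payoff grows with activation sparsity, which is exactly the regime ReLU-based DNNs operate in.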


Recognizing Textures with Mobile Cameras for Pedestrian Safety Applications

no code implementations1 Nov 2017 Shubham Jain, Marco Gruteser

Second, we aim at identifying when a distracted user is about to enter the street, which can be used to support safety functions such as warning the user to be cautious.

Material Recognition object-detection +1

2D-3D Fully Convolutional Neural Networks for Cardiac MR Segmentation

no code implementations31 Jul 2017 Jay Patravali, Shubham Jain, Sasank Chilamkurthy

In this paper, we develop 2D and 3D segmentation pipelines for fully automated cardiac MR image segmentation using deep convolutional neural networks (CNNs).

Image Segmentation Segmentation +1
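Segmentation pipelines like this are commonly evaluated with the Dice similarity coefficient, the standard overlap metric for cardiac MR segmentation. A minimal sketch on flattened binary masks (the toy masks are illustrative; the paper's exact evaluation protocol is not reproduced here):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2*|P ∩ T| / (|P| + |T|), with 1.0 for two empty masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```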
