Search Results for author: Ashwin Ashok

Found 8 papers, 3 papers with code

A Landmark-Aware Visual Navigation Dataset

no code implementations • 22 Feb 2024 • Faith Johnson, Bryan Bo Cao, Kristin Dana, Shubham Jain, Ashwin Ashok

Recent advancements in visual navigation are held back by the lack of real-world human datasets for efficient supervised representation learning of environments.

Representation Learning, Visual Navigation

Feudal Networks for Visual Navigation

no code implementations • 19 Feb 2024 • Faith Johnson, Bryan Bo Cao, Kristin Dana, Shubham Jain, Ashwin Ashok

We introduce a new approach to visual navigation using feudal learning, which employs a hierarchical structure consisting of a worker agent, a mid-level manager, and a high-level manager.

Navigate, Visual Navigation
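
A minimal sketch of the three-level hierarchy the abstract describes: a high-level manager picks goals, a mid-level manager refines them into sub-goals, and a worker agent emits low-level actions. All class names, timescales, and policies below are illustrative placeholders, not the paper's model.

```python
# Feudal navigation loop: decisions flow down the hierarchy at
# progressively faster timescales.
import random

class HighLevelManager:
    def choose_goal(self, obs):
        return {"region": "kitchen"}        # e.g., a target region of the map

class MidLevelManager:
    def choose_subgoal(self, obs, goal):
        return {"waypoint": (3, 4)}         # e.g., next waypoint toward the goal

class Worker:
    ACTIONS = ("forward", "turn_left", "turn_right")
    def act(self, obs, subgoal):
        return random.choice(self.ACTIONS)  # stand-in for a learned policy

def navigate(env_step, obs, max_steps=100):
    high, mid, worker = HighLevelManager(), MidLevelManager(), Worker()
    goal = subgoal = None
    for t in range(max_steps):
        if t % 50 == 0:                     # slowest timescale
            goal = high.choose_goal(obs)
        if t % 10 == 0:                     # intermediate timescale
            subgoal = mid.choose_subgoal(obs, goal)
        obs, done = env_step(worker.act(obs, subgoal))  # every step
        if done:
            break
    return obs
```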

ViFiT: Reconstructing Vision Trajectories from IMU and Wi-Fi Fine Time Measurements

1 code implementation • MobiCom ISACom 2023 • Bryan Bo Cao, Abrar Alali, Hansi Liu, Nicholas Meegan, Marco Gruteser, Kristin Dana, Ashwin Ashok, Shubham Jain

Tracking subjects in videos is one of the most widely used functions in camera-based IoT applications such as security surveillance, smart-city traffic safety enhancement, and vehicle-to-pedestrian communication.
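
A hedged sketch of the kind of reconstruction the title describes: a model that maps synchronized windows of IMU and Wi-Fi FTM readings to a bounding-box trajectory. The architecture, dimensions, and names here are assumptions for illustration, not the paper's actual ViFiT model.

```python
# Reconstruct a per-step bounding box (x, y, w, h) from sensor windows.
import torch
import torch.nn as nn

class TrajectoryReconstructor(nn.Module):
    def __init__(self, imu_dim=9, ftm_dim=2, hidden=64, window=10):
        super().__init__()
        # Encode each modality's window independently, then fuse.
        self.imu_enc = nn.GRU(imu_dim, hidden, batch_first=True)
        self.ftm_enc = nn.GRU(ftm_dim, hidden, batch_first=True)
        # Decode the fused embedding into one box per time step.
        self.decoder = nn.Linear(2 * hidden, window * 4)
        self.window = window

    def forward(self, imu, ftm):
        # imu: (batch, window, imu_dim); ftm: (batch, window, ftm_dim)
        _, h_imu = self.imu_enc(imu)
        _, h_ftm = self.ftm_enc(ftm)
        fused = torch.cat([h_imu[-1], h_ftm[-1]], dim=-1)
        return self.decoder(fused).view(-1, self.window, 4)

model = TrajectoryReconstructor()
boxes = model(torch.randn(8, 10, 9), torch.randn(8, 10, 2))  # (8, 10, 4)
```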

ViFiCon: Vision and Wireless Association Via Self-Supervised Contrastive Learning

no code implementations • 11 Oct 2022 • Nicholas Meegan, Hansi Liu, Bryan Cao, Abrar Alali, Kristin Dana, Marco Gruteser, Shubham Jain, Ashwin Ashok

We introduce ViFiCon, a self-supervised contrastive learning scheme which uses synchronized information across vision and wireless modalities to perform cross-modal association.

Contrastive Learning, Region Proposal
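
A minimal sketch of a cross-modal contrastive objective of the kind the abstract describes: embeddings of time-synchronized vision and wireless windows from the same subject are pulled together, mismatched pairs pushed apart. The encoders, dimensions, and the symmetric InfoNCE form are assumptions, not the paper's exact ViFiCon loss.

```python
# Symmetric InfoNCE between paired vision and wireless embeddings.
import torch
import torch.nn.functional as F

def cross_modal_infonce(vision_emb, wireless_emb, temperature=0.07):
    # vision_emb, wireless_emb: (batch, dim); row i of each comes from
    # the same subject at the same time (a positive pair).
    v = F.normalize(vision_emb, dim=-1)
    w = F.normalize(wireless_emb, dim=-1)
    logits = v @ w.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(v.size(0))       # positives lie on the diagonal
    # Match in both directions: vision->wireless and wireless->vision.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = cross_modal_infonce(torch.randn(16, 128), torch.randn(16, 128))
```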

Vi-Fi: Associating Moving Subjects across Vision and Wireless Sensors

1 code implementation • ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN) 2022 • Hansi Liu, Abrar Alali, Mohamed Ibrahim, Bryan Bo Cao, Nicholas Meegan, Hongyu Li, Marco Gruteser, Shubham Jain, Kristin Dana, Ashwin Ashok, Bin Cheng, HongSheng Lu

In this paper, we present Vi-Fi, a multi-modal system that leverages a user’s smartphone WiFi Fine Timing Measurements (FTM) and inertial measurement unit (IMU) sensor data to associate the user detected in camera footage with their corresponding smartphone identifier (e.g., WiFi MAC address).

Graph Matching, Multimodal Association
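
An illustrative sketch of the association step framed as bipartite matching, in the spirit of the Graph Matching tag: camera detections on one side, smartphone identifiers on the other, matched by an affinity score. The toy affinity function (comparing a camera-derived range to the FTM range) is a placeholder, not the paper's scoring method.

```python
# Optimal one-to-one association via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(camera_tracks, phone_tracks, affinity):
    # Cost matrix: low cost = trajectories agree well.
    cost = np.array([[-affinity(c, p) for p in phone_tracks]
                     for c in camera_tracks])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

# Toy affinity: negative distance between a camera-estimated range
# sequence and an FTM range sequence.
pairs = associate(
    camera_tracks=[np.array([2.0, 2.1]), np.array([5.0, 5.2])],
    phone_tracks=[np.array([5.1, 5.3]), np.array([2.0, 2.2])],
    affinity=lambda c, p: -np.linalg.norm(c - p),
)
print(pairs)  # camera track 0 <-> phone 1, camera track 1 <-> phone 0
```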

DeepLight: Robust & Unobtrusive Real-time Screen-Camera Communication for Real-World Displays

no code implementations • 11 May 2021 • Vu Tran, Gihan Jayatilaka, Ashwin Ashok, Archan Misra

We show that a fully functional DeepLight system is able to robustly achieve high decoding accuracy (frame error rate < 0.2) and moderately high data goodput (>= 0.95 kbps) using a human-held smartphone camera, even over larger screen-camera distances (approx. 2 m).

Object Detection
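
A back-of-the-envelope check relating the two figures quoted above, under assumed parameters (payload per frame and frame rate are not stated here): if each display frame carries a fixed payload and a fraction FER of frames is lost, goodput = bits_per_frame * frame_rate * (1 - FER).

```python
# Goodput arithmetic under assumed (illustrative) parameters.
bits_per_frame = 40   # assumed payload per screen frame, bits
frame_rate = 30       # assumed display refresh used for data frames, Hz
fer = 0.2             # frame error rate upper bound quoted above

goodput_bps = bits_per_frame * frame_rate * (1 - fer)
print(f"{goodput_bps / 1000:.2f} kbps")  # 0.96 kbps, consistent with >= 0.95 kbps
```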

A Data Set of Internet Claims and Comparison of their Sentiments with Credibility

1 code implementation • 22 Nov 2019 • Amey Parundekar, Susan Elias, Ashwin Ashok

Furthermore, we intend to create this data set not only for classifying news but also for finding patterns that reveal the intent behind misinformation.

Fact Checking, General Classification +1
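
A hypothetical sketch of the kind of classification task such a data set supports: predicting claim credibility from text. The tiny inline dataset and the TF-IDF + logistic regression model are illustrative choices, not the paper's pipeline.

```python
# Minimal credibility classifier over claim text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

claims = [
    "Vaccine X cures all diseases overnight",
    "City council approves new budget for road repairs",
    "Scientists confirm the moon landing was staged",
    "Local school wins regional science competition",
]
credible = [0, 1, 0, 1]  # 1 = credible, 0 = not credible (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(claims, credible)
print(clf.predict(["Miracle pill melts fat instantly"]))
```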

Optimal Radiometric Calibration for Camera-Display Communication

no code implementations • 8 Jan 2015 • Wenjia Yuan, Eric Wengrowski, Kristin J. Dana, Ashwin Ashok, Marco Gruteser, Narayan Mandayam

We present a novel method for communicating between a camera and a display by embedding and recovering hidden, dynamic information within a displayed image.
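
A minimal sketch of radiometric calibration in the sense the abstract uses it: estimating the mapping from displayed intensity to camera-measured intensity so embedded signals can be recovered. The polynomial fit and synthetic response below stand in for the paper's optimal calibration; all values are illustrative.

```python
# Fit forward and inverse radiometric responses from sample pairs.
import numpy as np

displayed = np.linspace(0.0, 1.0, 11)            # intensities sent to the display
measured = 0.8 * displayed**1.8 + 0.05           # synthetic nonlinear camera response

forward = np.polyfit(displayed, measured, deg=3)  # display -> camera
inverse = np.polyfit(measured, displayed, deg=3)  # camera -> display (for recovery)

# Recover a displayed intensity from its camera measurement:
recovered = np.polyval(inverse, np.polyval(forward, 0.5))
print(round(recovered, 3))  # ~0.5
```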
