Search Results for author: Subrata Biswas

Found 4 papers, 2 papers with code

LLaSA: A Multimodal LLM for Human Activity Analysis Through Wearable and Smartphone Sensors

1 code implementation • 20 Jun 2024 • Sheikh Asif Imran, Mohammad Nur Hossain Khan, Subrata Biswas, Bashima Islam

In this paper, we introduce LLaSA (Large Language and Sensor Assistant), a multimodal large language model built on LIMU-BERT and Llama, designed to interpret and answer queries related to human activities and motion analysis, leveraging sensor data and contextual reasoning.
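An illustrative sketch of how such a model might couple a sensor encoder with an LLM. This is an assumption, not the paper's actual architecture: a common recipe for multimodal LLMs is to project sensor embeddings (e.g., from LIMU-BERT) into the LLM's token-embedding space and prepend them as a soft prompt before the text tokens. All dimensions and names below are made up for illustration.

```python
import numpy as np

# Hypothetical sketch (not LLaSA's actual code): project per-window IMU
# embeddings into the LLM token space and prepend them to text tokens.

SENSOR_DIM = 72   # hypothetical LIMU-BERT-style embedding size
LLM_DIM = 4096    # Llama hidden size

rng = np.random.default_rng(0)
# A learned linear projection in the real model; randomly initialized here.
W = rng.standard_normal((SENSOR_DIM, LLM_DIM)) * 0.02

def project_sensor_tokens(sensor_emb: np.ndarray) -> np.ndarray:
    """Map (n_windows, SENSOR_DIM) sensor embeddings into LLM token space."""
    return sensor_emb @ W

def build_multimodal_prefix(sensor_emb: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Prepend projected sensor tokens to the text-token embeddings."""
    return np.concatenate([project_sensor_tokens(sensor_emb), text_emb], axis=0)

sensors = rng.standard_normal((8, SENSOR_DIM))   # 8 IMU windows
text = rng.standard_normal((16, LLM_DIM))        # 16 text-token embeddings
fused = build_multimodal_prefix(sensors, text)
print(fused.shape)  # (24, 4096)
```

The fused sequence would then be fed through the LLM's transformer so the text tokens can attend to the sensor tokens.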

Tasks: Instruction Following (+3 more)

Missingness-resilient Video-enhanced Multimodal Disfluency Detection

1 code implementation • 11 Jun 2024 • Payal Mohapatra, Shamika Likhite, Subrata Biswas, Bashima Islam, Qi Zhu

In experiments across five disfluency-detection tasks, our unified multimodal approach significantly outperforms audio-only unimodal methods, yielding an average absolute improvement of 10 percentage points when both video and audio modalities are available, and 7 points even when the video modality is missing in half of the samples.
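One simple way to tolerate a missing video stream, sketched here as an assumption rather than the paper's actual architecture, is to fuse modality embeddings under a presence mask, averaging only over the modalities actually observed. The function name and shapes below are illustrative.

```python
import numpy as np

# Hedged sketch of missingness-resilient fusion: absent modalities get
# zero weight, and the mean is taken over the modalities present.

def masked_fusion(embeddings: np.ndarray, present) -> np.ndarray:
    """embeddings: (n_modalities, dim); present: per-modality availability flags."""
    mask = np.asarray(present, dtype=float)[:, None]      # (M, 1)
    n_available = max(mask.sum(), 1.0)                    # avoid divide-by-zero
    return (embeddings * mask).sum(axis=0) / n_available  # mean over present modalities

audio = np.full(4, 2.0)
video = np.full(4, 4.0)
both = masked_fusion(np.stack([audio, video]), [True, True])        # -> [3. 3. 3. 3.]
audio_only = masked_fusion(np.stack([audio, video]), [True, False]) # -> [2. 2. 2. 2.]
```

Because the fused vector is a mean over available modalities only, the downstream classifier sees inputs on the same scale whether or not video was captured.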

Memory-efficient Energy-adaptive Inference of Pre-Trained Models on Batteryless Embedded Systems

no code implementations • 16 May 2024 • Pietro Farina, Subrata Biswas, Eren Yıldız, Khakim Akhunov, Saad Ahmed, Bashima Islam, Kasım Sinan Yıldırım

Recent works on compression mostly focus on time and memory, but often ignore energy dynamics or significantly reduce the accuracy of pre-trained DNNs.

RecNet: Early Attention Guided Feature Recovery

no code implementations • 18 Feb 2023 • Subrata Biswas, Bashima Islam

Uncertainty in sensors results in corrupted input streams and hinders the performance of Deep Neural Networks (DNN), which focus on deducing information from data.
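A hedged illustration of the general idea, not RecNet itself (no code release is listed): recovering corrupted input features can be sketched as masking unreliable frames and imputing them from reliable neighbors, weighted by an attention-like reliability score. All names and shapes below are made up.

```python
import numpy as np

# Illustrative feature recovery: corrupted frames contribute zero weight,
# and are filled with a reliability-weighted mean of the clean frames.

def recover_features(frames: np.ndarray, corrupted: np.ndarray,
                     scores: np.ndarray) -> np.ndarray:
    """frames: (T, D) feature stream; corrupted: (T,) bool; scores: (T,) reliability."""
    out = frames.copy()
    w = np.where(corrupted, 0.0, scores)       # zero weight on corrupted frames
    if w.sum() == 0.0:
        return out                             # nothing reliable to borrow from
    context = (frames * w[:, None]).sum(axis=0) / w.sum()
    out[corrupted] = context                   # fill corrupted frames with weighted mean
    return out

frames = np.array([[1.0, 1.0], [9.0, 9.0], [3.0, 3.0]])  # middle frame corrupted
corrupted = np.array([False, True, False])
scores = np.array([1.0, 1.0, 1.0])
recovered = recover_features(frames, corrupted, scores)
print(recovered[1])  # [2. 2.]  (mean of the two clean frames)
```

In a real system the reliability scores would come from a learned mechanism (e.g., early-layer attention) rather than being supplied by hand.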

Tasks: Event Detection, Sound Event Detection
