Search Results for author: Fahim Kawsar

Found 23 papers, 8 papers with code

Using Self-supervised Learning Can Improve Model Fairness

1 code implementation • 4 Jun 2024 • Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar

Self-supervised learning (SSL) has become the de facto training paradigm of large models, where pre-training is followed by supervised fine-tuning using domain-specific data and labels.

Fairness Self-Supervised Learning

Time-bound Contextual Bio-ID Generation for Minimalist Wearables

no code implementations • 1 Mar 2024 • Adiba Orzikulova, Diana A. Vasile, Fahim Kawsar, Chulhong Min

As wearable devices become increasingly miniaturized and powerful, a new opportunity arises for instant and dynamic device-to-device collaboration and human-to-device interaction.

Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras

no code implementations • 25 Jan 2024 • Chulhong Min, Juheon Yi, Utku Gunay Acer, Fahim Kawsar

Overlapping cameras offer exciting opportunities to view a scene from different angles, allowing for more advanced, comprehensive and robust analysis.

Object

Balancing Continual Learning and Fine-tuning for Human Activity Recognition

no code implementations • 4 Jan 2024 • Chi Ian Tang, Lorena Qendro, Dimitris Spathis, Fahim Kawsar, Akhil Mathur, Cecilia Mascolo

These schemes re-purpose contrastive learning for knowledge retention, and Kaizen combines that with self-training in a unified scheme that can leverage unlabelled and labelled data for continual learning.

Continual Learning Contrastive Learning +3

Evaluating Fairness in Self-supervised and Supervised Models for Sequential Data

no code implementations • 3 Jan 2024 • Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar

Self-supervised learning (SSL) has become the de facto training paradigm of large models, where pre-training is followed by supervised fine-tuning using domain-specific data and labels.

Fairness Self-Supervised Learning

Synergy: Towards On-Body AI via Tiny AI Accelerator Collaboration on Wearables

no code implementations • 11 Dec 2023 • Taesik Gong, Si Young Jang, Utku Günay Acer, Fahim Kawsar, Chulhong Min

The advent of tiny artificial intelligence (AI) accelerators enables AI to run at the extreme edge, offering reduced latency, lower power cost, and improved privacy.

Collaborative Inference

Salted Inference: Enhancing Privacy while Maintaining Efficiency of Split Inference in Mobile Computing

1 code implementation • 20 Oct 2023 • Mohammad Malekzadeh, Fahim Kawsar

In split inference, a deep neural network (DNN) is partitioned so that its early layers run at the edge and its later layers run in the cloud.
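The partition can be illustrated with a toy two-layer network in plain Python; the weights, shapes, and `edge_part`/`cloud_part` names below are invented for the sketch, not the paper's actual setup. The point is that only the intermediate activation, never the raw input, crosses the network:

```python
# Split-inference sketch, assuming a toy two-layer MLP with invented weights.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

# Invented weights: W1 is the "edge" half, W2 the "cloud" half.
W1 = [[0.2, -0.5, 0.1, 0.4],
      [0.7, 0.3, -0.2, 0.0],
      [-0.1, 0.6, 0.5, -0.3]]
W2 = [[0.5, -0.4, 0.2],
      [0.1, 0.8, -0.6]]

def edge_part(x):
    # Early layers run on-device; the raw input never leaves the edge.
    return relu(matvec(W1, x))

def cloud_part(h):
    # Later layers see only the intermediate activation h.
    return matvec(W2, h)

x = [0.5, -1.0, 2.0, 0.1]
h = edge_part(x)                          # h is transmitted instead of x
y_split = cloud_part(h)
y_full = matvec(W2, relu(matvec(W1, x)))  # monolithic run, for comparison
```

Running the split pipeline reproduces the monolithic network's output exactly; the privacy question the paper addresses is what the transmitted activation `h` still reveals about `x`.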

The first step is the hardest: Pitfalls of Representing and Tokenizing Temporal Data for Large Language Models

no code implementations • 12 Sep 2023 • Dimitris Spathis, Fahim Kawsar

Here, we discuss recent works that employ LLMs for human-centric tasks such as mobile health sensing and present a case study showing that popular LLMs tokenize temporal data incorrectly.
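The pitfall can be illustrated with a toy greedy longest-match tokenizer over an invented subword vocabulary (neither the vocabulary nor the code is the paper's, or any real LLM's): the same digit sequence fragments differently depending on surrounding characters, so numerically adjacent sensor readings need not get related token sequences.

```python
# Toy greedy longest-match tokenizer over an INVENTED subword vocabulary,
# illustrating how generic tokenizers can fragment numeric values.

VOCAB = {"0.1", "12", "23", "0", "1", "2", "3"}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + 3), i, -1):  # longest match first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character falls back to itself
            i += 1
    return tokens

print(tokenize("0.123"))  # ['0.1', '23']
print(tokenize("123"))    # ['12', '3']
```

The digits "123" are split as `'23'` in one context and `'12', '3'` in the other, which is the kind of inconsistency the paper's case study documents for real LLM tokenizers on temporal data.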

Towards personalised music-therapy; a neurocomputational modelling perspective

no code implementations • 15 May 2023 • Nicole Lai, Marios Philiastides, Fahim Kawsar, Fani Deligianni

In particular, the direct interaction of the auditory system with the motor and reward systems via a predictive framework explains the efficacy of music-based interventions in motor rehabilitation.

Kaizen: Practical Self-supervised Continual Learning with Continual Fine-tuning

1 code implementation • 30 Mar 2023 • Chi Ian Tang, Lorena Qendro, Dimitris Spathis, Fahim Kawsar, Cecilia Mascolo, Akhil Mathur

Kaizen is able to balance the trade-off between knowledge retention and learning from new data with an end-to-end model, paving the way for practical deployment of continual learning systems.

Continual Learning Knowledge Distillation +1

Enhancing Efficiency in Multidevice Federated Learning through Data Selection

1 code implementation • 8 Nov 2022 • Fan Mo, Mohammad Malekzadeh, Soumyajit Chatterjee, Fahim Kawsar, Akhil Mathur

Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.

Federated Learning

FLAME: Federated Learning Across Multi-device Environments

no code implementations • 17 Feb 2022 • Hyunsung Cho, Akhil Mathur, Fahim Kawsar

Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices private.

Federated Learning Human Activity Recognition

ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition

no code implementations • 1 Feb 2022 • Yash Jain, Chi Ian Tang, Chulhong Min, Fahim Kawsar, Akhil Mathur

In this paper, we extend this line of research and present a novel technique called Collaborative Self-Supervised Learning (ColloSSL) which leverages unlabeled data collected from multiple devices worn by a user to learn high-quality features of the data.

Contrastive Learning Human Activity Recognition +2
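The device-based positive-pairing idea can be sketched with a standard InfoNCE-style contrastive loss over invented embeddings; the vectors, device names, and pairing below are illustrative, not ColloSSL's exact objective:

```python
import math

# Sketch: time-aligned windows from two devices on the same user form a
# positive pair; windows from other times serve as negatives. Embeddings
# are invented for the example.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Standard InfoNCE: pull the positive close, push negatives away.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

# Embeddings of one activity window as seen by a watch and an earbud
watch = [0.9, 0.1, 0.0]
earbud = [0.8, 0.2, 0.1]                      # positive: time-aligned
others = [[0.0, 1.0, 0.0], [0.1, 0.0, 1.0]]   # negatives: other windows

loss_aligned = info_nce(watch, earbud, others)
loss_mismatched = info_nce(watch, others[0], [earbud, others[1]])
# the time-aligned pair should give the lower loss
```

Minimising such a loss pulls together representations of the same physical activity recorded by different devices, which is the multi-device supervision signal the paper exploits in place of labels.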

Tiny, always-on and fragile: Bias propagation through design choices in on-device machine learning workflows

1 code implementation • 19 Jan 2022 • Wiebke Toussaint, Aaron Yi Ding, Fahim Kawsar, Akhil Mathur

Billions of distributed, heterogeneous and resource-constrained IoT devices deploy on-device machine learning (ML) for private, fast and offline inference on personal data.

Keyword Spotting

SensiX++: Bringing MLOps and Multi-tenant Model Serving to Sensory Edge Devices

no code implementations • 8 Sep 2021 • Chulhong Min, Akhil Mathur, Utku Gunay Acer, Alessandro Montanari, Fahim Kawsar

We present SensiX++ - a multi-tenant runtime for adaptive model execution with integrated MLOps on edge devices, e.g., a camera, a microphone, or IoT sensors.

Scaling Unsupervised Domain Adaptation through Optimal Collaborator Selection and Lazy Discriminator Synchronization

no code implementations • 1 Jan 2021 • Akhil Mathur, Shaoduo Gan, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas Donald Lane

Breakthroughs in unsupervised domain adaptation (uDA) have opened up the possibility of adapting models from a label-rich source domain to unlabeled target domains.

Privacy Preserving Unsupervised Domain Adaptation

SensiX: A Platform for Collaborative Machine Learning on the Edge

no code implementations • 4 Dec 2020 • Chulhong Min, Akhil Mathur, Alessandro Montanari, Utku Gunay Acer, Fahim Kawsar

The emergence of multiple sensory devices on or near a human body is uncovering new dynamics of extreme edge computing.

BIG-bench Machine Learning Edge-computing

Libri-Adapt: A New Speech Dataset for Unsupervised Domain Adaptation

1 code implementation • 6 Sep 2020 • Akhil Mathur, Fahim Kawsar, Nadia Berthouze, Nicholas D. Lane

This paper introduces a new dataset, Libri-Adapt, to support unsupervised domain adaptation research on speech recognition models.

Speech Recognition +1

Mic2Mic: Using Cycle-Consistent Generative Adversarial Networks to Overcome Microphone Variability in Speech Systems

no code implementations • 27 Mar 2020 • Akhil Mathur, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas D. Lane

A major challenge in building systems that combine audio models with commodity microphones is to guarantee their accuracy and robustness in the real world.

Multi-Step Decentralized Domain Adaptation

no code implementations • 25 Sep 2019 • Akhil Mathur, Shaoduo Gan, Anton Isopoussu, Fahim Kawsar, Nadia Berthouze, Nicholas D. Lane

Despite the recent breakthroughs in unsupervised domain adaptation (uDA), no prior work has studied the challenges of applying these methods in practical machine learning scenarios.

Privacy Preserving Unsupervised Domain Adaptation
