Search Results for author: Navjot Singh

Found 11 papers, 2 papers with code

Representation Transfer Learning via Multiple Pre-trained models for Linear Regression

no code implementations • 25 May 2023 • Navjot Singh, Suhas Diggavi

Assuming a representation structure for the data-generating linear models at the source and target domains, we propose a representation-transfer-based learning method for constructing the target model.

regression, Transfer Learning

Semantic rule Web-based Diagnosis and Treatment of Vector-Borne Diseases using SWRL rules

no code implementations • 8 Jan 2023 • Ritesh Chandra, Sadhana Tiwari, Sonali Agarwal, Navjot Singh

Afterwards, Basic Formal Ontology (BFO), National Vector Borne Disease Control Program (NVBDCP) guidelines, and RDF medical data are used to develop ontologies for VBDs, and Semantic Web Rule Language (SWRL) rules are applied for diagnosis and treatment.

Optical Character Recognition (OCR)

Alternating Mahalanobis Distance Minimization for Stable and Accurate CP Decomposition

no code implementations • 14 Apr 2022 • Navjot Singh, Edgar Solomonik

Computing these critical points in an alternating manner motivates an alternating optimization algorithm, which corresponds to the alternating least squares (ALS) algorithm in the matrix case.
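For context, the classical CP-ALS baseline that the abstract alludes to can be sketched for a 3-way tensor as follows. This is a minimal NumPy sketch of standard least-squares ALS, not the paper's Mahalanobis-distance variant; `cp_als` and `khatri_rao` are illustrative names.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J) x R."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(T, rank, n_iter=500, seed=0):
    """Classical alternating least squares for a rank-`rank` CP model of a
    3-way tensor T: each factor update is a linear least-squares solve."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings of T (row-major reshape keeps the later modes inner).
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

On a noiseless low-rank tensor, the reconstruction `np.einsum('ir,jr,kr->ijk', A, B, C)` typically recovers T closely; the paper instead studies alternating minimization of a Mahalanobis distance in place of this least-squares objective.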

Decentralized Multi-Task Stochastic Optimization With Compressed Communications

no code implementations • 23 Dec 2021 • Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar

The paper develops algorithms and obtains performance bounds for two models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of the local random variable to evaluate its local cost; and (ii) bandit feedback, where samples of the random variables are unavailable and each node observes only the values of its local cost function at two random points close to the decision.

Stochastic Optimization
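The bandit-feedback model in (ii), which uses only function values at two points near the decision, is typically instantiated with a two-point zeroth-order gradient estimator. A generic sketch for illustration, not necessarily the paper's exact construction:

```python
import numpy as np

def two_point_gradient(f, x, delta, rng):
    """Zeroth-order gradient estimate from two function evaluations at
    random points close to x (bandit feedback: no gradient oracle)."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    d = x.size
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
```

For the quadratic f(x) = ||x||^2, this estimator is unbiased for the gradient 2x when averaged over random directions, which is what makes it usable inside stochastic gradient schemes.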

QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning

no code implementations • NeurIPS 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

In this work, we introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources.

Federated Learning, Knowledge Distillation +2

ATD: Augmenting CP Tensor Decomposition by Self Supervision

1 code implementation • 15 Jun 2021 • Chaoqi Yang, Cheng Qian, Navjot Singh, Cao Xiao, M Brandon Westover, Edgar Solomonik, Jimeng Sun

This paper addresses the above challenges by proposing augmented tensor decomposition (ATD), which effectively incorporates data augmentations and self-supervised learning (SSL) to boost downstream classification.

Data Augmentation, Dimensionality Reduction +3

MTC: Multiresolution Tensor Completion from Partial and Coarse Observations

1 code implementation • 14 Jun 2021 • Chaoqi Yang, Navjot Singh, Cao Xiao, Cheng Qian, Edgar Solomonik, Jimeng Sun

Our MTC model exploits tensor mode properties and leverages the hierarchy of resolutions to recursively initialize an optimization setup, then optimizes the coupled system using alternating least squares.

QuPeL: Quantized Personalization with Applications to Federated Learning

no code implementations • 23 Feb 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

When each participating client in the (federated) learning process has different requirements for the quantized model (in both value and precision), we formulate a quantized personalization framework by introducing a penalty term in the local client objectives, measured against a globally trained model, to encourage collaboration.

Federated Learning, Quantization
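The penalty term described in the QuPeL abstract can be read as a proximal-style regularizer pulling each local model toward the global one. A minimal gradient-descent sketch under an assumed penalty form; `grad_local`, `lam`, and `personalized_fit` are illustrative names, not the paper's notation:

```python
import numpy as np

def personalized_fit(grad_local, w_global, lam, w0, lr=0.1, steps=300):
    """Gradient descent on f_i(w) + (lam / 2) * ||w - w_global||^2.
    The quadratic penalty pulls the personalized model toward the
    globally trained model; lam controls the strength of collaboration."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * (grad_local(w) + lam * (w - w_global))
    return w
```

For a quadratic local loss f_i(w) = (1/2)||w - a||^2, the penalized minimizer is (a + lam * w_global) / (1 + lam), i.e. an interpolation between the purely local optimum and the global model.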

SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization

no code implementations • 13 May 2020 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network.
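Communication efficiency in this line of work typically comes from compressing the model updates exchanged over the network, e.g. via sparsification or quantization. A generic top-k sparsification operator as an illustrative sketch; the paper's exact compressor may differ:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest,
    so only k (index, value) pairs need to be communicated."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out
```

Top-k satisfies the contraction property ||v - top_k(v)||^2 <= (1 - k/d) ||v||^2 for v of dimension d, which is the standard assumption used in convergence analyses of compressed decentralized SGD.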

SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization

no code implementations • 31 Oct 2019 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi

In this paper, we propose and analyze SPARQ-SGD, which is an event-triggered and compressed algorithm for decentralized training of large-scale machine learning models.

Quantization, Stochastic Optimization
