1 code implementation • 15 Jun 2021 • Chaoqi Yang, Cheng Qian, Navjot Singh, Cao Xiao, M Brandon Westover, Edgar Solomonik, Jimeng Sun
This paper addresses the above challenges by proposing augmented tensor decomposition (ATD), which effectively incorporates data augmentations and self-supervised learning (SSL) to boost downstream classification.
1 code implementation • 14 Jun 2021 • Chaoqi Yang, Navjot Singh, Cao Xiao, Cheng Qian, Edgar Solomonik, Jimeng Sun
Our MTC model explores tensor mode properties and leverages the hierarchy of resolutions to recursively initialize an optimization setup, then optimizes the coupled system using alternating least squares.
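As illustrative background on the alternating least squares scheme the abstract names (this is a minimal generic CP-ALS sketch for a single third-order tensor, not the authors' coupled MTC system; the function names are hypothetical):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings of T, matching the Khatri-Rao column ordering below.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # Each factor update is a linear least-squares solve with the others fixed.
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

Each inner update is convex given the other two factors, which is what makes the alternating scheme attractive for coupled systems as well.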
no code implementations • 31 Oct 2019 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
In this paper, we propose and analyze SPARQ-SGD, which is an event-triggered and compressed algorithm for decentralized training of large-scale machine learning models.
no code implementations • 13 May 2020 • Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network.
no code implementations • 23 Feb 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
When each client participating in the (federated) learning process has different requirements for the quantized model (both in value and precision), we formulate a quantized personalization framework by introducing a penalty term in the local client objectives, measured against a globally trained model, to encourage collaboration.
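A minimal sketch of the two pieces described above, a per-client quantizer and a proximal penalty against the global model (illustrative only; the paper's precise quantizer and objective differ, and these function names are assumptions):

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniform scalar quantization of weights to 2**bits levels spanning
    [w.min(), w.max()]; each client may choose its own `bits`."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    if hi == lo:
        return w.copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

def personalized_objective(local_loss, w_local, w_global, lam):
    """Local objective plus a proximal penalty tying the personal model to the
    globally trained model, which encourages collaboration across clients."""
    return local_loss(w_local) + 0.5 * lam * np.sum((w_local - w_global) ** 2)
```

Larger `lam` pulls the personal models toward the global one; `lam = 0` recovers purely local training.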
no code implementations • NeurIPS 2021 • Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
In this work, we introduce a quantized and personalized FL algorithm, QuPeD, that facilitates collective training with personalized model compression via knowledge distillation (KD) among clients with access to heterogeneous data and resources.
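For reference, the standard knowledge-distillation term the abstract invokes is a temperature-softened KL divergence between teacher and student outputs (a generic sketch, not QuPeD's full objective):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax at temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T**2 as in standard KD."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()
```

The loss vanishes when student and teacher agree and penalizes divergence from the softened teacher distribution.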
no code implementations • 23 Dec 2021 • Navjot Singh, Xuanyu Cao, Suhas Diggavi, Tamer Basar
The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes: (i) sample feedback, where each node has direct access to samples of the local random variable to evaluate its local cost, and (ii) bandit feedback, where samples of the random variables are not available, but only the values of the local cost functions at two random points close to the decision are available to each node.
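In the bandit-feedback setting (ii), a common way to build a gradient surrogate from two nearby cost evaluations is the two-point zeroth-order estimator sketched below (a standard construction shown for intuition; the paper's exact estimator and analysis may differ):

```python
import numpy as np

def two_point_gradient(f, x, delta, rng):
    """Two-point gradient estimate: query the cost at two random points close
    to the decision x and form a directional finite difference."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)          # random direction on the unit sphere
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
```

For a smooth cost, the estimate is unbiased up to an O(delta**2) smoothing term, which is why only function values, not gradients, are needed at each node.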
no code implementations • 3 Apr 2022 • Ritesh Chandra, Kumar Abhishek, Sonali Agarwal, Navjot Singh
Fire weather indices are widely used to measure fire danger and to issue bushfire warnings.
no code implementations • 14 Apr 2022 • Navjot Singh, Edgar Solomonik
Computing these critical points in an alternating manner motivates an alternating optimization algorithm, which corresponds to the alternating least squares algorithm in the matrix case.
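The matrix case mentioned above can be sketched directly: alternating least squares for a low-rank factorization M ≈ U Vᵀ reduces each step to an ordinary linear least-squares solve (an illustrative sketch; the function name is hypothetical):

```python
import numpy as np

def matrix_als(M, rank, n_iter=30, seed=0):
    """Low-rank factorization M ~= U @ V.T by alternating least squares."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    V = rng.standard_normal((n, rank))
    U = np.zeros((m, rank))
    for _ in range(n_iter):
        # With V fixed, each row of U solves a small least-squares problem,
        # and symmetrically for V with U fixed.
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
        V = np.linalg.lstsq(U, M, rcond=None)[0].T
    return U, V
```

Each subproblem is convex, so the objective is monotonically non-increasing across sweeps.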
no code implementations • 8 Jan 2023 • Ritesh Chandra, Sadhana Tiwari, Sonali Agarwal, Navjot Singh
Afterwards, Basic Formal Ontology (BFO), National Vector Borne Disease Control Program (NVBDCP) guidelines, and RDF medical data are used to develop ontologies for VBDs, and Semantic Web Rule Language (SWRL) rules are applied for diagnosis and treatment.
no code implementations • 25 May 2023 • Navjot Singh, Suhas Diggavi
Assuming a representation structure for the data-generating linear models in the source and target domains, we propose a representation-transfer based learning method for constructing the target model.
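One common way to instantiate this idea, shown purely for illustration and not necessarily the authors' construction, is to estimate a shared low-dimensional representation from the source regression vectors and then fit only a small target head on the few target samples:

```python
import numpy as np

def estimate_representation(source_models, k):
    """Illustrative: take the top-k left singular vectors of the stacked
    source regression vectors as the shared representation (d x k)."""
    W = np.column_stack(source_models)
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :k]

def fit_target_with_shared_rep(B_hat, X_t, y_t):
    """Fit only the k-dimensional target head on the target samples;
    the least-squares problem has k unknowns instead of d."""
    Z = X_t @ B_hat                      # project target features
    w_t, *_ = np.linalg.lstsq(Z, y_t, rcond=None)
    return B_hat @ w_t                   # target model in ambient space
```

Because only k coefficients are fit at the target, far fewer target samples are needed than for learning the full d-dimensional model from scratch.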