Search Results for author: Karan Singhal

Found 23 papers, 4 papers with code

Federated Reconstruction: Partially Local Federated Learning

3 code implementations NeurIPS 2021 Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, Keith Rush, Sushant Prakash

We also describe the successful deployment of this approach at scale for federated collaborative filtering in a mobile keyboard application.

Collaborative Filtering Federated Learning +1
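The core idea of partially local federated learning is that some parameters never leave the device: each round, the client reconstructs its local parameters from scratch against the frozen global parameters, then trains and uploads only the global part. A minimal sketch with a linear model, where the additive global/local split, learning rates, and helper names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct_local(global_w, client_data, lr=0.1, steps=5):
    """Freeze global params; refit the client-local params from scratch."""
    X, y = client_data
    local_w = np.zeros(X.shape[1])          # local params never leave the device
    for _ in range(steps):
        pred = X @ (global_w + local_w)
        local_w -= lr * X.T @ (pred - y) / len(y)
    return local_w

def client_update(global_w, client_data, lr=0.1, steps=5):
    """After reconstruction, train only the global params and return the delta."""
    X, y = client_data
    local_w = reconstruct_local(global_w, client_data, lr, steps)
    w = global_w.copy()
    for _ in range(steps):
        pred = X @ (w + local_w)
        w -= lr * X.T @ (pred - y) / len(y)
    return w - global_w

# One federated round over a few simulated clients.
d, global_w = 4, np.zeros(4)
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(3)]
deltas = [client_update(global_w, c) for c in clients]
global_w += np.mean(deltas, axis=0)         # server averages global-param deltas
```

Because reconstruction happens at inference time too, new clients (e.g. new keyboard users) can personalize without the server ever storing their local parameters.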

What Do We Mean by Generalization in Federated Learning?

1 code implementation ICLR 2022 Honglin Yuan, Warren Morningstar, Lin Ning, Karan Singhal

Thus generalization studies in federated learning should separate performance gaps from unseen client data (out-of-sample gap) from performance gaps from unseen client distributions (participation gap).

Federated Learning
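The distinction drawn above can be made concrete with three evaluation numbers. This hedged sketch (the accuracy values and function name are hypothetical) shows how the two gaps are separated:

```python
def generalization_gaps(acc_train, acc_unseen_data, acc_unseen_clients):
    """Split generalization into the two gaps the paper distinguishes.

    acc_train:          accuracy on training data of participating clients
    acc_unseen_data:    accuracy on held-out data from those same clients
    acc_unseen_clients: accuracy on clients never seen during training
    """
    out_of_sample_gap = acc_train - acc_unseen_data         # classic generalization
    participation_gap = acc_unseen_data - acc_unseen_clients  # FL-specific gap
    return out_of_sample_gap, participation_gap

# Hypothetical numbers for illustration only.
oos, part = generalization_gaps(0.90, 0.85, 0.80)
```

A model can have a small out-of-sample gap yet a large participation gap when client distributions differ, which is why a single train/test split conflates the two.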

Learning Multilingual Word Embeddings Using Image-Text Data

no code implementations WS 2019 Karan Singhal, Karthik Raman, Balder ten Cate

There has been significant interest recently in learning multilingual word embeddings -- in which semantically similar words across languages have similar embeddings.

Multilingual Word Embeddings Semantic Similarity +1

Presence of Women in Economics Academia: Evidence from India

no code implementations31 Oct 2020 Ambrish Dongre, Karan Singhal, Upasak Das

This paper documents the representation of women in Economics academia in India by analyzing the share of women in faculty positions, and their participation in a prestigious conference held annually.

Solving it correctly: Prevalence and Persistence of Gender Gap in Basic Mathematics in rural India

no code implementations28 Oct 2021 Upasak Das, Karan Singhal

Mathematical ability is among the most important determinants of prospering in the labour market.

Mixed Federated Learning: Joint Decentralized and Centralized Learning

no code implementations26 May 2022 Sean Augenstein, Andrew Hard, Lin Ning, Karan Singhal, Satyen Kale, Kurt Partridge, Rajiv Mathews

For example, additional datacenter data can be leveraged to jointly learn from centralized (datacenter) and decentralized (federated) training data and better match an expected inference data distribution.

Federated Learning
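One simple way to picture joint decentralized-and-centralized training is a server update that blends the averaged federated delta with a gradient step on datacenter data. This is a hedged sketch under that assumption; `mixed_round` and its `mix` weighting are illustrative, not the paper's algorithm:

```python
import numpy as np

def mixed_round(w, client_deltas, dc_grad, lr=0.1, mix=0.5):
    """One hypothetical mixed round: blend a federated-averaging delta
    (decentralized data) with a gradient step on centralized datacenter data."""
    fed_delta = np.mean(client_deltas, axis=0)    # aggregate of client updates
    return w + mix * fed_delta - (1.0 - mix) * lr * dc_grad

w = np.zeros(3)
client_deltas = [np.array([0.2, 0.0, -0.1]), np.array([0.0, 0.2, 0.1])]
dc_grad = np.array([1.0, -1.0, 0.0])              # gradient from datacenter data
w_new = mixed_round(w, client_deltas, dc_grad)
```

Tuning `mix` trades off matching the on-device distribution against exploiting the larger centralized corpus.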

Federated Training of Dual Encoding Models on Small Non-IID Client Datasets

no code implementations30 Sep 2022 Raviteja Vemulapalli, Warren Richard Morningstar, Philip Andrew Mansfield, Hubert Eichner, Karan Singhal, Arash Afkanpour, Bradley Green

In this work, we focus on federated training of dual encoding models on decentralized data composed of many small, non-IID (not independent and identically distributed) client datasets.

Federated Learning Representation Learning

Towards Federated Learning Under Resource Constraints via Layer-wise Training and Depth Dropout

no code implementations11 Sep 2023 Pengfei Guo, Warren Richard Morningstar, Raviteja Vemulapalli, Karan Singhal, Vishal M. Patel, Philip Andrew Mansfield

To mitigate this issue and facilitate training of large models on edge devices, we introduce a simple yet effective strategy, Federated Layer-wise Learning, to simultaneously reduce per-client memory, computation, and communication costs.

Federated Learning Representation Learning +1
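The resource saving comes from the training schedule: in stage k, clients only instantiate and update layer k on top of a frozen prefix, so memory, compute, and upload scale with the current depth rather than the full model. A rough sketch; the greedy per-layer reconstruction objective here is an invented stand-in for the real training loss, and Depth Dropout (randomly skipping frozen layers) is only noted in a comment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a stack of dense layers, trained one layer at a time.
layers = [rng.normal(size=(8, 8)) * 0.1 for _ in range(3)]

def forward(x, depth):
    """Run only the first `depth` layers -- all a client needs in stage `depth`."""
    for W in layers[:depth]:
        x = np.tanh(x @ W)
    return x

for stage in range(len(layers)):                  # layer-wise schedule
    for _ in range(4):                            # simulated clients in one round
        x = forward(rng.normal(size=(16, 8)), depth=stage)  # frozen prefix output
        # Illustrative greedy objective: push layer `stage` toward identity.
        # Depth Dropout would additionally skip some frozen layers at random.
        grad = x.T @ (x @ layers[stage] - x) / len(x)
        layers[stage] -= 0.1 * grad
```

Only `layers[stage]` is communicated per round, which is the source of the per-client cost reduction the abstract describes.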

Random Field Augmentations for Self-Supervised Representation Learning

no code implementations7 Nov 2023 Philip Andrew Mansfield, Arash Afkanpour, Warren Richard Morningstar, Karan Singhal

In this work, we propose a new family of local transformations based on Gaussian random fields to generate image augmentations for self-supervised representation learning.

Representation Learning
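A Gaussian random field can be generated by low-pass filtering white noise in Fourier space; applying it pixel-wise yields a smooth, spatially local perturbation. A minimal sketch assuming a multiplicative brightness perturbation as the example transformation (the paper's family is broader):

```python
import numpy as np

def gaussian_random_field(shape, length_scale, rng):
    """Smooth random field: white noise filtered by a Gaussian in Fourier space."""
    noise = rng.normal(size=shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    filt = np.exp(-0.5 * length_scale**2 * ((2 * np.pi * fy) ** 2
                                            + (2 * np.pi * fx) ** 2))
    field = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    return field / (np.abs(field).max() + 1e-8)   # normalize to roughly [-1, 1]

rng = np.random.default_rng(0)
img = rng.uniform(size=(32, 32))
field = gaussian_random_field(img.shape, length_scale=4.0, rng=rng)
augmented = np.clip(img * (1.0 + 0.2 * field), 0.0, 1.0)  # local brightness shift
```

Varying `length_scale` controls how local the perturbation is, from near-global shifts to fine-grained texture changes.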

Towards Accurate Differential Diagnosis with Large Language Models

no code implementations30 Nov 2023 Daniel McDuff, Mike Schaekermann, Tao Tu, Anil Palepu, Amy Wang, Jake Garrison, Karan Singhal, Yash Sharma, Shekoofeh Azizi, Kavita Kulkarni, Le Hou, Yong Cheng, Yun Liu, S Sara Mahdavi, Sushant Prakash, Anupam Pathak, Christopher Semturs, Shwetak Patel, Dale R Webster, Ewa Dominowska, Juraj Gottweis, Joelle Barral, Katherine Chou, Greg S Corrado, Yossi Matias, Jake Sunshine, Alan Karthikesalingam, Vivek Natarajan

Comparing the two assisted study arms, the DDx quality score was higher for clinicians assisted by our LLM (top-10 accuracy 51.7%) than for clinicians without its assistance (36.1%) (McNemar's Test: 45.7, p < 0.01) and clinicians with search (44.4%) (4.75, p = 0.03).

Disentangling the Effects of Data Augmentation and Format Transform in Self-Supervised Learning of Image Representations

no code implementations2 Dec 2023 Neha Kalibhat, Warren Morningstar, Alex Bijamov, Luyang Liu, Karan Singhal, Philip Mansfield

We define augmentations in frequency space called Fourier Domain Augmentations (FDA) and show that training SSL models on a combination of these and image augmentations can improve the downstream classification accuracy by up to 1.3% on ImageNet-1K.

Data Augmentation Self-Supervised Learning +1
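A frequency-space augmentation in this spirit can be sketched by perturbing an image's amplitude spectrum while keeping its phase. This is an assumed simple instance for illustration; the paper's FDA transforms and parameters may differ:

```python
import numpy as np

def fourier_domain_augment(img, amp_scale=0.1, rng=None):
    """Apply a random per-frequency gain to the amplitude spectrum; keep phase."""
    if rng is None:
        rng = np.random.default_rng()
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    amp = amp * (1.0 + amp_scale * rng.normal(size=amp.shape))
    return np.fft.ifft2(amp * np.exp(1j * phase)).real

rng = np.random.default_rng(0)
img = rng.uniform(size=(16, 16))
aug = fourier_domain_augment(img, rng=rng)
```

Because phase carries most of an image's spatial structure, amplitude-only perturbations change appearance while largely preserving semantic content, which is what makes them usable as SSL views.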