Search Results for author: Kanika Narang

Found 8 papers, 1 paper with code

VisualLens: Personalization through Visual History

no code implementations25 Nov 2024 Wang Bill Zhu, Deqing Fu, Kai Sun, Yi Lu, Zhaojiang Lin, Seungwhan Moon, Kanika Narang, Mustafa Canim, Yue Liu, Anuj Kumar, Xin Luna Dong

We hypothesize that a user's visual history, with images reflecting their daily life, offers valuable insights into their interests and preferences and can be leveraged for personalization.

Diversity · Recommendation Systems

CoDi: Conversational Distillation for Grounded Question Answering

no code implementations20 Aug 2024 Patrick Huber, Arash Einolghozati, Rylan Conway, Kanika Narang, Matt Smith, Waqar Nayyar, Adithya Sagar, Ahmed Aly, Akshat Shrivastava

This is a typical on-device scenario for specialist SLMs, allowing for open-domain model responses without requiring the model to "memorize" world knowledge in its limited weights.
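A minimal sketch of what "grounded" question answering means here: the knowledge source is supplied in the prompt, so the model answers from context rather than from memorized weights. The prompt template and function below are illustrative stand-ins, not CoDi's actual format.

```python
# Sketch: grounded QA passes the answer source in the prompt, so a small
# on-device LM need not store world knowledge in its weights.
# This template and helper are hypothetical, not the paper's.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Prepend retrieved passages so an SLM can answer from context."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the passages below.\n"
        f"Passages:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "Who wrote 'On the Origin of Species'?",
    ["'On the Origin of Species' (1859) was written by Charles Darwin."],
)
print(prompt)  # fed to the SLM in place of relying on memorized facts
```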

Question Answering · World Knowledge

On the Equivalence of Graph Convolution and Mixup

1 code implementation29 Sep 2023 Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu, Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian Khabsa, Qifan Wang, Xia Hu

We establish this equivalence mathematically by demonstrating that graph convolution networks (GCN) and simplified graph convolution (SGC) can be expressed as a form of Mixup.
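A minimal numerical sketch of the intuition, assuming the row-normalized (random-walk) form of the propagation step rather than GCN's symmetric normalization: with self-loops added, each output row is a convex combination of a node's own and its neighbors' features, i.e. a Mixup of node features. The toy graph and features are made up.

```python
import numpy as np

# One propagation step with a row-stochastic adjacency: every output row
# is a lambda-weighted average (a Mixup) of the node's own and its
# neighbors' feature vectors.
X = np.array([[1.0, 0.0],   # node 0 features
              [0.0, 1.0],   # node 1 features
              [1.0, 1.0]])  # node 2 features
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)

A_hat = A + np.eye(3)                       # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True)   # row-normalize: rows are convex weights

H = A_hat @ X  # one GCN/SGC propagation step (before the learned weights)

# Row 0 mixes nodes 0, 1, 2 with lambda = 1/3 each:
print(H[0], (X[0] + X[1] + X[2]) / 3)  # identical
```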

Data Augmentation · Graph Neural Network

Meta-training with Demonstration Retrieval for Efficient Few-shot Learning

no code implementations30 Jun 2023 Aaron Mueller, Kanika Narang, Lambert Mathias, Qifan Wang, Hamed Firooz

Meta-training allows one to leverage smaller models for few-shot generalization in a domain-general and task-agnostic manner; however, these methods alone result in models that may not have sufficient parameterization or knowledge to adapt quickly to a large variety of tasks.
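A minimal sketch of the demonstration-retrieval idea, assuming a simple nearest-neighbor retriever: pick the k training examples most similar to the query and prepend them as demonstrations. The bag-of-characters embedder and the data below are stand-ins, not the paper's actual retriever or meta-training setup.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Stand-in embedder: bag-of-characters counts, L2-normalized."""
    vecs = np.array([[t.count(c) for c in "abcdefghijklmnopqrstuvwxyz"]
                     for t in texts], dtype=float)
    return vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-9)

train = ["great movie, loved it", "terrible plot, boring", "what a delight"]
labels = ["positive", "negative", "positive"]
query = "boring and terrible"

sims = embed(train) @ embed([query])[0]   # cosine similarity to the query
top_k = np.argsort(-sims)[:2]             # k most similar demonstrations
demos = "\n".join(f"{train[i]} -> {labels[i]}" for i in top_k)
print(f"{demos}\n{query} -> ?")           # few-shot prompt for the model
```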

Few-Shot Learning · QNLI · +3

Measuring Self-Supervised Representation Quality for Downstream Classification using Discriminative Features

no code implementations3 Mar 2022 Neha Kalibhat, Kanika Narang, Hamed Firooz, Maziar Sanjabi, Soheil Feizi

Fine-tuning with Q-Score regularization can boost the linear probing accuracy of SSL models by up to 5.8% on ImageNet-100 and 3.7% on ImageNet-1K compared to their baselines.
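A minimal sketch of linear probing, the evaluation the reported gains refer to: freeze the self-supervised encoder and train only a linear classifier on its features. The random features below stand in for a real SSL encoder's output; Q-Score regularization itself is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 200, 64
features = rng.normal(size=(n, d))          # frozen SSL representations
labels = (features[:, 0] > 0).astype(int)   # toy labels correlated with dim 0

# The "probe" is the only trained component; the encoder stays frozen.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("linear probe accuracy:", probe.score(features, labels))
```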

Linear evaluation · Self-Supervised Learning
