no code implementations • 7 Jun 2023 • Rishikesh Jha, Siddharth Subramaniyam, Ethan Benjamin, Thrivikrama Taula
In this paper, we share our novel approach to addressing both the semantic gap problem and an end-to-end trained model for personalized semantic retrieval.
1 code implementation • EMNLP 2020 • Trapit Bansal, Rishikesh Jha, Tsendsuren Munkhdalai, Andrew McCallum
We meta-train a transformer model on this distribution of tasks using a recent meta-learning framework.
2 code implementations • COLING 2020 • Trapit Bansal, Rishikesh Jha, Andrew McCallum
LEOPARD is trained with a state-of-the-art transformer architecture and shows better generalization to tasks not seen during training, with as few as 4 examples per label.