1 code implementation • NAACL 2022 • Rajat Agarwal, Varun Khurana, Karish Grover, Mukesh Mohania, Vikram Goyal
A Graph Transformer prepares relation-specific token embeddings within each subgraph, which are then aggregated to obtain a subgraph representation.
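The aggregation step can be illustrated with a minimal sketch; the shapes, mean-pooling aggregator, and variable names here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relation-specific token embeddings for one subgraph:
# 5 tokens, embedding dimension 16 (shapes chosen for illustration).
token_embeddings = rng.standard_normal((5, 16))

# A minimal aggregator: mean-pool the token embeddings into a single
# fixed-size subgraph representation (the paper's aggregator may differ).
subgraph_repr = token_embeddings.mean(axis=0)
print(subgraph_repr.shape)
```

Any permutation-invariant pooling (sum, max, attention-weighted) could stand in for the mean here.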
no code implementations • 18 Nov 2023 • Varun Khurana, Yaman K Singla, Jayakumar Subramanian, Rajiv Ratn Shah, Changyou Chen, Zhiqiang Xu, Balaji Krishnamurthy
We show that BoigLLM outperforms models 13x larger, such as GPT-3.5 and GPT-4, on this task, demonstrating that while these state-of-the-art models can understand images, they lack information about how these images perform in the real world.
no code implementations • 13 Oct 2023 • Keaton Hamm, Varun Khurana
We consider structured approximation of measures in Wasserstein space $W_p(\mathbb{R}^d)$ for $p\in[1,\infty)$ by discrete and piecewise constant measures based on a scaled Voronoi partition of $\mathbb{R}^d$.
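A toy version of the piecewise-constant construction can be sketched as follows; the sample size, cell scale `h`, and use of the lattice $h\mathbb{Z}^d$ (whose Voronoi cells are axis-aligned cubes) are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical stand-in for a measure on R^2: i.i.d. Gaussian samples.
samples = rng.standard_normal((1000, 2))

# The Voronoi cells of the scaled lattice h*Z^d are cubes of side h,
# so assigning a point to its cell reduces to rounding to the lattice.
h = 0.5
centers = np.round(samples / h) * h  # nearest lattice site per sample

# Discrete approximation: Dirac masses at the occupied cell centers,
# each weighted by the fraction of samples landing in that cell.
sites, counts = np.unique(centers, axis=0, return_counts=True)
weights = counts / counts.sum()
print(sites.shape[0], weights.sum())
```

Shrinking `h` refines the partition, trading more support points for a smaller approximation error in $W_p$.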
no code implementations • 14 Feb 2023 • Alexander Cloninger, Keaton Hamm, Varun Khurana, Caroline Moosmüller
We introduce LOT Wassmap, a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
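The idea behind linear optimal transport (LOT) embeddings can be sketched on a toy dataset; the translate-based point clouds, the assignment-based Monge map, and the SVD step below are illustrative assumptions rather than the LOT Wassmap algorithm itself:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)

# Toy dataset: point-cloud measures that are translates of a reference
# cloud along a line (a hypothetical low-dimensional structure).
n = 50
reference = rng.standard_normal((n, 2))
shifts = np.linspace(0.0, 3.0, 8)
measures = [reference + np.array([t, 0.0]) for t in shifts]

def lot_embedding(ref, target):
    """Transport map from ref to target via optimal assignment
    (uniform discrete measures of equal size)."""
    cost = ((ref[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    _, col = linear_sum_assignment(cost)
    return target[col]  # T(x_i) for each reference point x_i

# LOT embeds each measure as its transport map evaluated on the
# reference; Euclidean distances between embeddings then approximate
# Wasserstein-2 distances, so standard Euclidean tools apply.
embeds = np.stack([lot_embedding(reference, m).ravel() for m in measures])

# A PCA-style SVD on the centered embeddings reveals the underlying
# one-dimensional parameterization of this toy family.
centered = embeds - embeds.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
print(s[:2])
```

For translates, the optimal map is the translation itself, so the embeddings lie exactly on a line and the second singular value vanishes up to rounding error.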
no code implementations • 11 Feb 2023 • Varun Khurana, Yaman Kumar Singla, Nora Hollenstein, Rajesh Kumar, Balaji Krishnamurthy
Feedback can be either explicit (e.g., rankings used in training language models) or implicit (e.g., human cognitive signals in the form of eye-tracking).
no code implementations • 25 Jan 2022 • Varun Khurana, Harish Kannan, Alexander Cloninger, Caroline Moosmüller
In this paper we study supervised learning tasks on the space of probability measures.