1 code implementation • 22 Feb 2024 • Kezhi Kong, Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Chuan Lei, Christos Faloutsos, Huzefa Rangwala, George Karypis
Large Language Models (LLMs) trained on large volumes of data excel at various natural language tasks, but they struggle with tasks that require knowledge they were not trained on.
1 code implementation • 30 Oct 2023 • Costas Mavromatis, Balasubramaniam Srinivasan, Zhengyuan Shen, Jiani Zhang, Huzefa Rangwala, Christos Faloutsos, George Karypis
Large Language Models (LLMs) can adapt to new tasks via in-context learning (ICL).
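As a rough illustration of in-context learning itself (a sketch, not this paper's method), the following Python snippet builds a few-shot prompt; the commented-out complete() call is a hypothetical stand-in for any LLM text-completion API:

    # In-context learning (ICL): the model is never fine-tuned; labeled
    # demonstrations are placed directly in the prompt and the LLM infers
    # the task from them.

    def build_icl_prompt(demonstrations, query):
        """Format (input, label) demonstrations followed by the unlabeled query."""
        blocks = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
        blocks.append(f"Input: {query}\nLabel:")
        return "\n\n".join(blocks)

    demos = [("great movie", "positive"), ("waste of time", "negative")]
    prompt = build_icl_prompt(demos, "an instant classic")
    # prediction = complete(prompt)  # hypothetical LLM call; should output "positive"
    print(prompt)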
1 code implementation • 19 Oct 2023 • Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Shen Wang, Huzefa Rangwala, George Karypis
Recent advances in large language models have revolutionized many sectors, including the database industry.
1 code implementation • 14 Oct 2023 • Hengrui Zhang, Jiani Zhang, Balasubramaniam Srinivasan, Zhengyuan Shen, Xiao Qin, Christos Faloutsos, Huzefa Rangwala, George Karypis
Recent advances in tabular data generation have greatly enhanced synthetic data quality.
1 code implementation • 5 Oct 2023 • Zifeng Wang, Zichen Wang, Balasubramaniam Srinivasan, Vassilis N. Ioannidis, Huzefa Rangwala, Rishita Anubhai
Foundation models (FMs) leverage large volumes of unlabeled data to achieve strong performance across a wide range of tasks.
1 code implementation • NeurIPS 2023 • Pei Chen, Soumajyoti Sarkar, Leonard Lausen, Balasubramaniam Srinivasan, Sheng Zha, Ruihong Huang, George Karypis
Language models pretrained on large collections of tabular data have demonstrated their effectiveness in several downstream tasks.
1 code implementation • ICLR 2022 • Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, Haggai Maron
We propose to represent each graph as a set of subgraphs derived by some predefined policy, and to process this set using a suitable equivariant architecture.
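To make the subgraph-bag idea concrete, here is a minimal sketch of the node-deletion policy, one predefined policy of this kind (assumptions: a plain dict-of-sets adjacency in place of a real graph library, and only the policy itself; the equivariant network that processes the bag is omitted):

    # Node-deletion policy: map a graph G to the bag {G - v : v in V(G)}.
    # Each subgraph in the bag would then be processed by a shared network
    # and the per-subgraph outputs aggregated equivariantly.

    def delete_node(adj, v):
        """Subgraph with node v and its incident edges removed."""
        return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

    def node_deletion_bag(adj):
        """Bag of all node-deleted subgraphs of the input graph."""
        return [delete_node(adj, v) for v in adj]

    triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    for sub in node_deletion_bag(triangle):
        print(sub)  # three 2-node subgraphs, each a single edge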
no code implementations • 19 Jan 2021 • Balasubramaniam Srinivasan, Da Zheng, George Karypis
In this work, we exploit the incidence structure to develop a hypergraph neural network that learns provably expressive representations of variable-sized hyperedges, preserving local isomorphism in the line graph of the hypergraph while remaining invariant to permutations of each hyperedge's constituent vertices.
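A simplified NumPy sketch of incidence-based two-stage message passing (an illustration under assumptions, not this paper's exact architecture): mean aggregation makes each hyperedge representation well defined for variable-sized hyperedges and invariant to the ordering of its vertices.

    import numpy as np

    # B[v, e] = 1 iff vertex v belongs to hyperedge e (incidence matrix).

    def hypergraph_layer(X, B, W):
        # Stage 1: each hyperedge averages the features of its member vertices.
        edge_feats = (B.T @ X) / B.sum(axis=0, keepdims=True).T
        # Stage 2: each vertex averages the features of its incident hyperedges.
        vert_feats = (B @ edge_feats) / B.sum(axis=1, keepdims=True)
        return np.tanh(vert_feats @ W)  # shared linear map + nonlinearity

    X = np.random.randn(4, 8)  # 4 vertices with 8-dimensional features
    B = np.array([[1, 0], [1, 1], [0, 1], [1, 1]], dtype=float)  # 2 hyperedges
    W = np.random.randn(8, 8)
    print(hypergraph_layer(X, B, W).shape)  # (4, 8)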
1 code implementation • ICLR 2020 • Balasubramaniam Srinivasan, Bruno Ribeiro
This work provides the first unifying theoretical framework for node (positional) embeddings and structural graph representations, bridging methods like matrix factorization and graph neural networks.
1 code implementation • 6 Mar 2019 • Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, Bruno Ribeiro
This work generalizes graph neural networks (GNNs) beyond those based on the Weisfeiler-Lehman (WL) algorithm, graph Laplacians, and diffusions; a sketch of the underlying relational-pooling idea follows this entry.
Ranked #5 on Drug Discovery on MUV
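The relational-pooling idea behind this work, sketched under assumptions (f below is a hypothetical permutation-sensitive readout with fixed weights for a 3-node graph; in practice f is a learned network): a permutation-sensitive function becomes approximately permutation-invariant when averaged over sampled node orderings.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal(9)  # fixed positional weights for a 3-node graph

    def f(adj):
        """Hypothetical permutation-SENSITIVE readout: position-weighted sum."""
        return np.tanh(adj.flatten()) @ W

    def relational_pooling(adj, num_samples=50):
        """Approximate the average of f over all n! node orderings by sampling."""
        n = adj.shape[0]
        samples = []
        for _ in range(num_samples):
            p = rng.permutation(n)  # random node reordering
            samples.append(f(adj[np.ix_(p, p)]))
        return float(np.mean(samples))

    adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
    print(relational_pooling(adj))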
2 code implementations • ICLR 2019 • Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, Bruno Ribeiro
We consider a simple and overarching representation for permutation-invariant functions of sequences (or multiset functions).
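One tractable instance of such a representation, sketched here under assumptions (pairwise_f is a hypothetical order-sensitive function; k = 2 is chosen for brevity): averaging a permutation-sensitive function over all ordered k-tuples of the input yields a function that is invariant to the input's ordering.

    import itertools
    import numpy as np

    def pairwise_f(a, b):
        """A hypothetical order-sensitive function of an ordered pair."""
        return np.tanh(a - 0.5 * b)

    def k2_pooling(xs):
        """Average pairwise_f over all ordered pairs: invariant to input order."""
        pairs = list(itertools.permutations(xs, 2))
        return sum(pairwise_f(a, b) for a, b in pairs) / len(pairs)

    print(k2_pooling([1.0, 2.0, 3.0]))
    print(k2_pooling([3.0, 1.0, 2.0]))  # same value: order does not matter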