1 code implementation • EMNLP 2021 • Saurav Manchanda, George Karypis
Quantitatively measuring the impact-related aspects of scientific, engineering, and technological (SET) innovations is a fundamental problem with broad applications.
no code implementations • 29 Oct 2024 • Yingheng Wang, Zichen Wang, Gil Sadeh, Luca Zancato, Alessandro Achille, George Karypis, Huzefa Rangwala
Self-supervised training of language models (LMs) has seen great success for protein sequences in learning meaningful representations and for generative drug design.
no code implementations • 18 Oct 2024 • Zhepeng Cen, Yao Liu, Siliang Zeng, Pratik Chaudhari, Huzefa Rangwala, George Karypis, Rasool Fakoor
Our first approach is Batch-Scheduled Sampling, where, during training, we stochastically choose between the ground-truth token from the dataset and the model's own generated token as input to predict the next token.
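For illustration, a minimal sketch of that mixing step (the mix_prob hyperparameter and the HF-style model(...).logits interface are assumptions, not the paper's exact setup):

    import torch

    def batch_scheduled_inputs(model, input_ids, mix_prob=0.25):
        # Sketch of scheduled sampling at the batch level: with probability
        # mix_prob, feed the model's own prediction instead of the dataset
        # token, so training inputs look more like inference-time inputs.
        with torch.no_grad():
            preds = model(input_ids).logits.argmax(dim=-1)   # (batch, seq)
        # preds[:, t] is the model's guess for token t+1, so shift right
        generated = torch.cat([input_ids[:, :1], preds[:, :-1]], dim=1)
        coin = torch.rand(input_ids.shape, device=input_ids.device)
        mixed = torch.where(coin < mix_prob, generated, input_ids)
        return mixed  # labels remain the original ground-truth tokens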
1 code implementation • 17 Oct 2024 • Zeren Shui, Petros Karypis, Daniel S. Karls, Mingjian Wen, Saurav Manchanda, Ellad B. Tadmor, George Karypis
In this paper, we propose a multi-task learning (MTL) framework that jointly fine-tunes PLMs on a dataset of primary interest together with multiple auxiliary CIC datasets to take advantage of additional supervision signals.
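A minimal sketch of one such joint fine-tuning step, assuming a shared encoder with one classification head per dataset and an illustrative aux_weight for down-weighting the auxiliary losses:

    import torch

    def mtl_step(plm, heads, batches, optimizer, aux_weight=0.5):
        # One joint step: a shared pre-trained LM encoder with a separate
        # classification head per dataset; auxiliary CIC datasets are
        # down-weighted relative to the primary one via aux_weight.
        optimizer.zero_grad()
        loss = 0.0
        for name, batch in batches.items():
            cls = plm(**batch["inputs"]).last_hidden_state[:, 0]  # [CLS]
            task_loss = torch.nn.functional.cross_entropy(
                heads[name](cls), batch["labels"])
            loss = loss + (1.0 if name == "primary" else aux_weight) * task_loss
        loss.backward()
        optimizer.step()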
no code implementations • 17 Oct 2024 • Ke Yang, Yao Liu, Sapana Chaudhary, Rasool Fakoor, Pratik Chaudhari, George Karypis, Huzefa Rangwala
On the other hand, there has been limited study of the misalignment between a web agent's observation/action representation and the pre-training data of the LLM it is built on.
no code implementations • 2 Sep 2024 • Soumajyoti Sarkar, Leonard Lausen, Volkan Cevher, Sheng Zha, Thomas Brox, George Karypis
Sparse Mixture-of-Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
no code implementations • 12 Jul 2024 • Youngsuk Park, Kailash Budhathoki, Liangfu Chen, Jonas Kübler, Jiaji Huang, Matthäus Kleindessner, Jun Huan, Volkan Cevher, Yida Wang, George Karypis
Powerful foundation models, including large language models (LLMs), with Transformer architectures have ushered in a new era of Generative AI across various industries.
2 code implementations • 19 Jun 2024 • Rami Aly, Zhiqiang Tang, Samson Tan, George Karypis
Large Language Models (LLMs) frequently hallucinate, impeding their reliability in mission-critical situations.
no code implementations • 13 Jun 2024 • Shichang Zhang, Da Zheng, Jiani Zhang, Qi Zhu, Xiang Song, Soji Adeshina, Christos Faloutsos, George Karypis, Yizhou Sun
Large Language Models (LLMs), noted for their superior text understanding abilities, offer a solution for processing the text in graphs, but face integration challenges due to their limited ability to encode graph structures and their computational complexity when dealing with the extensive text in large neighborhoods of interconnected nodes.
1 code implementation • 10 Jun 2024 • Da Zheng, Xiang Song, Qi Zhu, Jian Zhang, Theodore Vasiloudis, Runjie Ma, Houyu Zhang, Zichen Wang, Soji Adeshina, Israt Nisa, Alejandro Mottini, Qingjun Cui, Huzefa Rangwala, Belinda Zeng, Christos Faloutsos, George Karypis
GraphStorm has the following desirable properties: (a) Easy to use: it can perform graph construction and model training and inference with just a single command; (b) Expert-friendly: GraphStorm contains many advanced GML modeling techniques to handle complex graph data and improve model performance; (c) Scalable: every component in GraphStorm can operate on graphs with billions of nodes and can scale model training and inference to different hardware without changing any code.
1 code implementation • 30 May 2024 • Costas Mavromatis, George Karypis
In our GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information, while the LLM leverages its natural language processing ability for ultimate KGQA.
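A rough sketch of that division of labor; the subgraph, gnn, and llm interfaces here are hypothetical stand-ins rather than the paper's API:

    def gnn_rag(question, subgraph, gnn, llm, top_k=5):
        # The GNN scores subgraph entities as candidate answers; paths from
        # the question entities to the top candidates are verbalized and
        # handed to the LLM as retrieved context for the final answer.
        scores = gnn(subgraph, question)                  # entity -> score
        answers = sorted(scores, key=scores.get, reverse=True)[:top_k]
        paths = [subgraph.shortest_path(q, a)
                 for q in subgraph.question_entities for a in answers]
        context = "\n".join(" -> ".join(p) for p in paths)
        prompt = f"Reasoning paths:\n{context}\n\nQuestion: {question}\nAnswer:"
        return llm.generate(prompt)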
no code implementations • 28 Apr 2024 • Qi Zhu, Da Zheng, Xiang Song, Shichang Zhang, Bowen Jin, Yizhou Sun, George Karypis
Inspired by this, we introduce Graph-aware Parameter-Efficient Fine-Tuning - GPEFT, a novel approach for efficient graph representation learning with LLMs on text-rich graphs.
no code implementations • 24 Apr 2024 • Zhiqiang Tang, Haoyang Fang, Su Zhou, Taojiannan Yang, Zihan Zhong, Tony Hu, Katrin Kirchhoff, George Karypis
AutoGluon-Multimodal (AutoMM) is introduced as an open-source AutoML library designed specifically for multimodal learning.
1 code implementation • 17 Apr 2024 • Costas Mavromatis, Petros Karypis, George Karypis
PackLLM performs model fusion by solving an optimization problem for determining each LLM's importance, so that perplexity over the input prompt is minimized.
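A hedged sketch of perplexity-based fusion in this spirit (it assumes the LLMs share a tokenizer and an HF-style causal-LM interface; PackLLM's exact optimization may differ):

    import torch

    def perplexity_weighted_fusion(models, prompt_ids):
        # Weight each LLM by how well it models the prompt (lower
        # perplexity -> higher weight), then mix next-token distributions.
        nll = torch.stack([m(prompt_ids, labels=prompt_ids).loss
                           for m in models])
        weights = torch.softmax(-nll, dim=0)              # importance per LLM
        dists = [torch.softmax(m(prompt_ids).logits[:, -1], -1)
                 for m in models]
        return sum(w * d for w, d in zip(weights, dists)) # fused distribution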
2 code implementations • 28 Feb 2024 • Zhiqi Bu, Xinwei Zhang, Mingyi Hong, Sheng Zha, George Karypis
The superior performance of large foundation models relies on the use of massive amounts of high-quality data, which often contain sensitive, private and copyrighted material that requires formal protection.
no code implementations • 27 Feb 2024 • Vyas Raina, Samson Tan, Volkan Cevher, Aditya Rawal, Sheng Zha, George Karypis
Deep learning-based Natural Language Processing (NLP) models are vulnerable to adversarial attacks, where small perturbations can cause a model to misclassify.
1 code implementation • 22 Feb 2024 • Kezhi Kong, Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Chuan Lei, Christos Faloutsos, Huzefa Rangwala, George Karypis
Large Language Models (LLMs) trained on large volumes of data excel at various natural language tasks, but they cannot handle tasks requiring knowledge they have not been trained on.
no code implementations • 3 Feb 2024 • Costas Mavromatis, Petros Karypis, George Karypis
Our method, termed SemPool, represents KG facts with pre-trained LMs, learns to aggregate their semantic information, and fuses it at different layers of the LM.
no code implementations • 20 Nov 2023 • Zhiqi Bu, Justin Chiu, Ruixuan Liu, Sheng Zha, George Karypis
Deep learning with large models has achieved great success in a wide range of domains.
1 code implementation • 30 Oct 2023 • Costas Mavromatis, Balasubramaniam Srinivasan, Zhengyuan Shen, Jiani Zhang, Huzefa Rangwala, Christos Faloutsos, George Karypis
Large Language Models (LLMs) can adapt to new tasks via in-context learning (ICL).
no code implementations • 30 Oct 2023 • Zhiqi Bu, Ruixuan Liu, Yu-Xiang Wang, Sheng Zha, George Karypis
Recent advances have substantially improved the accuracy, memory cost, and training speed of differentially private (DP) deep learning, especially on large vision and language models with millions to billions of parameters.
no code implementations • 23 Oct 2023 • Petros Karypis, Julian McAuley, George Karypis
Our method benefits both models trained with absolute positional embeddings, by extending their input contexts, and popular relative positional embedding methods, showing reduced perplexity on sequences longer than those seen during training.
1 code implementation • 19 Oct 2023 • Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Shen Wang, Huzefa Rangwala, George Karypis
Recent advances in large language models have revolutionized many sectors, including the database industry.
1 code implementation • 14 Oct 2023 • Hengrui Zhang, Jiani Zhang, Balasubramaniam Srinivasan, Zhengyuan Shen, Xiao Qin, Christos Faloutsos, Huzefa Rangwala, George Karypis
Recent advances in tabular data generation have greatly enhanced synthetic data quality.
no code implementations • 2 Oct 2023 • Ruixuan Liu, Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
The success of large neural networks is crucially determined by the availability of data.
1 code implementation • NeurIPS 2023 • Pei Chen, Soumajyoti Sarkar, Leonard Lausen, Balasubramaniam Srinivasan, Sheng Zha, Ruihong Huang, George Karypis
Language models pretrained on large collections of tabular data have demonstrated their effectiveness in several downstream tasks.
no code implementations • 14 Jul 2023 • Hongkuan Zhou, Da Zheng, Xiang Song, George Karypis, Viktor Prasanna
Even worse, the tremendous overhead of synchronizing the node memory makes it impractical to deploy on distributed GPU clusters.
1 code implementation • NeurIPS 2023 • Tuan Dinh, Jinman Zhao, Samson Tan, Renato Negrinho, Leonard Lausen, Sheng Zha, George Karypis
We find that the presence of potential bugs significantly degrades the generation performance of the high-performing Code-LLMs.
no code implementations • 1 Jun 2023 • Hengzhi Pei, Jinman Zhao, Leonard Lausen, Sheng Zha, George Karypis
To better solve this task, we query a program analyzer for information relevant to a given function call, and consider ways to provide the analyzer results to different code completion models during inference and training.
1 code implementation • 10 May 2023 • Bingzhao Zhu, Xingjian Shi, Nick Erickson, Mu Li, George Karypis, Mahsa Shoaran
The success of self-supervised learning in computer vision and natural language processing has motivated pretraining methods on tabular data.
1 code implementation • 20 Apr 2023 • Costas Mavromatis, Vassilis N. Ioannidis, Shen Wang, Da Zheng, Soji Adeshina, Jun Ma, Han Zhao, Christos Faloutsos, George Karypis
Different from conventional knowledge distillation, GRAD jointly optimizes a GNN teacher and a graph-free student over the graph's nodes via a shared LM.
no code implementations • 13 Feb 2023 • Danilo Ribeiro, Shen Wang, Xiaofei Ma, Henry Zhu, Rui Dong, Deguang Kong, Juliette Burger, Anjelica Ramos, William Wang, Zhiheng Huang, George Karypis, Bing Xiang, Dan Roth
We introduce STREET, a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
3 code implementations • 2 Feb 2023 • Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola
Experimental results on ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach.
Ranked #4 on Science Question Answering on ScienceQA
no code implementations • 31 Jan 2023 • Hengrui Zhang, Shen Wang, Vassilis N. Ioannidis, Soji Adeshina, Jiani Zhang, Xiao Qin, Christos Faloutsos, Da Zheng, George Karypis, Philip S. Yu
Graph Neural Networks (GNNs) are currently dominant in modeling graph-structured data, while their heavy reliance on graph structure for inference significantly impedes their widespread application.
no code implementations • 10 Dec 2022 • Chaoyang He, Shuai Zheng, Aston Zhang, George Karypis, Trishul Chilimbi, Mahdi Soltanolkotabi, Salman Avestimehr
Mixture-of-Experts (MoE) parallelism is a recent advancement that scales up the model size with constant computational cost.
1 code implementation • 24 Oct 2022 • Costas Mavromatis, George Karypis
Our method, termed ReaRev, introduces a new way to KGQA reasoning with respect to both instruction decoding and execution.
Ranked #1 on Semantic Parsing on WebQuestionsSP
1 code implementation • 14 Oct 2022 • Zeren Shui, Daniel S. Karls, Mingjian Wen, Ilia A. Nikiforov, Ellad B. Tadmor, George Karypis
In recent years, neural network (NN)-based potentials trained on quantum mechanical (DFT-labeled) data have emerged as a more accurate alternative to conventional EIPs.
1 code implementation • 30 Sep 2022 • Yulun Wu, Robert A. Barton, Zichen Wang, Vassilis N. Ioannidis, Carlo De Donno, Layne C. Price, Luis F. Voloch, George Karypis
Predicting the responses of a cell under perturbations may bring important benefits to drug discovery and personalized therapeutics.
2 code implementations • 30 Sep 2022 • Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
Our implementation achieves state-of-the-art (SOTA) accuracy at very small extra cost: on GPT2, and at almost the same memory cost (<1% overhead), BK has 1.03X the time complexity of standard training (0.83X training speed in practice) and 0.61X the time complexity of the most efficient DP implementation (1.36X training speed in practice).
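For context, a minimal sketch of the per-sample clipping and noising computation that DP optimizers realize and that BK accelerates (this is the standard DP-SGD update, not BK's book-keeping trick itself):

    import torch

    def dp_noisy_gradient(per_sample_grads, clip_norm=1.0, sigma=1.0):
        # Clip each per-sample gradient to clip_norm, sum, then add
        # Gaussian noise scaled by sigma * clip_norm before averaging.
        clipped = [g * (clip_norm / (g.norm() + 1e-12)).clamp(max=1.0)
                   for g in per_sample_grads]
        total = torch.stack(clipped).sum(dim=0)
        total = total + torch.randn_like(total) * sigma * clip_norm
        return total / len(per_sample_grads)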
2 code implementations • 30 Sep 2022 • Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis
We study the problem of differentially private (DP) fine-tuning of large pre-trained models -- a recent privacy-preserving approach suitable for solving downstream tasks with sensitive data.
2 code implementations • 13 Sep 2022 • Yulun Wu, Layne C. Price, Zichen Wang, Vassilis N. Ioannidis, Robert A. Barton, George Karypis
Estimating an individual's potential outcomes under counterfactual treatments is a challenging task for traditional causal inference and supervised learning approaches when the outcome is high-dimensional (e.g., gene expressions, impulse responses, human faces) and covariates are relatively limited.
no code implementations • 22 Jun 2022 • Vassilis N. Ioannidis, Xiang Song, Da Zheng, Houyu Zhang, Jun Ma, Yi Xu, Belinda Zeng, Trishul Chilimbi, George Karypis
The effectiveness of our framework is achieved by stage-wise fine-tuning of the BERT model, first with heterogeneous graph information and then with a GNN model.
no code implementations • 21 Jun 2022 • Chunxing Yin, Da Zheng, Israt Nisa, Christos Faloutsos, George Karypis, Richard Vuduc
This paper describes a new method for representing embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition.
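A toy two-core example of the idea, with illustrative shapes:

    import numpy as np

    def tt_lookup(core1, core2, index, n2):
        # Two-core tensor-train sketch: an (n1*n2, d1*d2) embedding table
        # is stored as core1 with shape (n1, d1, r) and core2 with shape
        # (r, n2, d2), cutting memory from n1*n2*d1*d2 to r*(n1*d1 + n2*d2).
        i1, i2 = divmod(index, n2)            # factor the row index
        return (core1[i1] @ core2[:, i2, :]).reshape(-1)   # (d1*d2,) vector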
no code implementations • 9 Jun 2022 • Zhenwei Dai, Vasileios Ioannidis, Soji Adeshina, Zak Jost, Christos Faloutsos, George Karypis
ScatterSample employs a sampling module termed DiverseUncertainty to collect instances with large uncertainty from different regions of the sample space for labeling.
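A sketch of this select-uncertain-then-diversify recipe; the entropy-plus-KMeans choices here are plausible stand-ins rather than the paper's exact procedure:

    import numpy as np
    from sklearn.cluster import KMeans

    def diverse_uncertainty_sample(probs, embeddings, budget, pool_factor=10):
        # Keep the most uncertain nodes (entropy of predicted class
        # probabilities), then cluster their embeddings and pick the most
        # uncertain node per cluster so labels cover diverse regions.
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        pool = np.argsort(-entropy)[: budget * pool_factor]
        labels = KMeans(n_clusters=budget, n_init=10).fit_predict(embeddings[pool])
        picks = [pool[labels == c][np.argmax(entropy[pool][labels == c])]
                 for c in range(budget)]
        return np.array(picks)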
no code implementations • NAACL 2022 • Vishakh Padmakumar, Leonard Lausen, Miguel Ballesteros, Sheng Zha, He He, George Karypis
Recent work has found that multi-task training with a large number of diverse tasks can uniformly improve downstream performance on unseen target tasks.
no code implementations • 4 Apr 2022 • Jiacheng Li, Tong Zhao, Jin Li, Jim Chan, Christos Faloutsos, George Karypis, Soo-Min Pantel, Julian McAuley
We propose to model user dynamics from shopping intents and interacted items simultaneously.
2 code implementations • 28 Mar 2022 • Hongkuan Zhou, Da Zheng, Israt Nisa, Vasileios Ioannidis, Xiang Song, George Karypis
Our temporal parallel sampler achieves an average of 173x speedup on a multi-core CPU compared with the baselines.
no code implementations • 18 Mar 2022 • Trong Nghia Hoang, Anoop Deoras, Tong Zhao, Jin Li, George Karypis
We develop and investigate a personalizable deep metric model that captures both the internal content of items and how users interacted with them.
no code implementations • 22 Jan 2022 • Ancy Sarah Tom, Nesreen K. Ahmed, George Karypis
To account for the structure in the node representations, Mazi generates node representations at each level of the hierarchy, and utilizes them to influence the node representations of the original graph.
1 code implementation • 10 Dec 2021 • Costas Mavromatis, Prasanna Lakkur Subramanyam, Vassilis N. Ioannidis, Soji Adeshina, Phillip R. Howard, Tetiana Grinberg, Nagib Hakim, George Karypis
The first computes a textual representation of a given question, the second combines it with the entity embeddings for entities involved in the question, and the third generates question-specific time embeddings.
Ranked #1 on Question Answering on CronQuestions
1 code implementation • 27 Nov 2021 • Fabio Broccatelli, Richard Trager, Michael Reutlinger, George Karypis, Mufei Li
In this work, we benchmark a variety of single- and multi-task graph neural network (GNN) models against lower-bar and higher-bar traditional machine learning approaches employing human engineered molecular features.
1 code implementation • ACL 2022 • Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He
The goal of meta-learning is to learn to adapt to a new task with only a few labeled examples.
no code implementations • 12 Oct 2021 • Cole Hawkins, Vassilis N. Ioannidis, Soji Adeshina, George Karypis
Consistency training is a popular method to improve deep learning models in computer vision and natural language processing.
no code implementations • EMNLP (sustainlp) 2021 • Haoyu He, Xingjian Shi, Jonas Mueller, Sheng Zha, Mu Li, George Karypis
We aim to identify how different components in the KD pipeline affect the resulting performance and how much the optimal KD pipeline varies across different datasets/tasks, such as the data augmentation policy, the loss function, and the intermediate representation for transferring the knowledge between teacher and student.
1 code implementation • 14 Sep 2021 • Costas Mavromatis, George Karypis
Many real-world graphs involve different types of nodes and relations between nodes, making them heterogeneous by nature.
no code implementations • 13 Sep 2021 • Agoritsa Polyzou, Maria Kalantzi, George Karypis
Course selection is challenging for students in higher educational institutions.
no code implementations • 9 Sep 2021 • Athanasios N. Nikolakopoulos, Xia Ning, Christian Desrosiers, George Karypis
Collaborative recommendation approaches based on nearest-neighbors are still highly popular today due to their simplicity, their efficiency, and their ability to produce accurate and personalized recommendations.
no code implementations • 31 Aug 2021 • Maria Kalantzi, George Karypis
GNNs compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
1 code implementation • 25 Aug 2021 • Zonghan Wu, Da Zheng, Shirui Pan, Quan Gan, Guodong Long, George Karypis
This paper aims to unify spatial dependency and temporal dependency in a non-Euclidean space while capturing the inner spatial-temporal dependencies for traffic data.
1 code implementation • 27 Jun 2021 • Mufei Li, Jinjing Zhou, Jiajing Hu, Wenxuan Fan, Yangkang Zhang, Yaxin Gu, George Karypis
Graph neural networks (GNNs) constitute a class of deep learning methods for graph data.
no code implementations • 3 May 2021 • Saurav Manchanda, Da Zheng, George Karypis
To address this question, we propose our GCN framework 'Deep Heterogeneous Graph Convolutional Network (DHGCN)', which takes advantage of the schema of a heterogeneous graph and uses a hierarchical approach to effectively utilize information many hops away.
1 code implementation • 4 Mar 2021 • Shalini Pandey, George Karypis, Jaideep Srivastava
The interaction modeling layer is responsible for updating the embedding of a user and an item when the user interacts with the item.
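A minimal sketch of such a mutually recursive update (a common JODIE-style design; the GRU cells here are an assumption, not the paper's exact architecture):

    import torch.nn as nn

    class InteractionUpdate(nn.Module):
        # When a user interacts with an item, update each embedding from
        # the other's state with its own recurrent cell.
        def __init__(self, dim):
            super().__init__()
            self.user_rnn = nn.GRUCell(dim, dim)
            self.item_rnn = nn.GRUCell(dim, dim)

        def forward(self, user_emb, item_emb):
            new_user = self.user_rnn(item_emb, user_emb)   # item drives user
            new_item = self.item_rnn(user_emb, item_emb)   # user drives item
            return new_user, new_item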
no code implementations • 4 Mar 2021 • Linfeng Liu, Hoan Nguyen, George Karypis, Srinivasan Sengamedu
Learning from source code usually requires a large amount of labeled data.
1 code implementation • 27 Feb 2021 • Yixin Liu, Zhao Li, Shirui Pan, Chen Gong, Chuan Zhou, George Karypis
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair, which can capture the relationship between each node and its neighboring substructure in an unsupervised way.
1 code implementation • IEEE International Conference on Data Mining Workshops (ICDM Workshops) 2021 • Shalini Pandey, Andrew Lan, George Karypis, Jaideep Srivastava
The projection operation learns to estimate future embedding of students and threads.
no code implementations • 19 Jan 2021 • Balasubramaniam Srinivasan, Da Zheng, George Karypis
In this work, we exploit the incidence structure to develop a hypergraph neural network to learn provably expressive representations of variable sized hyperedges which preserve local-isomorphism in the line graph of the hypergraph, while also being invariant to permutations of its constituent vertices.
no code implementations • 16 Jan 2021 • Shalini Pandey, George Karypis, Jaideep Srivastava
The recent release of the large-scale EdNet student performance dataset (Choi et al., 2019) motivates an analysis of the performance of the deep learning approaches that have been proposed to solve KT.
no code implementations • 10 Jan 2021 • Shalini Pandey, Andrew Lan, George Karypis, Jaideep Srivastava
The projection operation learns to estimate future embedding of students and threads.
no code implementations • 15 Dec 2020 • Saurav Manchanda, Mohit Sharma, George Karypis
Slot-filling refers to the task of annotating individual terms in a query with the corresponding intended product characteristics (product type, brand, gender, size, color, etc.).
1 code implementation • 11 Oct 2020 • Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, George Karypis
To minimize the overheads associated with distributed computations, DistDGL uses a high-quality and light-weight min-cut graph partitioning algorithm along with multiple balancing constraints.
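A usage sketch of that partitioning step (argument names follow recent DGL releases and may differ across versions):

    import dgl
    import torch

    # Build a stand-in graph; a real pipeline would load its own data.
    g = dgl.rand_graph(1_000_000, 10_000_000)
    g.ndata["train_mask"] = torch.rand(g.num_nodes()) < 0.1

    dgl.distributed.partition_graph(
        g, graph_name="example", num_parts=4, out_path="parts/",
        part_method="metis",                      # min-cut partitioning
        balance_ntypes=g.ndata["train_mask"],     # balance training nodes
        balance_edges=True,                       # balance edge counts too
    )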
no code implementations • 28 Sep 2020 • Vassilis N. Ioannidis, Da Zheng, George Karypis
Learning unsupervised node embeddings facilitates several downstream tasks such as node classification and link prediction.
1 code implementation • 26 Sep 2020 • Zeren Shui, George Karypis
As they carry great potential for modeling complex interactions, graph neural network (GNN)-based methods have been widely used to predict quantum mechanical properties of molecules.
Ranked #5 on Formation Energy on QM9
2 code implementations • 15 Sep 2020 • Costas Mavromatis, George Karypis
Motivated by this observation, we propose a graph representation learning method called Graph InfoClust (GIC), that seeks to additionally capture cluster-level information content.
Ranked #2 on Link Prediction on Citeseer
no code implementations • AACL (knlp) 2020 • Colby Wise, Vassilis N. Ioannidis, Miguel Romero Calvo, Xiang Song, George Price, Ninad Kulkarni, Ryan Brand, Parminder Bhatia, George Karypis
Finally, we propose a document similarity engine that leverages low-dimensional graph embeddings from the CKG with semantic embeddings for similar article retrieval.
1 code implementation • 20 Jul 2020 • Vassilis N. Ioannidis, Da Zheng, George Karypis
This paper proposes an inductive RGCN for learning informative relation embeddings even in the few-shot learning regime.
1 code implementation • 20 Jul 2020 • Vassilis N. Ioannidis, Da Zheng, George Karypis
Learning unsupervised node embeddings facilitates several downstream tasks such as node classification and link prediction.
no code implementations • 21 May 2020 • Xiangxiang Zeng, Xiang Song, Tengfei Ma, Xiaoqin Pan, Yadi Zhou, Yuan Hou, Zheng Zhang, George Karypis, Feixiong Cheng
While this study by no means recommends specific drugs, it demonstrates a powerful deep learning methodology to prioritize existing drugs for further investigation, which holds the potential of accelerating therapeutic development for COVID-19.
1 code implementation • 18 Apr 2020 • Da Zheng, Xiang Song, Chao Ma, Zeyuan Tan, Zihao Ye, Jin Dong, Hao Xiong, Zheng Zhang, George Karypis
Experiments on knowledge graphs consisting of over 86M nodes and 338M edges show that DGL-KE can compute embeddings in 100 minutes on an EC2 instance with 8 GPUs and 30 minutes on an EC2 cluster with 4 machines with 48 cores/machine.
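For reference, the TransE (L2) scoring function, one of the knowledge graph embedding models DGL-KE trains at this scale (gamma is a margin hyperparameter):

    import torch

    def transe_score(head, rel, tail, gamma=12.0):
        # A triple (h, r, t) is plausible when the relation vector
        # translates the head embedding close to the tail embedding;
        # higher score means a more plausible triple.
        return gamma - torch.norm(head + rel - tail, p=2, dim=-1)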
no code implementations • 9 Mar 2020 • Sara Morsy, George Karypis
Grade prediction for future courses not yet taken by students is important as it can help them and their advisers during the process of course selection as well as for designing personalized degree plans and modifying them based on their performance.
1 code implementation • 26 Nov 2019 • Saurav Manchanda, George Karypis
Experiments on the credit attribution task on a variety of datasets show that the sentence class labels generated by CAWA outperform the competing approaches.
no code implementations • 9 Sep 2019 • Athanasios N. Nikolakopoulos, George Karypis
Item-based models are among the most popular collaborative filtering approaches for building recommender systems.
7 code implementations • 3 Sep 2019 • Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, Zheng Zhang
Advancing research in the emerging field of deep graph learning requires new tools to support tensor computation over graphs.
Ranked #34 on Node Classification on Cora
1 code implementation • 22 Aug 2019 • Saurav Manchanda, Mohit Sharma, George Karypis
Moreover, for the tasks of identifying the important terms in a query and for predicting the additional terms that represent product intent, experiments illustrate that our approaches outperform the non-contextual baselines.
8 code implementations • 16 Jul 2019 • Shalini Pandey, George Karypis
Knowledge tracing is the task of modeling each student's mastery of knowledge concepts (KCs) as (s)he engages with a sequence of learning activities.
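A compact self-attentive knowledge-tracing model in this spirit (sizes and encoding details are assumptions, not the paper's configuration):

    import torch
    import torch.nn as nn

    class TinySAKT(nn.Module):
        def __init__(self, num_exercises, dim=64, heads=4):
            super().__init__()
            # interaction id = exercise id, offset by num_exercises if correct
            self.inter_emb = nn.Embedding(2 * num_exercises, dim)
            self.ex_emb = nn.Embedding(num_exercises, dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.out = nn.Linear(dim, 1)

        def forward(self, past_ex, past_correct, next_ex):
            inter = past_ex + past_correct * self.ex_emb.num_embeddings
            keys = self.inter_emb(inter)                   # (B, T, dim)
            query = self.ex_emb(next_ex).unsqueeze(1)      # (B, 1, dim)
            ctx, _ = self.attn(query, keys, keys)          # attend to history
            return torch.sigmoid(self.out(ctx)).squeeze(-1)  # P(correct)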
no code implementations • 22 Apr 2019 • Sara Morsy, George Karypis
To this end, we propose two different grade-aware course recommendation approaches to recommend to each student his/her optimal set of courses.
no code implementations • 22 Apr 2019 • Mohit Sharma, F. Maxwell Harper, George Karypis
Our analysis of these ratings shows that, though the majority of users provide the average of the ratings on a set's constituent items as the rating on the set, there exists a significant number of users who tend to consistently either under- or over-rate the sets.
no code implementations • 22 Apr 2019 • Sara Morsy, George Karypis
Grade prediction for future courses not yet taken by students is important as it can help them and their advisers during the process of course selection as well as for designing personalized degree plans and modifying them based on their performance.
no code implementations • 22 Apr 2019 • Mohit Sharma, Jiayu Zhou, Junling Hu, George Karypis
User-personalized, non-collaborative methods based on item features can be used to address this item cold-start problem.
1 code implementation • 22 Apr 2019 • Mohit Sharma, George Karypis
In this work, we show that the skewed distribution of ratings in the user-item rating matrix of real-world datasets affects the accuracy of matrix-completion-based approaches.
no code implementations • 14 Apr 2019 • Saurav Manchanda, George Karypis
Word2Vec's Skip-Gram model is the current state-of-the-art approach for estimating the distributed representations of words.
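The Skip-Gram objective with negative sampling, in brief (vectors are rows from the input/output embedding matrices):

    import torch
    import torch.nn.functional as F

    def sgns_loss(center, context, negatives):
        # Pull the center word's vector toward its observed context word,
        # push it away from k sampled negative context words.
        pos = F.logsigmoid(center @ context)
        neg = F.logsigmoid(-(negatives @ center)).sum()
        return -(pos + neg)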
no code implementations • 14 Apr 2019 • Saurav Manchanda, George Karypis
Segmenting text into semantically coherent segments is an important task with applications in information retrieval and text summarization.
1 code implementation • 20 Feb 2018 • Zhuliu Li, Raphael Petegrosso, Shaden Smith, David Sterling, George Karypis, Rui Kuang
In this paper, we generalize a widely used label propagation model to the normalized tensor product graph, and propose an optimization formulation and a scalable Low-rank Tensor-based Label Propagation algorithm (LowrankTLP) to infer multi-relations for two learning tasks, hyperlink prediction and multiple graph alignment.
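For orientation, the classic label-propagation iteration that the paper generalizes to the normalized tensor product graph:

    import numpy as np

    def label_propagation(S, Y, alpha=0.9, iters=50):
        # F <- alpha * S @ F + (1 - alpha) * Y, where S is a normalized
        # adjacency matrix and Y holds the known label assignments.
        F = Y.copy()
        for _ in range(iters):
            F = alpha * (S @ F) + (1 - alpha) * Y
        return F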