1 code implementation • 24 Nov 2024 • Paimon Goulart, Evangelos E. Papalexakis
Large Language Models (LLMs) have demonstrated the ability to solve complex tasks through In-Context Learning (ICL), where models learn from a few input-output pairs without explicit fine-tuning.
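The few-shot setup the abstract refers to can be illustrated with a prompt that embeds the demonstrations directly; the task, labels, and example texts below are hypothetical and only sketch the general pattern, not the paper's experiments.

```python
# Minimal sketch of in-context learning: the only "training" signal is a handful
# of input-output demonstrations placed in the prompt; no weights are updated.
demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("I would not recommend this restaurant.", "negative"),
]
query = "The service was quick and friendly."

prompt_lines = ["Classify the sentiment of each review."]
for text, label in demonstrations:                     # the few input-output pairs
    prompt_lines.append(f"Review: {text}\nSentiment: {label}")
prompt_lines.append(f"Review: {query}\nSentiment:")    # the model completes this line
prompt = "\n\n".join(prompt_lines)

print(prompt)   # this string would be sent to an LLM completion endpoint as-is
```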
1 code implementation • 8 Oct 2024 • Shaan Pakala, Bryce Graw, Dawon Ahn, Tam Dinh, Mehnaz Tabassum Mahin, Vassilis Tsotras, Jia Chen, Evangelos E. Papalexakis
Hyperparameter optimization is an essential component of many data science pipelines and typically entails exhaustive, time- and resource-consuming computations to explore the combinatorial search space.
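As a point of reference for that combinatorial cost, a plain exhaustive grid search (the baseline setting, not the method proposed in the paper) might look like the sketch below; the model, dataset, and grid are illustrative placeholders.

```python
# Exhaustive grid search: every combination of hyperparameter values is fitted
# and scored, so the number of runs grows multiplicatively with each new knob.
from itertools import product

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [2, 4, 8, None],
    "max_features": ["sqrt", "log2"],
}

results = {}
for combo in product(*grid.values()):               # 3 * 4 * 2 = 24 model fits here
    params = dict(zip(grid.keys(), combo))
    clf = RandomForestClassifier(random_state=0, **params)
    results[combo] = cross_val_score(clf, X, y, cv=3).mean()

best = max(results, key=results.get)
print(dict(zip(grid.keys(), best)), results[best])
```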
no code implementations • 12 Jul 2024 • Yunshu Wu, Yingtao Luo, Xianghao Kong, Evangelos E. Papalexakis, Greg Ver Steeg
This can become problematic for all sampling methods, especially when we move to parallel sampling, which requires initializing and updating the entire sample trajectory of the dynamics in parallel, leading to many out-of-distribution (OOD) evaluations.
no code implementations • 25 Jun 2024 • Yiran Luo, Het Patel, Yu Fu, Dawon Ahn, Jia Chen, Yue Dong, Evangelos E. Papalexakis
Recent research has shown that pruning large-scale language models for inference is an effective approach to improving model efficiency, significantly reducing model weights with minimal impact on performance.
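For illustration of the general idea (not the specific pruning technique studied in the paper), magnitude pruning simply zeroes the smallest weights; the weight matrix below is a random stand-in.

```python
# Illustrative magnitude pruning: weights below a magnitude threshold are zeroed,
# reducing the effective number of parameters with (ideally) little accuracy loss.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512))      # stand-in for one layer's weight matrix

sparsity = 0.5                             # fraction of weights to remove
threshold = np.quantile(np.abs(weights), sparsity)
mask = np.abs(weights) >= threshold
pruned_weights = weights * mask

print(f"zeroed {(~mask).mean():.0%} of the weights")
```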
1 code implementation • 12 Mar 2024 • Zubair Qazi, William Shiao, Evangelos E. Papalexakis
As natural language models like ChatGPT become increasingly prevalent in applications and services, the need for robust and accurate methods to detect their output is of paramount importance.
no code implementations • 1 Dec 2023 • Biqian Cheng, Evangelos E. Papalexakis, Jia Chen
Canonical Correlation Analysis (CCA) has been widely applied to jointly embed multiple views of data in a maximally correlated latent space.
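A minimal sketch of CCA on two synthetic views that share a common latent signal, using scikit-learn; the data-generation details are purely illustrative.

```python
# CCA projects two views of the same samples into a shared space in which the
# corresponding components are maximally correlated.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 2))                      # latent signal seen by both views
view_a = shared @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
view_b = shared @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(200, 8))

cca = CCA(n_components=2)
za, zb = cca.fit_transform(view_a, view_b)              # jointly embedded views
print([round(np.corrcoef(za[:, k], zb[:, k])[0, 1], 3) for k in range(2)])
```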
no code implementations • 12 Jun 2023 • William Shiao, Uday Singh Saini, Yozen Liu, Tong Zhao, Neil Shah, Evangelos E. Papalexakis
CARL-G is adaptable to different clustering methods and cluster validity indices (CVIs), and we show that with the right choice of clustering method and CVI, CARL-G outperforms node classification baselines on 4/5 datasets with up to a 79x training speedup compared to the best-performing baseline.
1 code implementation • 25 Nov 2022 • William Shiao, Zhichun Guo, Tong Zhao, Evangelos E. Papalexakis, Yozen Liu, Neil Shah
In this work, we extensively evaluate the performance of existing non-contrastive methods for link prediction in both transductive and inductive settings.
1 code implementation • 19 Jun 2022 • William Shiao, Evangelos E. Papalexakis
We can then train a specialized single-use regression model on a synthetic set of tensors engineered to match a given input tensor and use that to estimate the canonical rank of the tensor - all without computing the expensive CPD.
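That workflow can be sketched roughly as follows; the features (truncated singular-value spectra of the mode unfoldings), tensor sizes, and regressor are assumptions made for illustration, not the paper's exact design.

```python
# Sketch: estimate CP rank by regression on synthetic tensors of known rank,
# using cheap spectral features instead of computing any CP decomposition.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def random_cp_tensor(shape, rank, rng):
    A, B, C = (rng.normal(size=(d, rank)) for d in shape)
    return np.einsum("ir,jr,kr->ijk", A, B, C)

def features(T):
    feats = []
    for mode in range(3):                                # spectrum of each unfolding
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        s = np.linalg.svd(unfolding, compute_uv=False)
        feats.extend(s[:10] / s[0])
    return feats

rng = np.random.default_rng(0)
shape = (20, 20, 20)
train_ranks = rng.integers(1, 11, size=200)
X = np.array([features(random_cp_tensor(shape, r, rng)) for r in train_ranks])

model = RandomForestRegressor(random_state=0).fit(X, train_ranks)   # single-use model
target = random_cp_tensor(shape, 5, rng)       # pretend this is the tensor of interest
print(model.predict([features(target)]))       # estimated canonical rank, ideally ~5
```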
no code implementations • 25 May 2022 • Stephanie Milani, Zhicheng Zhang, Nicholay Topin, Zheyuan Ryan Shi, Charles Kamhoua, Evangelos E. Papalexakis, Fei Fang
The first algorithm, IVIPER, extends VIPER, a recent method for single-agent interpretable RL, to the multi-agent setting.
no code implementations • 15 Aug 2021 • Sara Abdali, M. Alex O. Vasilescu, Evangelos E. Papalexakis
Generative neural network architectures, such as GANs, may be used to generate synthetic instances to compensate for the lack of real data.
1 code implementation • 2 Jul 2021 • Uday Singh Saini, Pravallika Devineni, Evangelos E. Papalexakis
These experiments reveal that as we go deeper in a network, inputs tend to have an increasing affinity to other inputs of the same class.
no code implementations • 15 Feb 2021 • Sara Abdali, Neil Shah, Evangelos E. Papalexakis
In this work, we introduce a novel generalization of graphs, i.e., the K-Nearest Hyperplanes (KNH) graph, where the nodes are defined by higher-order Euclidean subspaces for multi-view modeling of the nodes.
no code implementations • 15 Feb 2021 • Sara Abdali, Rutuja Gurav, Siddharth Menon, Daniel Fonseca, Negin Entezari, Neil Shah, Evangelos E. Papalexakis
To capture this overall look, we take screenshots of news articles served by either misinformative or trustworthy web domains and leverage a tensor decomposition based semi-supervised classification technique.
no code implementations • 23 Dec 2020 • Uday Singh Saini, Evangelos E. Papalexakis
In this work, we propose a framework to categorize the concepts a network learns based on the way it clusters a set of input examples, clusters neurons based on the examples they activate for, and clusters input features, all in the same latent space.
no code implementations • 14 Nov 2020 • Risul Islam, Md Omar Faruk Rokon, Evangelos E. Papalexakis, Michalis Faloutsos
How can we extend tensor decomposition to reveal a hierarchical structure of multi-modal data in a self-adaptive way?
1 code implementation • 14 Nov 2020 • Risul Islam, Md Omar Faruk Rokon, Evangelos E. Papalexakis, Michalis Faloutsos
Our approach and our platform constitute an important step towards detecting activities of interest from a forum in an unsupervised learning fashion in practice.
no code implementations • 17 Aug 2020 • Jia Chen, Evangelos E. Papalexakis
Node embeddings have attracted increasing attention in recent years.
1 code implementation • 8 May 2020 • Sara Abdali, Neil Shah, Evangelos E. Papalexakis
Distinguishing between misinformation and real information is one of the most challenging problems in today's interconnected world.
no code implementations • 18 Feb 2020 • Negin Entezari, Evangelos E. Papalexakis
Subtle, imperceptible perturbations of the input data can change the predictions of deep neural networks.
no code implementations • 8 Jan 2020 • Joobin Gharibshah, Evangelos E. Papalexakis, Michalis Faloutsos
How can we extract useful information from a security forum?
no code implementations • 19 Dec 2019 • Ravdeep Pasricha, Ekta Gujral, Evangelos E. Papalexakis
Data collected at very frequent intervals is usually extremely sparse and has no structure that is exploitable by modern tensor decomposition algorithms.
no code implementations • 18 Nov 2018 • Georgios Tsitsikas, Evangelos E. Papalexakis
Among the most popular methods is a class of algorithms that leverages compression in order to reduce the size of the tensor and potentially parallelize computations.
1 code implementation • 5 Nov 2018 • Chin-Chia Michael Yeh, Yan Zhu, Evangelos E. Papalexakis, Abdullah Mueen, Eamonn Keogh
Since its introduction, unsupervised representation learning has attracted a lot of attention from the research community, as it has been demonstrated to be highly effective and easy to apply in tasks such as dimensionality reduction, clustering, visualization, information retrieval, and semi-supervised learning.
no code implementations • 23 Aug 2018 • Niluthpol Chowdhury Mithun, Rameswar Panda, Evangelos E. Papalexakis, Amit K. Roy-Chowdhury
Inspired by the recent success of webly supervised learning in deep neural networks, we capitalize on readily-available web images with noisy annotations to learn robust image-text joint representation.
no code implementations • 3 Jul 2018 • Ekta Gujral, Ravdeep Pasricha, Tianxiong Yang, Evangelos E. Papalexakis
Tensor decompositions are powerful tools for large data analytics as they jointly model multiple aspects of data into one framework and enable the discovery of the latent structures and higher-order correlations within the data.
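A minimal CP (PARAFAC) decomposition with the TensorLy library, run on a random tensor, illustrates the kind of multi-aspect modeling referred to above; the rank and tensor shape are arbitrary.

```python
# CP/PARAFAC expresses a tensor as a sum of rank-one components, with one
# factor matrix per mode capturing that mode's latent structure.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
tensor = tl.tensor(rng.normal(size=(10, 8, 6)))

cp = parafac(tensor, rank=3)
weights, factors = cp
print([f.shape for f in factors])                    # [(10, 3), (8, 3), (6, 3)]

reconstruction = tl.cp_to_tensor(cp)
print(float(tl.norm(reconstruction - tensor) / tl.norm(tensor)))   # relative error
```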
no code implementations • 30 Jun 2018 • Sanaz Bahargam, Evangelos E. Papalexakis
In this paper, we propose a novel time-evolving topic discovery method which, in addition to extracting topics, identifies each topic's evolution over time as well as its level of difficulty, as inferred from the level of expertise of its main contributors.
no code implementations • 6 Jun 2018 • Uday Singh Saini, Evangelos E. Papalexakis
Furthermore, can we characterize a given deep neural network based on its observed behavior on different inputs?
no code implementations • 3 May 2018 • Saba A. Al-Sayouri, Danai Koutra, Evangelos E. Papalexakis, Sarah S. Lam
Representation learning algorithms aim to preserve local and global network structure by identifying node neighborhood notions.
no code implementations • 3 May 2018 • Saba A. Al-Sayouri, Ekta Gujral, Danai Koutra, Evangelos E. Papalexakis, Sarah S. Lam
Contrary to baseline methods, which generally learn explicit graph representations using only an adjacency matrix, t-PINE exploits a multi-view information graph: the adjacency matrix is the first view, and a nearest-neighbor adjacency, computed over the node features, is the second view, allowing it to learn explicit and implicit node representations using the Canonical Polyadic (a.k.a. CP) decomposition.
1 code implementation • 25 Apr 2018 • Ravdeep Pasricha, Ekta Gujral, Evangelos E. Papalexakis
In this paper, we define "concept" and "concept drift" in the context of streaming tensor decomposition, as the manifestation of the variability of latent concepts throughout the stream.
no code implementations • 24 Apr 2018 • Gisel Bastidas Guacho, Sara Abdali, Neil Shah, Evangelos E. Papalexakis
Most existing works on this topic focus on manual feature extraction and supervised classification models leveraging a large number of labeled (fake or real) articles.
no code implementations • 13 Apr 2018 • Joobin Gharibshah, Evangelos E. Papalexakis, Michalis Faloutsos
We propose RIPEx, a systematic approach to identify and label IP addresses in security forums by utilizing a cross-forum learning method.
no code implementations • 14 Mar 2018 • Ioakeim Perros, Evangelos E. Papalexakis, Haesun Park, Richard Vuduc, Xiaowei Yan, Christopher deFilippi, Walter F. Stewart, Jimeng Sun
We propose two variants, SUSTain_M and SUSTain_T, to handle matrix and tensor inputs, respectively.
1 code implementation • 12 Mar 2018 • Ardavan Afshar, Ioakeim Perros, Evangelos E. Papalexakis, Elizabeth Searles, Joyce Ho, Jimeng Sun
To tackle these challenges, we propose a COnstrained PARAFAC2 (COPA) method, which carefully incorporates optimization constraints such as temporal smoothness, sparsity, and non-negativity in the resulting factors.
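Constraints of this kind are commonly enforced inside alternating factorization updates through simple projection or proximal steps; the sketch below illustrates such generic operators and is not COPA itself.

```python
# Generic projection/proximal operators often used to enforce factor constraints
# inside alternating-optimization factorization algorithms.
import numpy as np

def project_nonnegative(F):
    return np.maximum(F, 0.0)                       # non-negativity projection

def soft_threshold(F, lam):
    return np.sign(F) * np.maximum(np.abs(F) - lam, 0.0)   # sparsity (L1 proximal step)

def smooth_in_time(F, alpha=0.5):
    S = F.copy()                                    # rows assumed ordered in time
    S[1:] = alpha * F[1:] + (1.0 - alpha) * F[:-1]  # simple temporal smoothing
    return S

F = np.random.default_rng(0).normal(size=(6, 3))    # stand-in factor matrix
print(soft_threshold(project_nonnegative(smooth_in_time(F)), lam=0.1))
```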
no code implementations • ICLR 2018 • Chin-Chia Michael Yeh, Yan Zhu, Evangelos E. Papalexakis, Abdullah Mueen, Eamonn Keogh
By reformulating the representation learning problem as a neighbor reconstruction problem, domain knowledge can be easily incorporated with appropriate definition of similarity or distance between objects.
1 code implementation • 31 Oct 2017 • Panos P. Markopoulos, Dimitris G. Chachlakis, Evangelos E. Papalexakis
We study rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way tensors, treated as a collection of $N$ $D \times M$ matrices that are to be jointly decomposed.
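For reference, one standard way to state the rank-1 L1-TUCKER2 problem over such matrices $\mathbf{X}_1, \dots, \mathbf{X}_N$ is the following; this is an illustrative formulation, and the paper should be consulted for the exact statement and its solution.

```latex
\max_{\substack{\mathbf{u} \in \mathbb{R}^{D},\ \|\mathbf{u}\|_2 = 1 \\
                \mathbf{v} \in \mathbb{R}^{M},\ \|\mathbf{v}\|_2 = 1}}
\ \sum_{n=1}^{N} \bigl| \mathbf{u}^{\top} \mathbf{X}_n \mathbf{v} \bigr|
```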
no code implementations • 4 Sep 2017 • Ishmam Zabir, Evangelos E. Papalexakis
In prior work, we proposed an automated tensor mining method that leverages a well-known quality heuristic from the field of Chemometrics, the Core Consistency Diagnostic (CORCONDIA), in order to automatically determine the rank of the PARAFAC decomposition.
no code implementations • 3 Sep 2017 • Ekta Gujral, Ravdeep Pasricha, Evangelos E. Papalexakis
In this paper we introduce SaMbaTen, a Sampling-based Batch Incremental Tensor Decomposition algorithm, which incrementally maintains the decomposition given new updates to the tensor dataset.
no code implementations • 13 Mar 2017 • Ioakeim Perros, Evangelos E. Papalexakis, Fei Wang, Richard Vuduc, Elizabeth Searles, Michael Thompson, Jimeng Sun
For example, when modeling medical features across a set of patients, the number and duration of treatments may vary widely in time, meaning there is no meaningful way to align their clinical records across time points for analysis purposes.
no code implementations • 6 Jul 2016 • Nicholas D. Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos E. Papalexakis, Christos Faloutsos
Tensors, or multi-way arrays, are functions of three or more indices $(i, j, k, \cdots)$ -- similar to matrices (two-way arrays), which are functions of two indices $(r, c)$ for (row, column).
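In code, a three-way array is simply an array addressed by three indices; the tiny NumPy example below is illustrative.

```python
# A 3-way tensor: each entry is addressed by a triple of indices (i, j, k),
# just as a matrix entry is addressed by a (row, column) pair.
import numpy as np

T = np.arange(24).reshape(2, 3, 4)     # a 2 x 3 x 4 tensor
print(T[1, 2, 3])                      # a single entry, indexed by (i, j, k)
print(T[:, :, 0].shape)                # fixing k yields a 2 x 3 matrix slice
```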
no code implementations • 11 Mar 2015 • Evangelos E. Papalexakis
A popular tool for unsupervised modelling and mining of multi-aspect data is tensor decomposition.