Search Results for author: Nicholas D. Sidiropoulos

Found 43 papers, 6 papers with code

Revisiting Deep Generalized Canonical Correlation Analysis

1 code implementation • 20 Dec 2023 • Paris A. Karakasis, Nicholas D. Sidiropoulos

Others overload the problem by also seeking to reveal what is not common among the views, i.e., the private components that are needed to fully reconstruct each view.

On High-dimensional and Low-rank Tensor Bandits

no code implementations • 6 May 2023 • Chengshuai Shi, Cong Shen, Nicholas D. Sidiropoulos

To address this limitation, this work studies a general tensor bandits model, where actions and system parameters are represented by tensors as opposed to vectors, and we particularly focus on the case where the unknown system tensor is low-rank.
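As a rough schematic of what such a model can look like (our own illustration; the dimensions, the rank-2 CPD form of the system tensor, and the noisy linear reward are assumptions, not the paper's notation), each action is itself a tensor and the expected reward is its inner product with the unknown low-rank system tensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown low-rank system tensor: a rank-2, 5x5x5 CPD model (illustrative only).
d, r = 5, 2
U, V, W = (rng.normal(size=(d, r)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', U, V, W)

def pull(action, noise_std=0.1):
    """Reward of a tensor-valued action: noisy inner product with the system tensor."""
    return np.tensordot(action, X, axes=3) + noise_std * rng.normal()

reward = pull(rng.normal(size=(d, d, d)))
```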

Recommendation Systems · Vocal Bursts Intensity Prediction

Multisubject Task-Related fMRI Data Processing via a Two-Stage Generalized Canonical Correlation Analysis

no code implementations • 16 Oct 2022 • Paris A. Karakasis, Athanasios P. Liavas, Nicholas D. Sidiropoulos, Panagiotis G. Simos, Efrosini Papadaki

Task-related fMRI data processing aims to determine which brain areas are activated when a specific task is performed and is usually based on the Blood Oxygen Level Dependent (BOLD) signal.

Minimizing low-rank models of high-order tensors: Hardness, span, tight relaxation, and applications

no code implementations • 16 Oct 2022 • Nicholas D. Sidiropoulos, Paris Karakasis, Aritra Konar

We show that this fundamental tensor problem is NP-hard for any tensor rank higher than one, and polynomial-time solvable in the rank-one case.
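To see why the rank-one case is tractable, note that an entry of a rank-one tensor is a product of one coordinate per factor, so an optimal entry uses either the minimum or the maximum of every factor. A minimal sketch of that reasoning, assuming the task is to find the smallest entry of a tensor given in factored form (our reading of the problem, not code from the paper):

```python
import numpy as np
from itertools import product

def min_entry_rank_one(factors):
    """Minimum entry of the rank-one tensor factors[0] x factors[1] x ... :
    each entry is a product of one coordinate per factor, so some optimal entry
    uses either the min or the max of every factor; check all 2^N combinations."""
    extremes = [(v.min(), v.max()) for v in factors]
    return min(np.prod(combo) for combo in product(*extremes))

a, b, c = np.array([0.5, -2.0]), np.array([1.0, 3.0]), np.array([-1.0, 4.0])
print(min_entry_rank_one([a, b, c]))   # -24.0 = (-2) * 3 * 4
```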

Graph Mining · Recommendation Systems +1

Learning Multivariate CDFs and Copulas using Tensor Factorization

no code implementations • 13 Oct 2022 • Magda Amiridi, Nicholas D. Sidiropoulos

Learning the multivariate distribution of data is a core challenge in statistics and machine learning.

Imputation · Uncertainty Quantification

Low-rank Characteristic Tensor Density Estimation Part II: Compression and Latent Density Estimation

1 code implementation • 20 Jun 2021 • Magda Amiridi, Nikos Kargas, Nicholas D. Sidiropoulos

Learning generative probabilistic models is a core problem in machine learning, which presents significant challenges due to the curse of dimensionality.

Anomaly Detection · Density Estimation +1

Probabilistic Simplex Component Analysis

no code implementations • 18 Mar 2021 • Ruiyuan Wu, Wing-Kin Ma, Yuening Li, Anthony Man-Cho So, Nicholas D. Sidiropoulos

PRISM uses a simple probabilistic model, namely, uniform simplex data distribution and additive Gaussian noise, and it carries out inference by maximum likelihood.
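A small sketch of the stated generative model, purely for illustration (the endmember matrix A, the dimensions, and the noise level are made up; the uniform draw over the probability simplex uses the fact that Dirichlet(1, ..., 1) is uniform on the simplex):

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, K = 50, 1000, 3                      # observation dim, #samples, #components (placeholders)
A = rng.random((M, K))                      # hypothetical mixing / endmember matrix
S = rng.dirichlet(np.ones(K), size=N).T     # columns drawn uniformly from the simplex
Y = A @ S + 0.01 * rng.normal(size=(M, N))  # observations with additive Gaussian noise
```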

Hyperspectral Unmixing · Variational Inference

STELAR: Spatio-temporal Tensor Factorization with Latent Epidemiological Regularization

no code implementations • 8 Dec 2020 • Nikos Kargas, Cheng Qian, Nicholas D. Sidiropoulos, Cao Xiao, Lucas M. Glass, Jimeng Sun

Accurate prediction of the transmission of epidemic diseases such as COVID-19 is crucial for implementing effective mitigation measures.

Attribute

GAGE: Geometry Preserving Attributed Graph Embeddings

no code implementations • 3 Nov 2020 • Charilaos I. Kanatsoulis, Nicholas D. Sidiropoulos

Various real-world networks include information about both node connectivity and certain node attributes, in the form of features or time-series data.

Attribute · Link Prediction +4

Information-theoretic Feature Selection via Tensor Decomposition and Submodularity

no code implementations • 30 Oct 2020 • Magda Amiridi, Nikos Kargas, Nicholas D. Sidiropoulos

By indirectly aiming to predict the latent variable of the naive Bayes model instead of the original target variable, it is possible to formulate the feature selection problem as the maximization of a monotone submodular function subject to a cardinality constraint, which can be tackled using a greedy algorithm that comes with performance guarantees.
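For context, a generic greedy routine for maximizing a monotone submodular set function under a cardinality constraint looks as follows; the classic result is that it achieves a (1 - 1/e) approximation guarantee. This is a textbook sketch with a placeholder objective f, not the paper's specific score:

```python
def greedy_max(f, ground_set, k):
    """Greedily pick k elements, each time adding the one with the largest
    marginal gain f(S + x) - f(S); for monotone submodular f this is within
    a (1 - 1/e) factor of the best size-k set."""
    S = set()
    for _ in range(k):
        best = max((x for x in ground_set if x not in S),
                   key=lambda x: f(frozenset(S | {x})) - f(frozenset(S)))
        S.add(best)
    return S
```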

Combinatorial Optimization · Feature Selection +1

PHASED: Phase-Aware Submodularity-Based Energy Disaggregation

no code implementations • 1 Oct 2020 • Faisal M. Almutairi, Aritra Konar, Ahmed S. Zamzam, Nicholas D. Sidiropoulos

Energy disaggregation is the task of discerning the energy consumption of individual appliances from aggregated measurements, which holds promise for understanding and reducing energy usage.

Mining Large Quasi-cliques with Quality Guarantees from Vertex Neighborhoods

no code implementations • 18 Aug 2020 • Aritra Konar, Nicholas D. Sidiropoulos

In this work, we formally establish that two recurring characteristics of real-world graphs, namely heavy-tailed degree distributions and large clustering coefficients, imply the existence of substantially large vertex neighborhoods with high edge-density.
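As a quick illustration of the quantity involved, the edge density of a vertex's neighborhood can be computed directly, and dense neighborhoods are natural candidates for large quasi-cliques. A generic sketch using networkx, not the paper's mining algorithm:

```python
import networkx as nx

def neighborhood_density(G, v):
    """Edge density of the subgraph induced by v's neighbors; values near 1
    mean the neighborhood is close to a clique."""
    nbrs = list(G.neighbors(v))
    return nx.density(G.subgraph(nbrs)) if len(nbrs) > 1 else 0.0

G = nx.karate_club_graph()
print(max(G.nodes, key=lambda v: neighborhood_density(G, v)))  # densest neighborhood seed
```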

Clustering · Graph Mining

GRATE: Granular Recovery of Aggregated Tensor Data by Example

no code implementations • 27 Mar 2020 • Ahmed S. Zamzam, Bo Yang, Nicholas D. Sidiropoulos

In this paper, we address the challenge of recovering an accurate breakdown of aggregated tensor data using disaggregation examples.

Total Energy

Generalized Canonical Correlation Analysis: A Subspace Intersection Approach

no code implementations • 25 Mar 2020 • Mikael Sørensen, Charilaos I. Kanatsoulis, Nicholas D. Sidiropoulos

It is shown that, from a linear algebra point of view, GCCA is tantamount to subspace intersection, and conditions under which the common subspace of the different views is identifiable are provided.
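A minimal sketch of the subspace-intersection viewpoint in the noiseless case (our own illustration, not the paper's algorithm): a unit vector lying in all M views' column spaces yields a singular value of sqrt(M), the largest possible, for the matrix formed by stacking orthonormal bases of those column spaces, so the common subspace can be read off an SVD.

```python
import numpy as np

def common_subspace(views, dim):
    """Noiseless sketch: stack orthonormal bases of each view's column space.
    Directions shared by all M views give singular value sqrt(M), so the
    common subspace is spanned by the leading left singular vectors."""
    bases = [np.linalg.qr(X)[0] for X in views]          # orthonormal basis per view
    U, s, _ = np.linalg.svd(np.hstack(bases), full_matrices=False)
    return U[:, :dim], s[:dim]                           # s[:dim] close to sqrt(len(views)) if identifiable
```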

PREMA: Principled Tensor Data Recovery from Multiple Aggregated Views

2 code implementations • 26 Oct 2019 • Faisal M. Almutairi, Charilaos I. Kanatsoulis, Nicholas D. Sidiropoulos

The goal of this paper is to reconstruct finer-scale data from multiple coarse views, aggregated over different (subsets of) dimensions.

REP: Predicting the Time-Course of Drug Sensitivity

no code implementations • 27 Jul 2019 • Cheng Qian, Amin Emad, Nicholas D. Sidiropoulos

Time-course gene expression data is a rich source of information that can be used to unravel these complex processes, identify biomarkers of drug sensitivity and predict the response to a drug.

Drug Response Prediction

Nonlinear System Identification via Tensor Completion

no code implementations • 13 Jun 2019 • Nikos Kargas, Nicholas D. Sidiropoulos

Deep neural networks are currently the most popular method for learning to mimic the input-output relationship of a general nonlinear system, as they have proven to be very effective in approximating complex highly nonlinear functions.

Machine Learning in the Air

no code implementations • 28 Apr 2019 • Deniz Gunduz, Paul de Kerret, Nicholas D. Sidiropoulos, David Gesbert, Chandra Murthy, Mihaela van der Schaar

Thanks to the recent advances in processing speed and data acquisition and storage, machine learning (ML) is penetrating every facet of our lives, and transforming research in many areas in a fundamental manner.

BIG-bench Machine Learning

Learning Mixtures of Smooth Product Distributions: Identifiability and Algorithm

no code implementations • 2 Apr 2019 • Nikos Kargas, Nicholas D. Sidiropoulos

We study the problem of learning a mixture model of non-parametric product distributions.

Energy Storage Management via Deep Q-Networks

no code implementations • 26 Mar 2019 • Ahmed S. Zamzam, Bo Yang, Nicholas D. Sidiropoulos

Energy storage devices represent environmentally friendly candidates to cope with volatile renewable energy generation.

Management · Reinforcement Learning (RL)

Learning Nonlinear Mixtures: Identifiability and Algorithm

no code implementations • 6 Jan 2019 • Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Kejun Huang

Linear mixture models have proven very useful in a plethora of applications, e.g., topic modeling, clustering, and source separation.

Clustering

Structured SUMCOR Multiview Canonical Correlation Analysis for Large-Scale Data

no code implementations • 24 Apr 2018 • Charilaos I. Kanatsoulis, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong

In this work, we propose a new computational framework for large-scale SUMCOR GCCA that can easily incorporate a suite of structural regularizers which are frequently used in data analytics.

Hyperspectral Super-Resolution: A Coupled Tensor Factorization Approach

no code implementations • 15 Apr 2018 • Charilaos I. Kanatsoulis, Xiao Fu, Nicholas D. Sidiropoulos, Wing-Kin Ma

Third, the majority of the existing methods assume that there are known (or easily estimated) degradation operators applied to the SRI to form the corresponding HSI and MSI, which is hardly the case in practice.

Super-Resolution

Nonnegative Matrix Factorization for Signal and Data Analytics: Identifiability, Algorithms, and Applications

no code implementations • 3 Mar 2018 • Xiao Fu, Kejun Huang, Nicholas D. Sidiropoulos, Wing-Kin Ma

Perhaps a bit surprisingly, the understanding of its model identifiability, the major reason behind the interpretability in many applications such as topic mining and hyperspectral imaging, had been rather limited until recent years.

Learning Hidden Markov Models from Pairwise Co-occurrences with Application to Topic Modeling

no code implementations • ICML 2018 • Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

We present a new algorithm for identifying the transition and emission probabilities of a hidden Markov model (HMM) from the emitted data.

Tensors, Learning, and 'Kolmogorov Extension' for Finite-alphabet Random Vectors

no code implementations • 1 Dec 2017 • Nikos Kargas, Nicholas D. Sidiropoulos, Xiao Fu

This paper shows, perhaps surprisingly, that if the joint PMF of any three variables can be estimated, then the joint PMF of all the variables can be provably recovered under relatively mild conditions.

Movie Recommendation

Kullback-Leibler Principal Component for Tensors is not NP-hard

no code implementations • 21 Nov 2017 • Kejun Huang, Nicholas D. Sidiropoulos

We study the problem of nonnegative rank-one approximation of a nonnegative tensor, and show that the globally optimal solution that minimizes the generalized Kullback-Leibler divergence can be efficiently obtained, i.e., it is not NP-hard.
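If we read the result correctly, the optimum has a simple closed form: the outer product of the tensor's mode-wise marginal sums, rescaled by the total sum. A hedged sketch of that formula (this closed form is our reading of the result; verify against the paper before relying on it):

```python
import numpy as np

def kl_rank_one_approx(X):
    """Candidate closed-form rank-one approximation of a nonnegative tensor X
    under the generalized KL divergence: outer product of the mode-wise
    marginal sums, scaled so the approximation has the same total mass as X."""
    total = X.sum()
    N = X.ndim
    marginals = [X.sum(axis=tuple(m for m in range(N) if m != n)) for n in range(N)]
    Y = marginals[0]
    for v in marginals[1:]:
        Y = np.multiply.outer(Y, v)   # build the rank-one outer product
    return Y / total ** (N - 1)
```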

On Convergence of Epanechnikov Mean Shift

no code implementations • 20 Nov 2017 • Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

However, since the procedure involves non-smooth kernel density functions, the convergence behavior of Epanechnikov mean shift lacks theoretical support as of this writing: most of the existing analyses are based on smooth functions and thus cannot be applied to Epanechnikov mean shift.
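For readers unfamiliar with the procedure: with the Epanechnikov profile, each mean shift step reduces to averaging the data points that fall within the bandwidth of the current iterate. A minimal single-trajectory sketch (illustrative only, not the paper's analysis):

```python
import numpy as np

def epanechnikov_mean_shift(X, x, bandwidth, iters=100):
    """Move x toward a mode of the data: each step replaces x with the mean
    of the points of X (rows) within `bandwidth` of the current iterate."""
    for _ in range(iters):
        mask = np.linalg.norm(X - x, axis=1) <= bandwidth
        if not mask.any():
            break
        new_x = X[mask].mean(axis=0)
        if np.allclose(new_x, x):
            break
        x = new_x
    return x
```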

Clustering

On Identifiability of Nonnegative Matrix Factorization

no code implementations • 2 Sep 2017 • Xiao Fu, Kejun Huang, Nicholas D. Sidiropoulos

In this letter, we propose a new identification criterion that guarantees the recovery of the low-rank latent factors in the nonnegative matrix factorization (NMF) model, under mild conditions.

Completing a joint PMF from projections: a low-rank coupled tensor factorization approach

no code implementations • 16 Feb 2017 • Nikos Kargas, Nicholas D. Sidiropoulos

There has recently been considerable interest in completing a low-rank matrix or tensor given only a small fraction (or few linear combinations) of its entries.

Recommendation Systems

Anchor-Free Correlated Topic Modeling: Identifiability and Algorithm

no code implementations • NeurIPS 2016 • Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

In topic modeling, many algorithms that guarantee identifiability of the topics have been developed under the premise that there exist anchor words, i.e., words that only appear (with positive probability) in one topic.

Clustering

Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering

10 code implementations • ICML 2017 • Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong

To recover the 'clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN).
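A highly simplified schematic of such a joint objective, assuming an autoencoder for DR and a K-means-style penalty on the latent codes (the layer sizes, K=5, and the weight lam are placeholders; the paper's actual alternating optimization scheme is not reproduced here):

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
dec = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 784))
centroids = torch.randn(5, 10, requires_grad=True)        # K=5 cluster centers (illustrative)

def joint_loss(x, lam=0.1):
    z = enc(x)
    recon = ((dec(z) - x) ** 2).mean()                     # reconstruction term (DR)
    assign = torch.cdist(z, centroids).argmin(dim=1)       # hard cluster assignment
    kmeans = ((z - centroids[assign]) ** 2).sum(dim=1).mean()  # pull codes to centroids
    return recon + lam * kmeans
```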

Clustering · Dimensionality Reduction

Tensor Decomposition for Signal Processing and Machine Learning

no code implementations • 6 Jul 2016 • Nicholas D. Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos E. Papalexakis, Christos Faloutsos

Tensors, or multi-way arrays, are functions of three or more indices (i, j, k, ...), similar to matrices (two-way arrays), which are functions of two indices (r, c) for (row, column).
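A two-line illustration of this index-function view, nothing more:

```python
import numpy as np

M = np.arange(6).reshape(2, 3)        # two-way array: M[r, c]
T = np.arange(24).reshape(2, 3, 4)    # three-way array (tensor): T[i, j, k]
print(M[1, 2], T[1, 2, 3])            # each value is addressed by its full index tuple
```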

BIG-bench Machine Learning · Collaborative Filtering +1

Scalable and Flexible Multiview MAX-VAR Canonical Correlation Analysis

no code implementations • 31 May 2016 • Xiao Fu, Kejun Huang, Mingyi Hong, Nicholas D. Sidiropoulos, Anthony Man-Cho So

Generalized canonical correlation analysis (GCCA) aims at finding latent low-dimensional common structure from multiple views (feature vectors in different domains) of the same entities.

Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering

no code implementations • 21 May 2016 • Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos

Dimensionality reduction is usually performed in a preprocessing stage that is separate from subsequent data analysis, such as clustering or classification.

Clustering · Dimensionality Reduction

Joint Tensor Factorization and Outlying Slab Suppression with Applications

no code implementations • 16 Jul 2015 • Xiao Fu, Kejun Huang, Wing-Kin Ma, Nicholas D. Sidiropoulos, Rasmus Bro

Convergence of the proposed algorithm is also easy to analyze under the framework of alternating optimization and its variants.

Speech Separation

A Flexible and Efficient Algorithmic Framework for Constrained Matrix and Tensor Factorization

no code implementations • 13 Jun 2015 • Kejun Huang, Nicholas D. Sidiropoulos, Athanasios P. Liavas

We propose a general algorithmic framework for constrained matrix and tensor factorization, which is widely used in signal processing and machine learning.

Dictionary Learning
