
1 code implementation • 20 Jun 2021 • Magda Amiridi, Nikos Kargas, Nicholas D. Sidiropoulos

Learning generative probabilistic models is a core problem in machine learning, which presents significant challenges due to the curse of dimensionality.

no code implementations • 18 Mar 2021 • Ruiyuan Wu, Wing-Kin Ma, Yuening Li, Anthony Man-Cho So, Nicholas D. Sidiropoulos

PRISM uses a simple probabilistic model, namely a uniform simplex data distribution with additive Gaussian noise, and carries out inference by maximum likelihood.

no code implementations • 20 Dec 2020 • Faisal M. Almutairi, Yunlong Wang, Dong Wang, Emily Zhao, Nicholas D. Sidiropoulos

In many applications, the categories of items exhibit a hierarchical tree structure.

no code implementations • 8 Dec 2020 • Nikos Kargas, Cheng Qian, Nicholas D. Sidiropoulos, Cao Xiao, Lucas M. Glass, Jimeng Sun

Accurate prediction of the transmission of epidemic diseases such as COVID-19 is crucial for implementing effective mitigation measures.

no code implementations • 3 Nov 2020 • Charilaos I. Kanatsoulis, Nicholas D. Sidiropoulos

Node representation learning is the task of extracting concise and informative feature embeddings of certain entities that are connected in a network.

no code implementations • 30 Oct 2020 • Magda Amiridi, Nikos Kargas, Nicholas D. Sidiropoulos

By indirectly aiming to predict the latent variable of the naive Bayes model instead of the original target variable, it is possible to formulate the feature selection problem as maximization of a monotone submodular function subject to a cardinality constraint - which can be tackled using a greedy algorithm that comes with performance guarantees.
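The greedy step described above is the standard routine for maximizing a monotone submodular function under a cardinality constraint, with the classic (1 - 1/e) approximation guarantee. A minimal sketch in Python; the coverage-style objective `f` below is a hypothetical stand-in for the paper's naive-Bayes-based criterion, used only to exercise the greedy loop:

```python
def greedy_submodular(ground_set, f, k):
    """Greedily maximize a monotone submodular set function f
    subject to the cardinality constraint |S| <= k."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = f(S | {e}) - f(S)  # marginal gain of adding e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:              # no element improves the objective
            break
        S.add(best)
    return S

# Toy monotone submodular objective: coverage of a set of features.
covers = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0

selected = greedy_submodular(set(covers), f, k=2)
```

Each iteration picks the element with the largest marginal gain, so the budget of k picks is spent where it helps the objective most.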

no code implementations • 22 Oct 2020 • Charilaos I. Kanatsoulis, Nicholas D. Sidiropoulos

Knowledge graphs (KGs) are powerful tools that codify relational behaviour between entities in knowledge bases.

no code implementations • 1 Oct 2020 • Faisal M. Almutairi, Aritra Konar, Ahmed S. Zamzam, Nicholas D. Sidiropoulos

Energy disaggregation is the task of discerning the energy consumption of individual appliances from aggregated measurements, which holds promise for understanding and reducing energy usage.

no code implementations • 27 Aug 2020 • Magda Amiridi, Nikos Kargas, Nicholas D. Sidiropoulos

Any multivariate density can be represented by its characteristic function, via the Fourier transform.
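The empirical counterpart of that representation is straightforward to compute from samples. A minimal numpy sketch (illustrative only; the paper's low-rank modeling of the characteristic function is not shown here):

```python
import numpy as np

def empirical_cf(X, T):
    """Empirical characteristic function of samples X (n x d),
    evaluated at frequency vectors T (m x d):
        phi(t) = E[exp(i t.x)] ~ (1/n) sum_j exp(i t.x_j)."""
    return np.exp(1j * T @ X.T).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))          # samples from a 2-D standard Gaussian
T = np.array([[0.0, 0.0], [1.0, 0.0]])  # evaluation points
phi = empirical_cf(X, T)
```

At t = 0 the characteristic function is exactly 1, and for a standard Gaussian phi(t) is approximately exp(-||t||^2 / 2), which the sample estimate recovers up to Monte Carlo error.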

no code implementations • 18 Aug 2020 • Aritra Konar, Nicholas D. Sidiropoulos

In this work, we formally establish that two recurring characteristics of real-world graphs, namely heavy-tailed degree distributions and large clustering coefficients, imply the existence of substantially large vertex neighborhoods with high edge-density.

no code implementations • 27 Mar 2020 • Ahmed S. Zamzam, Bo Yang, Nicholas D. Sidiropoulos

In this paper, we address the challenge of recovering an accurate breakdown of aggregated tensor data using disaggregation examples.

no code implementations • 25 Mar 2020 • Mikael Sørensen, Charilaos I. Kanatsoulis, Nicholas D. Sidiropoulos

It is shown that from a linear algebra point of view, GCCA is tantamount to subspace intersection; and conditions under which the common subspace of the different views is identifiable are provided.

2 code implementations • 26 Oct 2019 • Faisal M. Almutairi, Charilaos I. Kanatsoulis, Nicholas D. Sidiropoulos

The goal of this paper is to reconstruct finer-scale data from multiple coarse views, aggregated over different (subsets of) dimensions.

no code implementations • 27 Jul 2019 • Cheng Qian, Amin Emad, Nicholas D. Sidiropoulos

Time-course gene expression data is a rich source of information that can be used to unravel these complex processes, identify biomarkers of drug sensitivity and predict the response to a drug.

no code implementations • 13 Jun 2019 • Nikos Kargas, Nicholas D. Sidiropoulos

Deep neural networks are currently the most popular method for learning to mimic the input-output relationship of a general nonlinear system, as they have proven to be very effective in approximating complex highly nonlinear functions.

no code implementations • 28 Apr 2019 • Deniz Gunduz, Paul de Kerret, Nicholas D. Sidiropoulos, David Gesbert, Chandra Murthy, Mihaela van der Schaar

Thanks to the recent advances in processing speed and data acquisition and storage, machine learning (ML) is penetrating every facet of our lives, and transforming research in many areas in a fundamental manner.

no code implementations • 2 Apr 2019 • Nikos Kargas, Nicholas D. Sidiropoulos

We study the problem of learning a mixture model of non-parametric product distributions.

no code implementations • 26 Mar 2019 • Ahmed S. Zamzam, Bo Yang, Nicholas D. Sidiropoulos

Energy storage devices represent environmentally friendly candidates to cope with volatile renewable energy generation.

no code implementations • 6 Jan 2019 • Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Kejun Huang

Linear mixture models have proven very useful in a plethora of applications, e.g., topic modeling, clustering, and source separation.

no code implementations • 29 Oct 2018 • Cheng Qian, Nicholas D. Sidiropoulos, Magda Amiridi, Amin Emad

Predicting the response of cancer cells to drugs is an important problem in pharmacogenomics.

1 code implementation • 22 Sep 2018 • Vassilis N. Ioannidis, Ahmed S. Zamzam, Georgios B. Giannakis, Nicholas D. Sidiropoulos

The resulting community detection approach is successful even when some links in the graphs are missing.

no code implementations • 24 Apr 2018 • Charilaos I. Kanatsoulis, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong

In this work, we propose a new computational framework for large-scale SUMCOR GCCA that can easily incorporate a suite of structural regularizers which are frequently used in data analytics.

no code implementations • 15 Apr 2018 • Charilaos I. Kanatsoulis, Xiao Fu, Nicholas D. Sidiropoulos, Wing-Kin Ma

Third, the majority of the existing methods assume that there are known (or easily estimated) degradation operators applied to the SRI to form the corresponding HSI and MSI, which is hardly the case in practice.

no code implementations • 3 Mar 2018 • Xiao Fu, Kejun Huang, Nicholas D. Sidiropoulos, Wing-Kin Ma

Perhaps a bit surprisingly, the understanding of its model identifiability (the major reason behind its interpretability in many applications, such as topic mining and hyperspectral imaging) had been rather limited until recent years.

no code implementations • ICML 2018 • Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

We present a new algorithm for identifying the transition and emission probabilities of a hidden Markov model (HMM) from the emitted data.
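A common route to HMM identification (not necessarily this paper's exact algorithm) works from second-order output statistics: under stationarity, the pairwise emission probabilities factor as P(y_t = i, y_{t+1} = j) = (B^T diag(pi) A B)_{ij}, where A is the transition matrix, B the emission matrix, and pi the stationary distribution. A numpy sketch with illustrative, hypothetical parameter values:

```python
import numpy as np

# Hypothetical 2-state HMM with 3 emission symbols (illustrative values).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # transition matrix, rows sum to 1
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])       # emission matrix, rows sum to 1

# Stationary distribution pi: left eigenvector of A with eigenvalue 1.
w, V = np.linalg.eig(A.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Pairwise emission statistics under stationarity:
#   P(y_t = i, y_{t+1} = j) = (B^T diag(pi) A B)_{ij}
P_pair = B.T @ np.diag(pi) @ A @ B
```

The matrix P_pair is a valid joint PMF over consecutive emissions; identification methods then recover A and B from such observable statistics.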

no code implementations • 1 Dec 2017 • Nikos Kargas, Nicholas D. Sidiropoulos, Xiao Fu

This paper shows, perhaps surprisingly, that if the joint PMF of any three variables can be estimated, then the joint PMF of all the variables can be provably recovered under relatively mild conditions.
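The underlying model is a naive-Bayes (rank-R CP) representation of the joint PMF, under which every three-way marginal is itself a low-rank CP tensor built from the same factors. A small numpy sketch of that construction, with arbitrary random parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
R, dims = 3, [4, 4, 4, 4]   # latent rank and variable alphabet sizes

# Naive-Bayes (rank-R CP) model: P(x1..x4) = sum_r lam_r prod_n P(xn | r)
lam = rng.dirichlet(np.ones(R))
factors = [rng.dirichlet(np.ones(d), size=R).T for d in dims]  # d x R, columns are PMFs

# Full joint PMF of four variables as a 4-way tensor via the CP construction.
P = np.einsum('r,ir,jr,kr,lr->ijkl', lam, *factors)

# A three-way marginal, e.g. over (x1, x2, x3), is a rank-<=R CP tensor
# built from the same factors -- the statistics the recovery result uses.
P123 = P.sum(axis=3)
P123_cp = np.einsum('r,ir,jr,kr->ijk', lam, *factors[:3])
```

Because all three-way marginals share the factors of the full model, estimating them pins down the model (and hence the full joint PMF) under the conditions in the paper.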

no code implementations • 21 Nov 2017 • Kejun Huang, Nicholas D. Sidiropoulos

We study the problem of nonnegative rank-one approximation of a nonnegative tensor, and show that the globally optimal solution that minimizes the generalized Kullback-Leibler divergence can be efficiently obtained, i.e., it is not NP-hard.
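For the generalized KL loss the optimum has a closed form: the outer product of the tensor's mode marginals, rescaled so the total mass matches (the multiway analogue of the classic independence-model result for matrices). A numpy sketch for a strictly positive 3-way tensor:

```python
import numpy as np

def gkl(X, Y):
    """Generalized Kullback-Leibler divergence D(X || Y)."""
    return (X * np.log(X / Y) - X + Y).sum()

rng = np.random.default_rng(2)
X = rng.random((3, 4, 5)) + 0.1            # strictly positive 3-way tensor

# Optimal rank-one approximation under generalized KL: outer product of
# the mode marginals, scaled so the total mass matches that of X.
s = X.sum()
m1, m2, m3 = X.sum(axis=(1, 2)), X.sum(axis=(0, 2)), X.sum(axis=(0, 1))
Y = np.einsum('i,j,k->ijk', m1, m2, m3) / s**2

# Sanity check against an arbitrary perturbed rank-one competitor.
Y_alt = np.einsum('i,j,k->ijk', m1 * rng.uniform(0.5, 1.5, 3), m2, m3) / s**2
d_opt, d_alt = gkl(X, Y), gkl(X, Y_alt)
```

The marginals of the approximation match those of X exactly, and no other nonnegative rank-one tensor achieves a smaller generalized KL divergence.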

no code implementations • 20 Nov 2017 • Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

However, since the procedure involves non-smooth kernel density functions, the convergence behavior of Epanechnikov mean shift lacks theoretical support as of this writing: most of the existing analyses are based on smooth functions and thus cannot be applied to Epanechnikov mean shift.
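The procedure itself is simple: because the Epanechnikov profile is linear, the mean-shift update reduces to averaging the points that fall inside the bandwidth window. A minimal numpy sketch on synthetic two-blob data (illustrative parameters, not from the paper):

```python
import numpy as np

def epanechnikov_mean_shift(x, data, h, iters=50):
    """Mean shift with the Epanechnikov kernel: the update is simply
    the mean of the data points within radius h of the current point."""
    for _ in range(iters):
        mask = np.linalg.norm(data - x, axis=1) < h
        new_x = data[mask].mean(axis=0)
        if np.allclose(new_x, x):   # fixed point reached: a density mode
            break
        x = new_x
    return x

rng = np.random.default_rng(3)
# Two well-separated blobs in 2-D.
data = np.vstack([rng.normal(0, 0.3, size=(100, 2)),
                  rng.normal(5, 0.3, size=(100, 2))])
mode = epanechnikov_mean_shift(data[0].copy(), data, h=1.0)
```

Started from a point in the first blob, the iteration climbs to that blob's density mode; the non-smoothness arises because the set of in-window points changes discontinuously as x moves.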

no code implementations • 2 Sep 2017 • Xiao Fu, Kejun Huang, Nicholas D. Sidiropoulos

In this letter, we propose a new identification criterion that guarantees the recovery of the low-rank latent factors in the nonnegative matrix factorization (NMF) model, under mild conditions.

no code implementations • 16 Feb 2017 • Nikos Kargas, Nicholas D. Sidiropoulos

There has recently been considerable interest in completing a low-rank matrix or tensor given only a small fraction (or few linear combinations) of its entries.
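For intuition, here is a minimal alternating-least-squares sketch that completes a rank-one matrix from a subset of its entries (a generic baseline, not the method of this paper):

```python
import numpy as np

rng = np.random.default_rng(4)
u_true, v_true = rng.random(6) + 0.5, rng.random(8) + 0.5
M = np.outer(u_true, v_true)               # ground-truth rank-1 matrix

mask = np.ones(M.shape, dtype=bool)        # True where an entry is observed
for i, j in [(0, 1), (2, 3), (4, 5), (1, 7), (3, 0), (5, 2)]:
    mask[i, j] = False                     # hide a handful of entries

# Alternating least squares over the observed entries only:
#   u_i <- sum_{j obs} M_ij v_j / sum_{j obs} v_j^2   (and symmetrically for v)
u, v = np.ones(6), np.ones(8)
for _ in range(100):
    u = (mask * M) @ v / (mask * v**2).sum(axis=1)
    v = (mask * M).T @ u / (mask.T * u**2).sum(axis=1)
M_hat = np.outer(u, v)                     # hidden entries are now filled in
```

With enough observed entries per row and column, the low-rank structure pins down the missing values exactly (up to the usual scaling ambiguity between u and v, which cancels in the product).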

no code implementations • NeurIPS 2016 • Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

In topic modeling, many algorithms that guarantee identifiability of the topics have been developed under the premise that there exist anchor words, i.e., words that only appear (with positive probability) in one topic.

9 code implementations • ICML 2017 • Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong

To recover the `clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN).
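The joint objective couples a reconstruction term with a K-means term on the latent representations. A heavily simplified numpy sketch using a linear, tied-weight "encoder" in place of the DNN (all parameter values here are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
# Two well-separated groups of points in 10-D.
X = np.vstack([rng.normal(-2, 0.5, (50, 10)), rng.normal(2, 0.5, (50, 10))])

W = rng.normal(size=(10, 2)) / np.sqrt(10)   # linear "encoder" weights
M = rng.normal(size=(2, 2))                  # K-means centroids (K=2, latent dim 2)

def joint_loss(X, W, M, lam=0.1):
    Z = X @ W                                 # dimensionality reduction
    X_rec = Z @ W.T                           # tied-weight linear decoder
    assign = np.argmin(((Z[:, None, :] - M[None]) ** 2).sum(-1), axis=1)
    rec = ((X - X_rec) ** 2).sum()            # reconstruction term
    km = ((Z - M[assign]) ** 2).sum()         # clustering-friendliness term
    return rec + lam * km, assign

loss, assign = joint_loss(X, W, M)
```

Training alternates between updating the network (here, W), the assignments, and the centroids, so the learned latent space is shaped to be easy to cluster rather than only easy to reconstruct.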

no code implementations • 15 Aug 2016 • Xiao Fu, Kejun Huang, Bo Yang, Wing-Kin Ma, Nicholas D. Sidiropoulos

This paper considers volume minimization (VolMin)-based structured matrix factorization (SMF).

no code implementations • 6 Jul 2016 • Nicholas D. Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos E. Papalexakis, Christos Faloutsos

Tensors or multi-way arrays are functions of three or more indices (i, j, k, ...), similar to matrices (two-way arrays), which are functions of two indices (r, c) for (row, column).

no code implementations • 31 May 2016 • Xiao Fu, Kejun Huang, Mingyi Hong, Nicholas D. Sidiropoulos, Anthony Man-Cho So

Generalized canonical correlation analysis (GCCA) aims at finding latent low-dimensional common structure from multiple views (feature vectors in different domains) of the same entities.
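In the classic MAX-VAR formulation, GCCA has a closed-form solution via the leading eigenvectors of the sum of the views' projection matrices (a small-scale baseline; this paper's contribution concerns solving it efficiently at large scale). A numpy sketch on synthetic views sharing a common subspace:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 30, 2                              # entities, common-subspace dimension
S = rng.normal(size=(n, k))               # shared latent structure
views = [S @ rng.normal(size=(k, d)) + 0.01 * rng.normal(size=(n, d))
         for d in (5, 6, 7)]              # three noisy views of the same entities

# MAX-VAR GCCA: find orthonormal G maximizing sum_i ||P_i G||_F^2,
# where P_i projects onto the column space of view X_i.
C = np.zeros((n, n))
for X in views:
    Q, _ = np.linalg.qr(X)                # orthonormal basis of range(X)
    C += Q @ Q.T                          # add the projection matrix P_i
eigvals, eigvecs = np.linalg.eigh(C)
G = eigvecs[:, -k:]                       # common-subspace estimate
```

Directions lying in every view's range accumulate an eigenvalue near the number of views (here, near 3), which is how the shared structure stands out from view-specific noise.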

no code implementations • 21 May 2016 • Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos

Dimensionality reduction is usually performed in a preprocessing stage that is separate from subsequent data analysis, such as clustering or classification.

no code implementations • 16 Jul 2015 • Xiao Fu, Kejun Huang, Wing-Kin Ma, Nicholas D. Sidiropoulos, Rasmus Bro

Convergence of the proposed algorithm is also easy to analyze under the framework of alternating optimization and its variants.

no code implementations • 13 Jun 2015 • Kejun Huang, Nicholas D. Sidiropoulos, Athanasios P. Liavas

We propose a general algorithmic framework for constrained matrix and tensor factorization, which is widely used in signal processing and machine learning.
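As a concrete instance of constrained factorization, here is a minimal NMF loop using the classic Lee-Seung multiplicative updates, a simple monotone baseline standing in for the paper's more general framework (which handles many other constraints and loss functions):

```python
import numpy as np

rng = np.random.default_rng(7)
W_true, H_true = rng.random((20, 3)), rng.random((3, 15))
X = W_true @ H_true                        # synthetic nonnegative data matrix

W, H = rng.random((20, 3)) + 0.1, rng.random((3, 15)) + 0.1
errs = []
for _ in range(200):
    # Multiplicative updates for Frobenius-norm NMF: each update keeps
    # the factors nonnegative and never increases the fitting error.
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    errs.append(np.linalg.norm(X - W @ H))
```

Alternating frameworks like the paper's swap in different constrained least-squares subproblem solvers per factor while keeping the same outer block-coordinate structure.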

Papers With Code is a free resource with all data licensed under CC-BY-SA.