Search Results for author: Kejun Huang

Found 22 papers, 0 papers with code

Adaptive Learning for the Resource-Constrained Classification Problem

no code implementations19 Jul 2022 Danit Shifman Abukasis, Izack Cohen, Xiaochen Xian, Kejun Huang, Gonen Singer

Resource-constrained classification tasks are common in real-world applications such as allocating tests for disease diagnosis, hiring decisions when filling a limited number of positions, and defect detection in manufacturing settings under a limited inspection budget.

Classification, Defect Detection

A Novel Convergence Analysis for the Stochastic Proximal Point Algorithm

no code implementations29 Sep 2021 Aysegul Bumin, Kejun Huang

In this paper, we study the stochastic proximal point algorithm (SPPA) for general empirical risk minimization (ERM) problems as well as deep learning problems.
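
A minimal sketch of a stochastic proximal point update, assuming a single-sample squared loss so the proximal subproblem has a closed form (illustrative only, not the paper's implementation):

    import numpy as np

    def sppa_least_squares(A, b, eta=0.5, epochs=5, seed=0):
        """Stochastic proximal point iterations for the losses 0.5*(a_i @ x - b_i)**2.

        Each step solves min_x 0.5*(a_i @ x - b_i)**2 + ||x - x_k||**2 / (2*eta),
        which has the closed-form update below for this particular loss.
        """
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                a, bi = A[i], b[i]
                resid = (a @ x - bi) / (1.0 + eta * (a @ a))
                x = x - eta * resid * a  # exact proximal point step
        return x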

Stochastic Proximal Point Algorithm for Large-scale Nonconvex Optimization: Convergence, Implementation, and Application to Neural Networks

no code implementations1 Jan 2021 Aysegul Bumin, Kejun Huang

SPPA has been shown to converge faster and more stably than the celebrated stochastic gradient descent (SGD) algorithm and its many variants for convex problems.

Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems

no code implementations NeurIPS 2020 Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong

To the best of our knowledge, this is the first time that first-order algorithms with polynomial per-iteration complexity and a global sublinear convergence rate have been designed to find SOSPs (almost surely) for the important class of non-convex problems with linear constraints.

SNAP: Finding Approximate Second-Order Stationary Solutions Efficiently for Non-convex Linearly Constrained Problems

no code implementations9 Jul 2019 Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong

This paper proposes low-complexity algorithms for finding approximate second-order stationary points (SOSPs) of problems with smooth non-convex objective and linear constraints.

Block-Randomized Stochastic Proximal Gradient for Low-Rank Tensor Factorization

no code implementations16 Jan 2019 Xiao Fu, Shahana Ibrahim, Hoi-To Wai, Cheng Gao, Kejun Huang

In this work, we propose a stochastic optimization framework for large-scale CPD with constraints/regularizations.
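
A simplified sketch of the block-randomized idea, assuming a dense 3-way tensor, squared loss, and a nonnegativity projection as the proximal step (the paper's fiber-sampling details and step-size rules are not reproduced):

    import numpy as np

    def brsp_cpd(X, R, step=0.1, iters=500, batch=64, seed=0):
        """Block-randomized stochastic proximal gradient sketch for nonnegative CPD.

        Each iteration samples one mode (block) and a batch of fibers of that
        mode's unfolding, takes a stochastic gradient step on that factor, and
        applies a nonnegativity projection as the proximal operator.
        """
        rng = np.random.default_rng(seed)
        dims = X.shape
        factors = [np.abs(rng.standard_normal((d, R))) for d in dims]
        unfoldings = [np.moveaxis(X, n, 0).reshape(dims[n], -1) for n in range(3)]
        for _ in range(iters):
            n = int(rng.integers(3))  # pick a random mode (block)
            B, C = [factors[m] for m in range(3) if m != n]
            Z = (B[:, None, :] * C[None, :, :]).reshape(-1, R)  # Khatri-Rao, matching the unfolding order
            idx = rng.choice(Z.shape[0], size=min(batch, Z.shape[0]), replace=False)
            Zs, Xs = Z[idx], unfoldings[n][:, idx]
            G = (factors[n] @ Zs.T - Xs) @ Zs / len(idx)  # sampled gradient
            factors[n] = np.maximum(factors[n] - step * G, 0.0)  # prox = projection
        return factors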

Stochastic Optimization

Learning Nonlinear Mixtures: Identifiability and Algorithm

no code implementations6 Jan 2019 Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Kejun Huang

Linear mixture models have proven very useful in a plethora of applications, e.g., topic modeling, clustering, and source separation.

Clustering

Nonnegative Matrix Factorization for Signal and Data Analytics: Identifiability, Algorithms, and Applications

no code implementations3 Mar 2018 Xiao Fu, Kejun Huang, Nicholas D. Sidiropoulos, Wing-Kin Ma

Perhaps a bit surprisingly, the understanding of its model identifiability, the major reason behind the interpretability in many applications such as topic mining and hyperspectral imaging, had been rather limited until recent years.

Learning Hidden Markov Models from Pairwise Co-occurrences with Application to Topic Modeling

no code implementations ICML 2018 Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

We present a new algorithm for identifying the transition and emission probabilities of a hidden Markov model (HMM) from the emitted data.
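
For intuition, a sketch of the second-order statistic the method builds on, assuming a discrete observation sequence y with symbols 0..num_symbols-1 (the identification step that factorizes this matrix is not shown):

    import numpy as np

    def pairwise_cooccurrence(y, num_symbols):
        """Empirical co-occurrence matrix of consecutive emissions.

        Omega[i, j] estimates P(y_t = i, y_{t+1} = j); for a stationary HMM this
        equals M @ Theta @ M.T, where M is the emission matrix and Theta is the
        joint distribution of consecutive hidden states, which is the quantity
        one can factorize to recover M and the transition probabilities.
        """
        omega = np.zeros((num_symbols, num_symbols))
        for a, b in zip(y[:-1], y[1:]):
            omega[a, b] += 1.0
        return omega / omega.sum()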

Kullback-Leibler Principal Component for Tensors is not NP-hard

no code implementations21 Nov 2017 Kejun Huang, Nicholas D. Sidiropoulos

We study the problem of nonnegative rank-one approximation of a nonnegative tensor, and show that the globally optimal solution minimizing the generalized Kullback-Leibler divergence can be obtained efficiently, i.e., the problem is not NP-hard.
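
A small illustrative sketch, assuming the globally optimal rank-one approximation under generalized KL divergence is the outer product of the mode-wise marginal sums normalized by the total sum (the tensor analogue of the matrix independence model):

    import numpy as np

    def kl_rank_one(X):
        """Closed-form rank-one approximation under generalized KL (assumed form).

        Generalizes the matrix case Y[i, j] = row_sum[i] * col_sum[j] / total_sum.
        """
        total = X.sum()
        marginals = [X.sum(axis=tuple(m for m in range(X.ndim) if m != n))
                     for n in range(X.ndim)]
        Y = marginals[0]
        for s in marginals[1:]:
            Y = np.multiply.outer(Y, s)  # build the rank-one outer product
        return Y / total ** (X.ndim - 1)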

On Convergence of Epanechnikov Mean Shift

no code implementations20 Nov 2017 Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

However, since the procedure involves non-smooth kernel density functions, the convergence behavior of Epanechnikov mean shift lacks theoretical support as of this writing: most existing analyses assume smooth functions and thus cannot be applied to Epanechnikov mean shift.
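
For reference, a toy implementation of the procedure being analyzed; with the Epanechnikov kernel, each mean shift update reduces to averaging the data points that fall inside the bandwidth window:

    import numpy as np

    def epanechnikov_mean_shift(X, bandwidth=1.0, iters=50, tol=1e-6):
        """Mean shift with the Epanechnikov kernel (each row of X is a point)."""
        modes = X.copy()
        for _ in range(iters):
            moved = 0.0
            for i, m in enumerate(modes):
                mask = np.sum((X - m) ** 2, axis=1) <= bandwidth ** 2
                if not mask.any():
                    continue
                new_m = X[mask].mean(axis=0)  # flat-kernel average inside the window
                moved = max(moved, np.linalg.norm(new_m - m))
                modes[i] = new_m
            if moved < tol:
                break
        return modes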

Clustering

On Identifiability of Nonnegative Matrix Factorization

no code implementations2 Sep 2017 Xiao Fu, Kejun Huang, Nicholas D. Sidiropoulos

In this letter, we propose a new identification criterion that guarantees the recovery of the low-rank latent factors in the nonnegative matrix factorization (NMF) model, under mild conditions.

Anchor-Free Correlated Topic Modeling: Identifiability and Algorithm

no code implementations NeurIPS 2016 Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos

In topic modeling, many algorithms that guarantee identifiability of the topics have been developed under the premise that there exist anchor words, i.e., words that appear (with positive probability) in only one topic.

Clustering

Tensor Decomposition for Signal Processing and Machine Learning

no code implementations6 Jul 2016 Nicholas D. Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos E. Papalexakis, Christos Faloutsos

Tensors, or multi-way arrays, are functions of three or more indices $(i, j, k, \dots)$, similar to matrices (two-way arrays), which are functions of two indices $(r, c)$ for (row, column).
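
As a tiny illustration of the multi-way-array viewpoint and the low-rank (CPD) model surveyed in the paper (names and sizes below are arbitrary):

    import numpy as np

    # A rank-R three-way array X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    I, J, K, R = 4, 5, 6, 3
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
    X = np.einsum('ir,jr,kr->ijk', A, B, C)
    print(X.shape)  # (4, 5, 6): a function of three indices (i, j, k)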

BIG-bench Machine Learning, Collaborative Filtering, +1

Scalable and Flexible Multiview MAX-VAR Canonical Correlation Analysis

no code implementations31 May 2016 Xiao Fu, Kejun Huang, Mingyi Hong, Nicholas D. Sidiropoulos, Anthony Man-Cho So

Generalized canonical correlation analysis (GCCA) aims at finding latent low-dimensional common structure from multiple views (feature vectors in different domains) of the same entities.
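
For context, a small-scale sketch of the classical MAX-VAR formulation that the paper scales up, solved via an eigendecomposition (the regularization and variable names are illustrative assumptions, not the paper's algorithm):

    import numpy as np

    def maxvar_gcca(views, k, reg=1e-6):
        """Classical MAX-VAR GCCA: common representation G with orthonormal columns.

        G is taken as the top-k eigenvectors of the sum of per-view projection
        matrices; each view's loading is Q_i = pinv(X_i) @ G.
        """
        n = views[0].shape[0]
        M = np.zeros((n, n))
        for X in views:
            M += X @ np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T)
        vals, vecs = np.linalg.eigh(M)
        G = vecs[:, -k:]  # eigenvectors for the k largest eigenvalues
        Q = [np.linalg.pinv(X) @ G for X in views]
        return G, Q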

Joint Tensor Factorization and Outlying Slab Suppression with Applications

no code implementations16 Jul 2015 Xiao Fu, Kejun Huang, Wing-Kin Ma, Nicholas D. Sidiropoulos, Rasmus Bro

Convergence of the proposed algorithm is also easy to analyze under the framework of alternating optimization and its variants.

Speech Separation

A Flexible and Efficient Algorithmic Framework for Constrained Matrix and Tensor Factorization

no code implementations13 Jun 2015 Kejun Huang, Nicholas D. Sidiropoulos, Athanasios P. Liavas

We propose a general algorithmic framework for constrained matrix and tensor factorization, which is widely used in signal processing and machine learning.
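
A condensed sketch in the spirit of such a framework, specialized here to nonnegative matrix factorization: alternating optimization over the two factors, with each constrained least-squares subproblem handled by a few ADMM iterations (parameters and initialization are illustrative):

    import numpy as np

    def admm_nn_ls(X, B, A_tilde, U, rho=1.0, inner=10):
        """ADMM inner solver for min_{A >= 0} 0.5 * ||X - A @ B.T||_F^2."""
        G = B.T @ B + rho * np.eye(B.shape[1])
        XB = X @ B
        for _ in range(inner):
            A = np.linalg.solve(G, (XB + rho * (A_tilde - U)).T).T  # least-squares step
            A_tilde = np.maximum(A + U, 0.0)  # proximal (projection) step
            U = U + A - A_tilde               # dual update
        return A_tilde, U

    def ao_admm_nmf(X, R, outer=50, seed=0):
        """Alternating optimization with ADMM subproblems, sketched for NMF."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        A = np.abs(rng.standard_normal((m, R)))
        B = np.abs(rng.standard_normal((n, R)))
        UA, UB = np.zeros_like(A), np.zeros_like(B)
        for _ in range(outer):
            A, UA = admm_nn_ls(X, B, A, UA)
            B, UB = admm_nn_ls(X.T, A, B, UB)
        return A, B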

Dictionary Learning
