1 code implementation • 18 Apr 2024 • Jiayi Liang, Haotian Liu, Hongteng Xu, Dixin Luo
Given a pair of real and stylized facial images, the conditional face warper predicts a warping field from the real face to the stylized one; the face landmarker then predicts the endpoints of this warping field, providing high-quality pseudo landmarks for the corresponding stylized facial images.
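The core operation the predicted warping field drives is generic image warping. A minimal numpy sketch of applying a dense displacement field with bilinear interpolation (the warping op itself, not the paper's learned predictor; the flow convention and toy image are assumptions for illustration):

```python
import numpy as np

def warp_image(img, flow):
    """Warp an image with a dense displacement field via bilinear
    interpolation. flow[y, x] holds a (dy, dx) offset telling each
    output pixel where to sample in the source image. (Generic warping
    op; the paper's conditional face warper *predicts* such a field.)"""
    H, W = img.shape[:2]
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[..., 0], 0, H - 1)     # sampling coordinates
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0                     # bilinear weights
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# toy usage: shift a horizontal gradient one pixel to the right
img = np.tile(np.arange(8, dtype=float), (8, 1))
flow = np.zeros((8, 8, 2))
flow[..., 1] = -1.0                               # sample one pixel to the left
warped = warp_image(img, flow)
```

In the paper's setting, the predicted field's endpoints on the stylized image play the role of landmarks; here the field is simply a constant shift.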
no code implementations • 18 Oct 2023 • Haoran Cheng, Dixin Luo, Hongteng Xu
Given two graphs, we align their node embeddings within the same modality and across different modalities, respectively.
no code implementations • 29 May 2021 • Hongteng Xu, Peilin Zhao, Junzhou Huang, Dixin Luo
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons (and the corresponding observed graphs).
no code implementations • 4 Feb 2021 • Hongteng Xu, Dixin Luo, Hongyuan Zha
We propose a novel framework for modeling multiple multivariate point processes, each with heterogeneous event types that share an underlying space and obey the same generative mechanism.
1 code implementation • 10 Dec 2020 • Hongteng Xu, Dixin Luo, Lawrence Carin, Hongyuan Zha
Accordingly, given a set of graphs generated by an underlying graphon, we learn the corresponding step function as the Gromov-Wasserstein barycenter of the given graphs.
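A graphon estimate in step-function form can be illustrated with the classic "sort-and-average" baseline on a single observed graph: order nodes by degree, then average edges within blocks. This is a simplified stand-in, not the paper's Gromov-Wasserstein barycenter over multiple graphs; the block model and block count below are assumptions:

```python
import numpy as np

def estimate_step_function(adjacency, num_blocks):
    """Estimate a step-function graphon from one observed graph by
    sorting nodes by degree and block-averaging the adjacency matrix.
    (A classic baseline; the paper instead aligns multiple graphs via
    a Gromov-Wasserstein barycenter.)"""
    n = adjacency.shape[0]
    order = np.argsort(-adjacency.sum(axis=1))    # sort nodes by degree
    A = adjacency[np.ix_(order, order)]
    blocks = np.array_split(np.arange(n), num_blocks)
    W = np.zeros((num_blocks, num_blocks))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            W[i, j] = A[np.ix_(bi, bj)].mean()    # step-function value
    return W

# toy usage: a two-block stochastic block model with 20 nodes per block
rng = np.random.default_rng(0)
P = np.kron(np.array([[0.9, 0.05], [0.05, 0.5]]), np.ones((20, 20)))
A = (rng.random(P.shape) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
W_hat = estimate_step_function(A, 2)
```

The recovered 2x2 step function should roughly match the generating block probabilities, with diagonal entries clearly above the off-diagonal ones.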
no code implementations • 4 Jun 2020 • Dixin Luo, Hongteng Xu, Lawrence Carin
Traditional multi-view learning methods often rely on two assumptions: ($i$) the samples in different views are well-aligned, and ($ii$) their representations in latent space obey the same distribution.
2 code implementations • ICML 2020 • Hongteng Xu, Dixin Luo, Ricardo Henao, Svati Shah, Lawrence Carin
A new algorithmic framework is proposed for learning autoencoders of data distributions.
no code implementations • 4 Oct 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin
Accordingly, the learned optimal transport reflects the correspondence between the event types of these two Hawkes processes.
no code implementations • 20 Jun 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin
Instead of learning a mixture model directly from a set of event sequences drawn from different Hawkes processes, the proposed method learns the target model iteratively: it generates "easy" sequences and uses them in an adversarial and self-paced manner.
no code implementations • 13 Jun 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin
The proposed method achieves clinically-interpretable embeddings of ICD codes, and outperforms state-of-the-art embedding methods in procedure recommendation.
1 code implementation • NeurIPS 2019 • Hongteng Xu, Dixin Luo, Lawrence Carin
Using this concept, we extend our method to multi-graph partitioning and matching by learning a Gromov-Wasserstein barycenter graph for multiple observed graphs; the barycenter graph plays the role of the disconnected graph, and since it is learned, so is the clustering.
2 code implementations • 17 Jan 2019 • Hongteng Xu, Dixin Luo, Hongyuan Zha, Lawrence Carin
A novel Gromov-Wasserstein learning framework is proposed to jointly match (align) graphs and learn embedding vectors for the associated graph nodes.
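The graph-matching side of such a framework can be sketched with entropic Gromov-Wasserstein: alternate a square-loss gradient step with a Sinkhorn projection. A minimal numpy sketch (the toy 4-cycle graph, uniform node distributions, and hyperparameters are assumptions; the paper's framework additionally learns node embeddings jointly):

```python
import numpy as np

def entropic_gw(C1, C2, p, q, reg=0.1, outer=30, inner=100):
    """Entropic Gromov-Wasserstein coupling between two graphs given as
    structure matrices C1, C2 with node distributions p, q. Uses the
    square-loss decomposition: the gradient is constC - 2*C1 @ T @ C2.T."""
    n, m = len(p), len(q)
    T = np.outer(p, q)                            # independent-coupling init
    constC = (np.outer((C1**2) @ p, np.ones(m))
              + np.outer(np.ones(n), (C2**2) @ q))
    for _ in range(outer):
        tens = constC - 2.0 * C1 @ T @ C2.T       # gradient of the GW loss
        K = np.exp(-tens / reg)
        u = np.ones(n)
        for _ in range(inner):                    # Sinkhorn projection
            v = q / (K.T @ u)
            u = p / (K @ v)
        T = u[:, None] * K * v[None, :]
    return T

# toy usage: couple a 4-node graph with a permuted copy of itself
C1 = np.array([[0, 1, 1, 0],
               [1, 0, 0, 1],
               [1, 0, 0, 1],
               [0, 1, 1, 0]], dtype=float)        # a 4-cycle
perm = [2, 0, 3, 1]
C2 = C1[np.ix_(perm, perm)]
p = q = np.full(4, 0.25)
T = entropic_gw(C1, C2, p, q)
```

The coupling `T` respects both node distributions; its large entries indicate candidate node matches between the two graphs.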
no code implementations • 14 Oct 2017 • Hongteng Xu, Dixin Luo, Xu Chen, Lawrence Carin
The superposition of Hawkes processes is shown to tighten the upper bound of excess risk under certain conditions, and we demonstrate that this benefit is attainable in typical situations.
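Superposing Hawkes processes is mechanically simple: simulate each process and merge the event times. A minimal numpy sketch using Ogata's thinning algorithm for a univariate Hawkes process with exponential kernel (the parameter values are assumptions for illustration):

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Simulate a univariate Hawkes process via Ogata's thinning.
    Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta*(t - t_i)).
    Requires alpha < beta for stability."""
    events = []
    t, lam = 0.0, mu                      # lam upper-bounds the intensity
    while True:
        t += rng.exponential(1.0 / lam)   # candidate event time
        if t > T:
            return np.array(events)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam:   # accept with prob lambda(t)/lam
            events.append(t)
            lam = lam_t + alpha           # intensity jumps at the event
        else:
            lam = lam_t                   # tighten the bound and continue

# toy usage: superpose two independent Hawkes processes on [0, 50]
rng = np.random.default_rng(1)
s1 = simulate_hawkes(mu=0.5, alpha=0.4, beta=1.0, T=50.0, rng=rng)
s2 = simulate_hawkes(mu=0.8, alpha=0.2, beta=1.0, T=50.0, rng=rng)
superposed = np.sort(np.concatenate([s1, s2]))
```

The merged sequence is itself a realization of a point process whose intensity is the sum of the two component intensities; the paper analyzes how learning from such superposed data affects excess risk.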
1 code implementation • ICML 2017 • Hongteng Xu, Dixin Luo, Hongyuan Zha
Many real-world applications require robust algorithms to learn point processes based on a type of incomplete data: the so-called short doubly-censored (SDC) event sequences.