Search Results for author: Dixin Luo

Found 14 papers, 6 papers with code

Generalizable Face Landmarking Guided by Conditional Face Warping

1 code implementation • 18 Apr 2024 • Jiayi Liang, Haotian Liu, Hongteng Xu, Dixin Luo

Given a pair of real and stylized facial images, the conditional face warper predicts a warping field from the real face to the stylized one; the face landmarker then predicts the ending points of this warping field, yielding high-quality pseudo landmarks for the corresponding stylized facial images.
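
A minimal sketch of the pseudo-labeling step described above, assuming the warping field is a dense per-pixel displacement map and landmarks are pixel coordinates (all names here are illustrative, not from the paper's released code):

```python
import numpy as np

def pseudo_landmarks(real_landmarks: np.ndarray, warp_field: np.ndarray) -> np.ndarray:
    """Map landmarks detected on the real face onto the stylized face.

    real_landmarks: (K, 2) array of (x, y) pixel coordinates.
    warp_field:     (H, W, 2) displacement field from real to stylized image.
    Returns the (K, 2) "ending points" of the warping field, i.e. pseudo
    landmarks for the stylized image.
    """
    xs = real_landmarks[:, 0].round().astype(int)
    ys = real_landmarks[:, 1].round().astype(int)
    displacement = warp_field[ys, xs]      # sample the field at each landmark
    return real_landmarks + displacement   # ending point = start + displacement
```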

Domain Adaptation

Robust Graph Matching Using An Unbalanced Hierarchical Optimal Transport Framework

no code implementations • 18 Oct 2023 • Haoran Cheng, Dixin Luo, Hongteng Xu

Given two graphs, we align their node embeddings both within the same modality and across different modalities.
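
The paper's unbalanced hierarchical formulation is more involved, but the basic within-modality alignment step can be sketched with entropic optimal transport via the POT library (uniform node weights are an assumption of this sketch):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def align_embeddings(X: np.ndarray, Y: np.ndarray, reg: float = 0.05) -> np.ndarray:
    """Entropic OT plan between two sets of node embeddings X (n, d), Y (m, d)."""
    a = np.full(X.shape[0], 1.0 / X.shape[0])   # uniform node weights
    b = np.full(Y.shape[0], 1.0 / Y.shape[0])
    M = ot.dist(X, Y)                           # pairwise squared Euclidean costs
    return ot.sinkhorn(a, b, M, reg)            # (n, m) soft correspondence
```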

Graph Matching

Learning Graphon Autoencoders for Generative Graph Modeling

no code implementations • 29 May 2021 • Hongteng Xu, Peilin Zhao, Junzhou Huang, Dixin Luo

A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons (and the corresponding observed graphs).
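
A minimal sketch of such a decoder, assuming a latent vector z and a fixed set of basis functions evaluated on a grid (both hypothetical here), followed by sampling an induced graph:

```python
import numpy as np

def decode_graphon(z: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Linear graphon factorization: W(x, y) = sum_k z_k * phi_k(x) phi_k(y).

    z:     (K,) latent representation of a graph.
    basis: (K, R) rows are basis functions evaluated on R grid points.
    Returns an (R, R) step-function approximation of the graphon.
    """
    W = basis.T @ np.diag(z) @ basis      # (R, R) symmetric graphon values
    return np.clip(W, 0.0, 1.0)           # graphons take values in [0, 1]

def sample_graph(W: np.ndarray, n: int, seed: int = 0) -> np.ndarray:
    """Sample an n-node graph induced by the graphon W."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)                              # latent node positions
    idx = np.minimum((u * W.shape[0]).astype(int), W.shape[0] - 1)
    P = W[np.ix_(idx, idx)]                              # edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(int)
    return np.triu(A, 1) + np.triu(A, 1).T               # symmetric, no self-loops
```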

Hawkes Processes on Graphons

no code implementations • 4 Feb 2021 • Hongteng Xu, Dixin Luo, Hongyuan Zha

We propose a novel framework for modeling multiple multivariate point processes, each with heterogeneous event types that share an underlying space and obey the same generative mechanism.
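
As a point of reference, the intensity of a standard multivariate Hawkes process with exponential triggering kernels looks like the sketch below; the paper's contribution is to generate the parameters (mu, alpha) for each process from a shared graphon, which this sketch omits:

```python
import numpy as np

def hawkes_intensity(t, history, mu, alpha, beta):
    """Intensity of all event types at time t for a multivariate Hawkes process.

    history: list of (timestamp, event_type) pairs with timestamp < t.
    mu:      (C,) baseline rates; alpha: (C, C) excitation; beta: decay rate.
    Returns a (C,) vector of intensities, one per event type.
    """
    lam = mu.copy()
    for t_j, c_j in history:
        lam += alpha[:, c_j] * np.exp(-beta * (t - t_j))  # triggering kernel
    return lam
```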

Point Processes

Learning Graphons via Structured Gromov-Wasserstein Barycenters

1 code implementation • 10 Dec 2020 • Hongteng Xu, Dixin Luo, Lawrence Carin, Hongyuan Zha

Accordingly, given a set of graphs generated by an underlying graphon, we learn the corresponding step function as the Gromov-Wasserstein barycenter of the given graphs.
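
Assuming adjacency matrices as structure matrices and uniform weights, this step can be sketched with POT's Gromov-Wasserstein barycenter routine (the structural regularizers the paper adds are omitted here):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def graphon_step_function(adjacency_list, resolution=20):
    """Estimate a graphon's step function as the GW barycenter of observed graphs.

    adjacency_list: list of (n_i, n_i) adjacency matrices sampled from one graphon.
    resolution:     number of steps (support size of the barycenter).
    """
    Cs = [A.astype(float) for A in adjacency_list]
    ps = [np.full(len(C), 1.0 / len(C)) for C in Cs]   # uniform node weights
    p = np.full(resolution, 1.0 / resolution)
    lambdas = [1.0 / len(Cs)] * len(Cs)                # equal graph weights
    return ot.gromov.gromov_barycenters(
        resolution, Cs, ps=ps, p=p, lambdas=lambdas, loss_fun='square_loss')
```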

LEMMA

Hierarchical Optimal Transport for Robust Multi-View Learning

no code implementations • 4 Jun 2020 • Dixin Luo, Hongteng Xu, Lawrence Carin

Traditional multi-view learning methods often rely on two assumptions: (i) the samples in different views are well-aligned, and (ii) their representations in latent space obey the same distribution.

Clustering • Multi-View Learning

Fused Gromov-Wasserstein Alignment for Hawkes Processes

no code implementations • 4 Oct 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin

Accordingly, the learned optimal transport reflects the correspondence between the event types of these two Hawkes processes.
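
A hedged sketch of this alignment using POT's fused Gromov-Wasserstein solver, assuming the two processes are summarized by learned infectivity matrices (structures) and base intensities (features); the exact costs used in the paper may differ:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def align_event_types(A1, A2, mu1, mu2, alpha=0.5):
    """Fused GW plan between the event types of two Hawkes processes.

    A1, A2:   (C1, C1), (C2, C2) learned infectivity matrices (structures).
    mu1, mu2: (C1,), (C2,) base intensities (features).
    Returns a (C1, C2) transport plan; large entries mark corresponding types.
    """
    p = np.full(len(mu1), 1.0 / len(mu1))
    q = np.full(len(mu2), 1.0 / len(mu2))
    M = (mu1[:, None] - mu2[None, :]) ** 2      # feature (base-rate) cost
    return ot.gromov.fused_gromov_wasserstein(M, A1, A2, p, q, alpha=alpha)
```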

Adversarial Self-Paced Learning for Mixture Models of Hawkes Processes

no code implementations • 20 Jun 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin

Instead of learning a mixture model directly from a set of event sequences drawn from different Hawkes processes, the proposed method learns the target model iteratively, generating "easy" sequences and using them in an adversarial and self-paced manner.
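
The self-paced ingredient can be sketched as hard sample weighting under a growing threshold; the adversarial generator that produces the "easy" sequences is omitted (all names are illustrative):

```python
import numpy as np

def self_paced_weights(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Hard self-paced weighting: keep sequences whose current loss is below
    the threshold, so "easy" samples dominate early training."""
    return (losses < threshold).astype(float)

# Illustrative schedule: raise the threshold each round so harder sequences
# gradually enter the objective as the model improves, e.g. threshold *= 1.1.
```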

Data Augmentation

Interpretable ICD Code Embeddings with Self- and Mutual-Attention Mechanisms

no code implementations • 13 Jun 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin

The proposed method achieves clinically-interpretable embeddings of ICD codes, and outperforms state-of-the-art embedding methods in procedure recommendation.
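
A minimal self-attention sketch over a set of code embeddings; the attention matrix itself is what one would inspect for interpretability (learned projections and the mutual-attention branch are omitted):

```python
import numpy as np

def self_attention(E: np.ndarray) -> np.ndarray:
    """Single-head self-attention over ICD code embeddings E (n, d).

    The intermediate (n, n) attention matrix is directly inspectable, which is
    the kind of interpretability the paper targets.
    """
    scores = E @ E.T / np.sqrt(E.shape[1])            # scaled dot products
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return attn @ E                                   # attended embeddings
```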

Scalable Gromov-Wasserstein Learning for Graph Partitioning and Matching

1 code implementation • NeurIPS 2019 • Hongteng Xu, Dixin Luo, Lawrence Carin

Using this concept, we extend our method to multi-graph partitioning and matching by learning a Gromov-Wasserstein barycenter graph for multiple observed graphs; the barycenter graph plays the role of the disconnected graph, and since it is learned, so is the clustering.
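
Once the transport plan between an observed graph and the K-node barycenter (disconnected) graph is available, the partition can be read off directly; a minimal sketch:

```python
import numpy as np

def partition_from_plan(T: np.ndarray) -> np.ndarray:
    """T: (n, K) optimal transport plan between an observed graph's n nodes
    and the K nodes of the barycenter graph. Each node joins the cluster
    (barycenter node) to which it sends the most mass."""
    return T.argmax(axis=1)
```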

Clustering • Graph Matching • +1

Gromov-Wasserstein Learning for Graph Matching and Node Embedding

2 code implementations • 17 Jan 2019 • Hongteng Xu, Dixin Luo, Hongyuan Zha, Lawrence Carin

A novel Gromov-Wasserstein learning framework is proposed to jointly match (align) graphs and learn embedding vectors for the associated graph nodes.
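
A sketch of the matching half using POT's Gromov-Wasserstein solver with uniform node weights; the joint learning of node embeddings, which the paper couples with the transport, is omitted:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def match_nodes(A1: np.ndarray, A2: np.ndarray) -> np.ndarray:
    """GW matching of two graphs given their adjacency matrices."""
    p = np.full(len(A1), 1.0 / len(A1))
    q = np.full(len(A2), 1.0 / len(A2))
    T = ot.gromov.gromov_wasserstein(
        A1.astype(float), A2.astype(float), p, q, loss_fun='square_loss')
    return T.argmax(axis=1)   # for each node of graph 1, its match in graph 2
```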

Graph Matching

Benefits from Superposed Hawkes Processes

no code implementations • 14 Oct 2017 • Hongteng Xu, Dixin Luo, Xu Chen, Lawrence Carin

We demonstrate that superposing Hawkes processes tightens the upper bound of the excess risk under certain conditions, and we show that this benefit is attainable in typical situations.
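
Superposition here simply means merging event sequences on a common timeline (the superposed process has the summed intensity); a one-function sketch:

```python
import heapq

def superpose(*sequences):
    """Merge time-sorted event sequences of (timestamp, event_type) tuples
    into a single superposed sequence, preserving time order."""
    return list(heapq.merge(*sequences, key=lambda event: event[0]))
```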

Point Processes • Recommendation Systems

Learning Hawkes Processes from Short Doubly-Censored Event Sequences

1 code implementation • ICML 2017 • Hongteng Xu, Dixin Luo, Hongyuan Zha

Many real-world applications require robust algorithms to learn point processes based on a type of incomplete data: the so-called short doubly-censored (SDC) event sequences.
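
Concretely, an SDC sequence is what remains of a full event sequence after restricting it to a short observation window whose endpoints censor the true start and end; a minimal sketch:

```python
def doubly_censor(sequence, t0, t1):
    """Keep only the events of a (timestamp, event_type) sequence that fall
    inside the observation window [t0, t1]; events outside are unobserved,
    so the result is doubly censored (and "short" when t1 - t0 is small)."""
    return [(t, c) for (t, c) in sequence if t0 <= t <= t1]
```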

Point Processes
