1 code implementation • 18 Apr 2024 • Jiayi Liang, Haotian Liu, Hongteng Xu, Dixin Luo
Given a pair of real and stylized facial images, the conditional face warper predicts a warping field from the real face to the stylized one, in which the face landmarker predicts the ending points of the warping field and provides us with high-quality pseudo landmarks for the corresponding stylized facial images.
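A dense warping field is simply a per-pixel displacement from source to target positions. The sketch below applies such a field with bilinear interpolation; it is an illustration only, and the function name `apply_warp` and the toy zero field are hypothetical, not the paper's conditional face warper:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_warp(image, flow):
    """Warp a grayscale image with a dense displacement field.
    image: (H, W) array; flow: (2, H, W) displacements (dy, dx)."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample the image at displaced coordinates with bilinear interpolation.
    coords = np.stack([yy + flow[0], xx + flow[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")
```

With a zero displacement field the warp is the identity, which makes a convenient sanity check.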
1 code implementation • 4 Apr 2024 • Zhongxiang Sun, Zihua Si, Xiao Zhang, Xiaoxue Zang, Yang Song, Hongteng Xu, Jun Xu
The model, referred to as the Neural Hawkes Process-based Open-App Motivation prediction model (NHP-OAM), employs a hierarchical transformer and a novel intensity function to encode multiple factors, and an open-app motivation prediction layer to integrate time and user-specific information for predicting users' open-app motivations.
no code implementations • 1 Mar 2024 • Jiaqi Han, Jiacheng Cen, Liming Wu, Zongzhao Li, Xiangzhe Kong, Rui Jiao, Ziyang Yu, Tingyang Xu, Fandi Wu, Zihe Wang, Hongteng Xu, Zhewei Wei, Yang Liu, Yu Rong, Wenbing Huang
A geometric graph is a special kind of graph with geometric features, which is vital for modeling many scientific problems.
1 code implementation • 26 Oct 2023 • Shen Yuan, Hongteng Xu
The effectiveness of the Transformer is often attributed to its multi-head attention (MHA) mechanism.
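As context for the MHA mechanism the paper studies, here is a minimal numpy sketch of standard scaled dot-product multi-head attention; the weight matrices and head split are illustrative, not tied to any particular implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Minimal multi-head attention.
    X: (T, d) token embeddings; Wq/Wk/Wv/Wo: (d, d) projections."""
    T, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        # Scaled dot-product attention within each head.
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))
        heads.append(A @ V[:, s])
    # Concatenate heads and apply the output projection.
    return np.concatenate(heads, axis=1) @ Wo
```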
no code implementations • 18 Oct 2023 • Haoran Cheng, Dixin Luo, Hongteng Xu
Given two graphs, we align their node embeddings within the same modality and across different modalities, respectively.
1 code implementation • 18 Oct 2023 • Minjie Cheng, Hongteng Xu
To eliminate such inconsistency, in this study we propose a novel Quasi-Wasserstein (QW) loss with the help of the optimal transport defined on graphs, leading to new learning and prediction paradigms of GNNs.
no code implementations • 28 Jan 2023 • Xiangfeng Wang, Hongteng Xu, Moyi Yang
Privacy-preserving distributed distribution comparison measures the distance between the distributions whose data are scattered across different agents in a distributed system and cannot be shared among the agents.
1 code implementation • 6 Jan 2023 • Chuhao Jin, Hongteng Xu, Ruihua Song, Zhiwu Lu
Poster generation is a significant task for a wide range of applications, which is often time-consuming and requires lots of manual editing and artistic experience.
no code implementations • CVPR 2023 • Jiechao Yang, Yong Liu, Hongteng Xu
To address these issues, we propose a hierarchical optimal transport metric called HOTNN for measuring the similarity of different networks.
1 code implementation • 13 Dec 2022 • Hongteng Xu, Minjie Cheng
Making the parameters of the ROT problem learnable, we develop a family of regularized optimal transport pooling (ROTP) layers.
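The OT-based pooling idea can be illustrated with a toy sketch: a Sinkhorn-normalized transport plan between instance embeddings and a few pooling "slots" supplies the pooling weights. This is a hedged illustration, not the paper's ROTP layer; `ot_pool`, the prototype matrix `P`, and the similarity-based cost are all hypothetical choices:

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, n_iter=200):
    """Entropic optimal transport: a plan T with row sums a and column sums b."""
    K = np.exp(-(C - C.min()) / eps)   # shift cost for numerical stability
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def ot_pool(X, P, eps=0.1):
    """Pool N instance embeddings X (N, d) into k vectors. P (k, d) are
    hypothetical slot prototypes; the Sinkhorn plan between instances and
    slots supplies the pooling weights for each slot."""
    N, k = X.shape[0], P.shape[0]
    C = -X @ P.T                       # toy cost: negative similarity
    T = sinkhorn(C, np.full(N, 1.0 / N), np.full(k, 1.0 / k), eps)
    W = T / T.sum(0, keepdims=True)    # column-normalize: weights per slot
    return W.T @ X                     # (k, d) pooled output
```

Making `P` (and the entropic weight `eps`) learnable is the kind of parameterization the snippet above alludes to.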
no code implementations • 18 Sep 2022 • Yang Zhang, Gengmo Zhou, Zhewei Wei, Hongteng Xu
The prediction of protein-ligand binding affinity is of great significance for discovering lead compounds in drug research.
1 code implementation • 16 Sep 2022 • Lanqing Li, Liang Zeng, Ziqi Gao, Shen Yuan, Yatao Bian, Bingzhe Wu, Hengtong Zhang, Yang Yu, Chan Lu, Zhipeng Zhou, Hongteng Xu, Jia Li, Peilin Zhao, Pheng-Ann Heng
The last decade has witnessed a prosperous development of computational methods and dataset curation for AI-aided drug discovery (AIDD).
1 code implementation • ChemRxiv 2022 • Gengmo Zhou, Zhifeng Gao, Qiankun Ding, Hang Zheng, Hongteng Xu, Zhewei Wei, Linfeng Zhang, Guolin Ke
Uni-Mol is composed of two models with the same SE(3)-equivariant transformer architecture: a molecular pretraining model trained on 209M molecular conformations, and a pocket pretraining model trained on 3M candidate protein pockets.
Ranked #1 on Molecular Property Prediction on MUV
1 code implementation • 9 Jul 2022 • Weijie Yu, Zhongxiang Sun, Jun Xu, Zhenhua Dong, Xu Chen, Hongteng Xu, Ji-Rong Wen
As an essential operation of legal retrieval, legal case matching plays a central role in intelligent legal systems.
1 code implementation • 30 May 2022 • Tao Li, Cheng Meng, Hongteng Xu, Jun Yu
Distribution comparison plays a central role in many machine learning tasks like data classification and generative modeling.
1 code implementation • 26 May 2022 • Mengyu Li, Jun Yu, Hongteng Xu, Cheng Meng
As a valid metric of metric-measure spaces, Gromov-Wasserstein (GW) distance has shown the potential for matching problems of structured data like point clouds and graphs.
no code implementations • 20 Apr 2022 • Tiancheng Lin, Hongteng Xu, Canqian Yang, Yi Xu
When applying multi-instance learning (MIL) to make predictions for bags of instances, the prediction accuracy of an instance often depends on not only the instance itself but also its context in the corresponding bag.
1 code implementation • 23 Jan 2022 • Minjie Cheng, Hongteng Xu
Global pooling is one of the most significant operations in many machine learning models and tasks, whose implementation, however, is often empirical in practice.
1 code implementation • NeurIPS 2021 • Mingguo He, Zhewei Wei, Zengfeng Huang, Hongteng Xu
Many representative graph neural networks, e.g., GPR-GNN and ChebNet, approximate graph convolutions with graph spectral filters.
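Such spectral filters are typically realized as polynomials of the normalized graph Laplacian. A minimal sketch of a Chebyshev-basis filter in the style of ChebNet follows; the coefficients and the toy graph are illustrative, and this is not the specific filter proposed in the paper:

```python
import numpy as np

def cheb_filter(adj, x, coeffs):
    """Apply a Chebyshev polynomial spectral filter sum_k c_k T_k(L_hat) x.
    L_hat = L - I rescales the normalized Laplacian L (spectrum in [0, 2])
    to [-1, 1], the domain of the Chebyshev basis."""
    n = len(adj)
    deg = adj.sum(1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    L_hat = L - np.eye(n)
    # Chebyshev recurrence: T_0 = x, T_1 = L_hat x, T_k = 2 L_hat T_{k-1} - T_{k-2}.
    t_prev, t_curr = x, L_hat @ x
    out = coeffs[0] * t_prev
    if len(coeffs) > 1:
        out = out + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2 * L_hat @ t_curr - t_prev
        out = out + c * t_curr
    return out
```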
no code implementations • 29 May 2021 • Hongteng Xu, Peilin Zhao, Junzhou Huang, Dixin Luo
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons (and the corresponding observed graphs).
no code implementations • 4 Feb 2021 • Hongteng Xu, Dixin Luo, Hongyuan Zha
We propose a novel framework for modeling multiple multivariate point processes, each with heterogeneous event types that share an underlying space and obey the same generative mechanism.
1 code implementation • 10 Dec 2020 • Hongteng Xu, Dixin Luo, Lawrence Carin, Hongyuan Zha
Accordingly, given a set of graphs generated by an underlying graphon, we learn the corresponding step function as the Gromov-Wasserstein barycenter of the given graphs.
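For intuition about step-function graphons, here is a minimal single-graph estimator in the spirit of sorting-and-smoothing methods: sort nodes by degree (a proxy for latent positions), partition them into blocks, and average the adjacency within each block pair. The barycenter-based method in the paper is more involved; this function and its block scheme are purely illustrative:

```python
import numpy as np

def step_graphon_estimate(adj, n_blocks):
    """Estimate a step-function graphon from one graph's adjacency matrix
    by degree-sorting nodes and block-averaging the sorted adjacency."""
    order = np.argsort(adj.sum(1))
    A = adj[np.ix_(order, order)]
    n = len(A)
    edges = np.linspace(0, n, n_blocks + 1).astype(int)
    W = np.zeros((n_blocks, n_blocks))
    for i in range(n_blocks):
        for j in range(n_blocks):
            # Each entry is the edge density of one block pair.
            W[i, j] = A[edges[i]:edges[i + 1], edges[j]:edges[j + 1]].mean()
    return W
```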
no code implementations • ICLR 2021 • Yujia Xie, Yixiu Mao, Simiao Zuo, Hongteng Xu, Xiaojing Ye, Tuo Zhao, Hongyuan Zha
Due to the combinatorial nature of the problem, most existing methods are only applicable when the sample size is small, and limited to linear regression models.
no code implementations • 4 Jun 2020 • Dixin Luo, Hongteng Xu, Lawrence Carin
Traditional multi-view learning methods often rely on two assumptions: ($i$) the samples in different views are well-aligned, and ($ii$) their representations in latent space obey the same distribution.
2 code implementations • ICML 2020 • Hongteng Xu, Dixin Luo, Ricardo Henao, Svati Shah, Lawrence Carin
A new algorithmic framework is proposed for learning autoencoders of data distributions.
1 code implementation • CVPR 2020 • Xuan Zhang, Shaofei Qin, Yi Xu, Hongteng Xu
We propose a novel quaternion product unit (QPU) to represent data on 3D rotation groups.
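The algebraic core of such a unit is quaternion multiplication: composing unit quaternions composes the 3D rotations they represent, so chained Hamilton products stay on the rotation group. A minimal sketch (the weighting scheme of the actual QPU is omitted):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions in (w, x, y, z) order.
    For unit quaternions, this composes the corresponding 3D rotations."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
```

The product of unit quaternions is again a unit quaternion, which is what keeps such a layer on the rotation group.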
no code implementations • 20 Nov 2019 • Wenlin Wang, Hongteng Xu, Zhe Gan, Bai Li, Guoyin Wang, Liqun Chen, Qian Yang, Wenqi Wang, Lawrence Carin
We propose a novel graph-driven generative model that unifies multiple heterogeneous learning tasks into the same framework.
1 code implementation • 19 Nov 2019 • Hongteng Xu
The model achieves a novel and flexible factorization mechanism under GW discrepancy, in which both the observed graphs and the learnable atoms can be unaligned and with different sizes.
no code implementations • ICLR 2020 • Wenlin Wang, Hongteng Xu, Ruiyi Zhang, Wenqi Wang, Piyush Rai, Lawrence Carin
To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate the user feedback.
no code implementations • 20 Oct 2019 • Wenlin Wang, Hongteng Xu, Guoyin Wang, Wenqi Wang, Lawrence Carin
Specifically, we build a conditional generative model to generate features from seen-class attributes, and establish an optimal transport between the distribution of the generated features and that of the real features.
no code implementations • 4 Oct 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin
Accordingly, the learned optimal transport reflects the correspondence between the event types of these two Hawkes processes.
no code implementations • 20 Jun 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin
Instead of learning a mixture model directly from a set of event sequences drawn from different Hawkes processes, the proposed method learns the target model iteratively, which generates "easy" sequences and uses them in an adversarial and self-paced manner.
no code implementations • 13 Jun 2019 • Dixin Luo, Hongteng Xu, Lawrence Carin
The proposed method achieves clinically-interpretable embeddings of ICD codes, and outperforms state-of-the-art embedding methods in procedure recommendation.
no code implementations • NAACL 2019 • Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, Lawrence Carin
We propose a topic-guided variational auto-encoder (TGVAE) model for text generation.
1 code implementation • NeurIPS 2019 • Hongteng Xu, Dixin Luo, Lawrence Carin
Using this concept, we extend our method to multi-graph partitioning and matching by learning a Gromov-Wasserstein barycenter graph for multiple observed graphs; the barycenter graph plays the role of the disconnected graph, and since it is learned, so is the clustering.
no code implementations • 17 Mar 2019 • Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, Lawrence Carin
We propose a topic-guided variational autoencoder (TGVAE) model for text generation.
1 code implementation • ECCV 2018 • Xuanyu Zhu, Yi Xu, Hongteng Xu, Changjian Chen
Neural networks in the real domain have been studied for a long time and have achieved promising results in many vision tasks in recent years.
2 code implementations • 17 Jan 2019 • Hongteng Xu, Dixin Luo, Hongyuan Zha, Lawrence Carin
A novel Gromov-Wasserstein learning framework is proposed to jointly match (align) graphs and learn embedding vectors for the associated graph nodes.
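As a minimal illustration of the Gromov-Wasserstein machinery underlying such frameworks (not the paper's learning algorithm), the GW objective for a fixed coupling can be evaluated without materializing the 4-way tensor, using a standard decomposition:

```python
import numpy as np

def gw_cost(C1, C2, T):
    """Gromov-Wasserstein objective for a coupling T between two graphs with
    intra-graph cost matrices C1 (n x n) and C2 (m x m):
    sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 T[i,j] T[k,l]."""
    a, b = T.sum(1), T.sum(0)          # marginals of the coupling
    const = (C1**2 @ a) @ a + (C2**2 @ b) @ b
    cross = np.sum((C1 @ T @ C2.T) * T)
    return const - 2 * cross
```

Matching two identical graphs with the identity coupling gives zero cost, as expected.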
1 code implementation • 23 Oct 2018 • Hongteng Xu
The goal of PoPPy is to provide a user-friendly solution to the key points above and to achieve large-scale point-process-based sequential data analysis, simulation, and prediction.
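A core building block of such toolkits is point-process simulation. Below is a minimal sketch of Ogata's thinning algorithm for a 1-D Hawkes process with an exponential kernel; it is an illustration of the technique, not PoPPy's API:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, rng):
    """Simulate a 1-D Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * beta * exp(-beta (t - t_i))
    via Ogata's thinning: propose from an upper bound, accept with
    probability lambda(t) / lambda_bar."""
    events, t = [], 0.0
    while True:
        # The intensity only decays between events, so its current value
        # upper-bounds it until the next accepted event.
        lam_bar = mu + sum(alpha * beta * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t > t_max:
            break
        lam_t = mu + sum(alpha * beta * np.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)
    return events
```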
no code implementations • NeurIPS 2018 • Hongteng Xu, Wenlin Wang, Wei Liu, Lawrence Carin
When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports.
no code implementations • 5 Sep 2018 • Matthew Engelhard, Hongteng Xu, Lawrence Carin, Jason A. Oliver, Matthew Hallyburton, F. Joseph McClernon
Health risks from cigarette smoking -- the leading cause of preventable death in the United States -- can be substantially reduced by quitting.
no code implementations • 20 Apr 2018 • Shiyu Ning, Hongteng Xu, Li Song, Rong Xie, Wenjun Zhang
Transferring a low-dynamic-range (LDR) image to a high-dynamic-range (HDR) image, the so-called inverse tone mapping (iTM), is an important imaging technique for improving the visual effects of imaging devices.
no code implementations • 13 Feb 2018 • Hongteng Xu, Xu Chen, Lawrence Carin
We consider the learning of multi-agent Hawkes processes, a model containing multiple Hawkes processes with shared endogenous impact functions and different exogenous intensities.
no code implementations • 31 Jan 2018 • Xu Chen, Yongfeng Zhang, Hongteng Xu, Yixin Cao, Zheng Qin, Hongyuan Zha
In this way, we can not only provide recommendation results to users, but also tell them why an item is recommended, by providing intuitive visual highlights in a personalized manner.
no code implementations • 25 Oct 2017 • Hongteng Xu, Licheng Yu, Mark Davenport, Hongyuan Zha
Active manifold learning aims to select and label representative landmarks on a manifold from a given set of samples to improve semi-supervised manifold learning.
no code implementations • 14 Oct 2017 • Hongteng Xu, Dixin Luo, Xu Chen, Lawrence Carin
The superposition of Hawkes processes is demonstrated to be beneficial for tightening the upper bound of excess risk under certain conditions, and we show that this benefit is attainable in typical situations.
no code implementations • ICML 2018 • Hongteng Xu, Lawrence Carin, Hongyuan Zha
A parametric point process model is developed, with modeling based on the assumption that sequential observations often share latent phenomena, while also possessing idiosyncratic effects.
no code implementations • 13 Sep 2017 • Lixue Zhuang, Yi Xu, Bingbing Ni, Hongteng Xu
In this work, we reveal an important fact that binarizing different layers has a widely-varied effect on the compression ratio of network and the loss of performance.
1 code implementation • 28 Aug 2017 • Hongteng Xu, Hongyuan Zha
As a powerful tool for asynchronous event sequence analysis, point processes have been studied for a long time and have achieved numerous successes in different fields.
1 code implementation • ICML 2017 • Hongteng Xu, Dixin Luo, Hongyuan Zha
Many real-world applications require robust algorithms to learn point processes based on a type of incomplete data: the so-called short doubly-censored (SDC) event sequences.
1 code implementation • NeurIPS 2017 • Hongteng Xu, Hongyuan Zha
We propose an effective method to solve the event sequence clustering problem based on a novel Dirichlet mixture model of a special but significant type of point process: the Hawkes process.
no code implementations • 10 Sep 2016 • Weiyao Lin, Yang Zhou, Hongteng Xu, Junchi Yan, Mingliang Xu, Jianxin Wu, Zicheng Liu
Our approach first leverages the complete information from given trajectories to construct a thermal transfer field which provides a context-rich way to describe the global motion pattern in a scene.
no code implementations • CVPR 2017 • Hongteng Xu, Junchi Yan, Nils Persson, Weiyao Lin, Hongyuan Zha
By adding a nonlinear post-processing step behind anisotropic filter banks, we demonstrate that the proposed filtering method is capable of preserving the local invariance of the fractal dimension of the image.
no code implementations • 14 Feb 2016 • Hongteng Xu, Mehrdad Farajtabar, Hongyuan Zha
In this paper, we propose an effective method for learning Granger causality for a special but significant type of point process: the Hawkes process.
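The Hawkes-process view of Granger causality rests on the infectivity matrix: with exponential kernels, entry A[u, v] governs how past type-v events excite type-u intensity, and a zero entry corresponds to the absence of Granger causality from v to u. The function below is an illustrative intensity computation under this parameterization, not the paper's learning method:

```python
import numpy as np

def mv_hawkes_intensity(t, history, mu, A, beta):
    """Intensity of each event type at time t for a multivariate Hawkes
    process with exponential kernels. history: list of (time, type) pairs;
    mu: (U,) base rates; A[u, v]: infectivity of type v on type u.
    A[u, v] == 0 means past type-v events never raise type-u intensity."""
    lam = mu.copy()
    for ti, vi in history:
        if ti < t:
            # Each past event adds an exponentially decaying excitation.
            lam += A[:, vi] * beta * np.exp(-beta * (t - ti))
    return lam
```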
no code implementations • 14 Feb 2016 • Hongteng Xu, Weichang Wu, Shamim Nemati, Hongyuan Zha
By treating a sequence of transition events as a point process, we develop a novel framework for modeling patient flow through various CUs and jointly predicting patients' destination CUs and their lengths of stay.
no code implementations • ICCV 2015 • Junchi Yan, Hongteng Xu, Hongyuan Zha, Xiaokang Yang, Huanxi Liu, Stephen Chu
Graph matching has a wide spectrum of real-world applications and is known to be NP-hard in general.
no code implementations • ICCV 2015 • Hongteng Xu, Yang Zhou, Weiyao Lin, Hongyuan Zha
Facing the challenges of trajectory clustering, e.g., large variations within a cluster and ambiguities across clusters, we first introduce an adaptive multi-kernel-based estimation process to estimate the "shrunk" positions and speeds of trajectory points.
no code implementations • CVPR 2014 • Hongteng Xu, Hongyuan Zha, Mark A. Davenport
In this paper, we present a novel method to synthesize dynamic texture sequences from extremely few samples, e.g., merely two possibly disparate frames, leveraging both Markov Random Fields (MRFs) and manifold learning.