no code implementations • EMNLP 2021 • Tao Zhang, Congying Xia, Philip S. Yu, Zhiwei Liu, Shu Zhao
Cross-domain Named Entity Recognition (NER) transfers the NER knowledge from high-resource domains to the low-resource target domain.
no code implementations • EMNLP 2020 • Shen Wang, Xiaokai Wei, Cicero Nogueira dos santos, Zhiguo Wang, Ramesh Nallapati, Andrew Arnold, Bing Xiang, Philip S. Yu
Existing knowledge graph embedding approaches concentrate on modeling symmetry/asymmetry, inversion, and composition typed relations but overlook the hierarchical nature of relations.
no code implementations • 31 Jan 2023 • Hengrui Zhang, Shen Wang, Vassilis N. Ioannidis, Soji Adeshina, Jiani Zhang, Xiao Qin, Christos Faloutsos, Da Zheng, George Karypis, Philip S. Yu
Graph Neural Networks (GNNs) are currently dominating in modeling graph-structure data, while their high reliance on graph structure for inference significantly impedes them from widespread applications.
no code implementations • 22 Jan 2023 • Lilin Zhang, Ning Yang, Yanchao Sun, Philip S. Yu
Second, the existing AT methods often achieve adversarial robustness at the expense of standard generalizability (i. e., the accuracy on natural examples) because they make a tradeoff between them.
no code implementations • 10 Jan 2023 • Xiaohan Li, Yuqing Liu, Zheng Liu, Philip S. Yu
TA-HGAT is built in a hyperbolic space to learn the hierarchical structure of session graphs.
no code implementations • 30 Dec 2022 • Qingyun Sun, JianXin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu
Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships.
1 code implementation • 29 Dec 2022 • Chunkai Zhang, Yuting Yang, Zilin Du, Wensheng Gan, Philip S. Yu
High-utility sequential pattern mining (HUSPM) has emerged as an important topic due to its wide application and considerable popularity.
no code implementations • 20 Dec 2022 • Gengsen Huang, Wensheng Gan, Philip S. Yu
An algorithm called Sequence Utility Maximization with Utility occupancy measure (SUMU) is proposed.
no code implementations • 8 Dec 2022 • Hengrui Zhang, Qitian Wu, Yu Wang, Shaofeng Zhang, Junchi Yan, Philip S. Yu
Contrastive learning methods based on InfoNCE loss are popular in node representation learning tasks on graph-structured data.
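For readers less familiar with this loss family, a minimal InfoNCE sketch over paired node embeddings is shown below; the temperature value and the in-batch-negatives setup are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE with in-batch negatives: row i of z1 and row i of z2 form the
    positive pair; every other row in the batch serves as a negative."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```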
no code implementations • 1 Dec 2022 • Jiangshu Du, Wenpeng Yin, Congying Xia, Philip S. Yu
To deal with the two issues, this work first proposes a contextualized TE model (Context-TE) that appends the other k options as the context of the current (P, H) modeling.
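As a rough, hypothetical illustration of this appending strategy, the snippet below concatenates the premise, the current hypothesis, and the remaining k options into a single model input; the separator token and field ordering are assumptions, not the paper's exact format.

```python
def build_context_te_input(premise: str, hypothesis: str,
                           other_options: list, sep: str = " [SEP] ") -> str:
    """Append the other k options as extra context for the current (P, H) pair."""
    context = " | ".join(other_options)
    return sep.join([premise, hypothesis, context])

# Example usage with made-up sentences.
print(build_context_te_input(
    "A man is playing a guitar.",
    "A person is making music.",
    ["A person is sleeping.", "A dog is barking."]))
```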
no code implementations • 30 Nov 2022 • Li Sun, Junda Ye, Hao Peng, Feiyang Wang, Philip S. Yu
On the one hand, existing methods work with the zero-curvature Euclidean space, and largely ignore the fact that curvature varies over the incoming graph sequence.
1 code implementation • 18 Nov 2022 • Liangwei Yang, Shengjie Wang, Yunzhe Tao, Jiankai Sun, Xiaolong Liu, Philip S. Yu, Taiqing Wang
Graph Neural Network (GNN) based recommender systems have been attracting more and more attention in recent years due to their excellent performance in accuracy.
1 code implementation • 14 Nov 2022 • Liangwei Yang, Shen Wang, Jibing Gong, Shaojie Zheng, Shuying Du, Zhiwei Liu, Philip S. Yu
To fill this gap, in this paper, we explore the rich, heterogeneous relationship among items and propose a new KG-enhanced recommendation model called Collaborative Meta-Knowledge Enhanced Recommender System (MetaKRec).
no code implementations • 11 Nov 2022 • Xuming Hu, Shiao Meng, Chenwei Zhang, Xiangli Yang, Lijie Wen, Irwin King, Philip S. Yu
Low-Resource Information Extraction (LRIE) strives to use unsupervised data, reducing the required resources and human annotation.
1 code implementation • 2 Nov 2022 • Mingdai Yang, Zhiwei Liu, Liangwei Yang, Xiaolong Liu, Chen Wang, Hao Peng, Philip S. Yu
PA layers efficiently learn the relatedness of non-neighbor nodes to improve the information propagation to users.
no code implementations • 28 Oct 2022 • Byung-Hak Kim, Zhongfen Deng, Philip S. Yu, Varun Ganapathi
The medical codes prediction problem from clinical notes has received substantial interest in the NLP community, and several recent studies have shown the state-of-the-art (SOTA) code prediction results of full-fledged deep learning-based methods.
1 code implementation • 24 Oct 2022 • Ziwei Fan, Zhiwei Liu, Chen Wang, Peijie Huang, Hao Peng, Philip S. Yu
However, it remains a significant challenge to model auxiliary item relationships in SR. To simultaneously model high-order item-item transitions in sequences and auxiliary item relationships, we propose a Multi-relational Transformer capable of modeling auxiliary item relationships for SR (MT4SR).
no code implementations • 19 Oct 2022 • Xuming Hu, Yong Jiang, Aiwei Liu, Zhongqiang Huang, Pengjun Xie, Fei Huang, Lijie Wen, Philip S. Yu
To alleviate the excessive reliance on the dependency order among entities in existing augmentation paradigms, we develop an entity-to-text (rather than text-to-entity) data augmentation method named EnTDA, which decouples the dependencies between entities by adding, deleting, replacing, and swapping entities. These augmented data are then adopted to bootstrap the generalization ability of the NER model.
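To make these entity-level operations concrete, here is a small hypothetical sketch of one of them (swapping two entities in an entity list); the authors' actual pipeline then generates new text from such perturbed lists, which is not shown here.

```python
import random

def swap_entities(entities, seed=None):
    """Return a copy of the entity list with two randomly chosen entities swapped,
    one of the add/delete/replace/swap perturbations an augmenter could apply."""
    rng = random.Random(seed)
    augmented = list(entities)
    if len(augmented) >= 2:
        i, j = rng.sample(range(len(augmented)), 2)
        augmented[i], augmented[j] = augmented[j], augmented[i]
    return augmented

print(swap_entities(["New York", "Barack Obama", "Google"], seed=0))
```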
no code implementations • 13 Oct 2022 • Jianpeng Chen, Yawen Ling, Jie Xu, Yazhou Ren, Shudong Huang, Xiaorong Pu, Zhifeng Hao, Philip S. Yu, Lifang He
The critical point of MGC is to better utilize the view-specific and view-common information in features and graphs of multiple views.
no code implementations • 9 Oct 2022 • Yazhou Ren, Jingyu Pu, Zhimeng Yang, Jie Xu, Guofeng Li, Xiaorong Pu, Philip S. Yu, Lifang He
Finally, we discuss the open challenges and potential future opportunities in different fields of deep clustering.
no code implementations • 27 Sep 2022 • Chunkai Zhang, Maohua Lyu, Wensheng Gan, Philip S. Yu
TotalSR creates a utility table that can efficiently calculate antecedent support and a utility prefix sum list that can compute the remaining utility in O(1) time for a sequence.
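The prefix-sum idea can be illustrated independently of the paper's exact data structures: precompute cumulative utilities once, then answer any remaining-utility query in O(1). The snippet below is a generic sketch with made-up names, not the TotalSR implementation.

```python
from itertools import accumulate

def build_remaining_utility(utilities):
    """remaining[i] = total utility of the items at positions i..end of the sequence."""
    return list(reversed(list(accumulate(reversed(utilities)))))

def remaining_utility(remaining, pos):
    """O(1) lookup of the utility still obtainable from position pos onward."""
    return remaining[pos] if pos < len(remaining) else 0.0

utils = [3.0, 1.0, 4.0, 2.0]
rem = build_remaining_utility(utils)   # [10.0, 7.0, 6.0, 2.0]
print(remaining_utility(rem, 2))       # 6.0
```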
no code implementations • 27 Sep 2022 • Yao Chen, Wensheng Gan, Yongdong Wu, Philip S. Yu
Contrast pattern mining (CPM) is an important and popular subfield of data mining.
no code implementations • COLING 2022 • Xuming Hu, Zhijiang Guo, Yu Fu, Lijie Wen, Philip S. Yu
A scene graph is a semantic representation that expresses the objects, attributes, and relationships between objects in a scene.
no code implementations • 4 Sep 2022 • Jiaqian Ren, Lei Jiang, Hao Peng, Lingjuan Lyu, Zhiwei Liu, Chaochao Chen, Jia Wu, Xu Bai, Philip S. Yu
Integrating multiple online social networks (OSNs) has important implications for many downstream social mining tasks, such as user preference modelling, recommendation, and link prediction.
no code implementations • 30 Aug 2022 • Li Sun, Junda Ye, Hao Peng, Philip S. Yu
To bridge this gap, we make the first attempt to study the problem of self-supervised temporal graph representation learning in the general Riemannian space, supporting the time-varying curvature to shift among hyperspherical, Euclidean and hyperbolic spaces.
1 code implementation • 27 Aug 2022 • Yu Wang, Hengrui Zhang, Zhiwei Liu, Liangwei Yang, Philip S. Yu
Then we propose Contrastive Variational AutoEncoder (ContrastVAE in short), a two-branched VAE model with contrastive regularization as an embodiment of ContrastELBO for sequential recommendation.
no code implementations • 27 Aug 2022 • Jiahui Chen, Xu Guo, Wensheng Gan, Shichen Wan, Philip S. Yu
Compared with traditional utility mining, OSUM can find more practical and meaningful patterns in real-life applications.
1 code implementation • 17 Aug 2022 • Qingyun Sun, JianXin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng Ji, Qian Li, Philip S. Yu
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes, which significantly damages the performance of GNNs.
1 code implementation • 9 Aug 2022 • Ruitong Zhang, Hao Peng, Yingtong Dou, Jia Wu, Qingyun Sun, Jingyi Zhang, Philip S. Yu
DBSCAN is widely used in many scientific and engineering fields because of its simplicity and practicality.
1 code implementation • 21 Jun 2022 • Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu
To bridge this gap, we present, to the best of our knowledge, the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs, called BOND, with the following highlights.
no code implementations • 15 Jun 2022 • Yue Wang, Yao Wan, Lu Bai, Lixin Cui, Zhuo Xu, Ming Li, Philip S. Yu, Edwin R Hancock
To alleviate the challenges of building Knowledge Graphs (KG) from scratch, a more general task is to enrich a KG using triples from an open corpus, where the obtained triples contain noisy entities and relations.
no code implementations • 9 Jun 2022 • Wensheng Gan, Gengsen Huang, Jian Weng, Tianlong Gu, Philip S. Yu
In this paper, we provide the relevant definitions of target sequential rule and formulate the problem of targeted sequential rule mining.
1 code implementation • NAACL 2022 • Xuming Hu, Zhijiang Guo, Guanyu Wu, Aiwei Liu, Lijie Wen, Philip S. Yu
The explosion of misinformation spreading in the media ecosystem calls for automated fact-checking.
1 code implementation • 3 Jun 2022 • Yizhen Zheng, Shirui Pan, Vincent CS Lee, Yu Zheng, Philip S. Yu
Instead of similarity computation, GGD directly discriminates two groups of node samples with a very simple binary cross-entropy loss.
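A minimal sketch of group discrimination with binary cross-entropy is given below; the way the two node groups are produced (e.g., original versus corrupted graph) and the simple sum used as a scoring head are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def group_discrimination_loss(pos_embed: torch.Tensor,
                              neg_embed: torch.Tensor) -> torch.Tensor:
    """Score each node embedding with a simple aggregation (stand-in for a projector),
    then separate the positive group (label 1) from the negative group (label 0)."""
    logits = torch.cat([pos_embed.sum(dim=-1), neg_embed.sum(dim=-1)])
    labels = torch.cat([torch.ones(pos_embed.size(0)), torch.zeros(neg_embed.size(0))])
    return F.binary_cross_entropy_with_logits(logits, labels)
```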
no code implementations • 31 May 2022 • Shu'ang Li, Xuming Hu, Li Lin, Aiwei Liu, Lijie Wen, Philip S. Yu
Natural Language Inference (NLI) is a growingly essential task in natural language understanding, which requires inferring the relationship between the sentence pairs (premise and hypothesis).
no code implementations • 24 May 2022 • Jiaqian Ren, Lei Jiang, Hao Peng, Zhiwei Liu, Jia Wu, Philip S. Yu
To incorporate temporal information into the message passing scheme, we introduce a novel temporal-aware aggregator which assigns weights to neighbours according to an adaptive time exponential decay formula.
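A minimal sketch of exponential time-decay neighbour weighting is shown below, assuming the decay rate is a learnable positive scalar; the paper's adaptive formula may differ in its exact parameterisation.

```python
import torch

def time_decay_weights(time_deltas: torch.Tensor, decay_rate: torch.Tensor) -> torch.Tensor:
    """Weight each neighbour by exp(-decay_rate * time_delta) and normalise.
    time_deltas: (num_neighbours,) non-negative elapsed times since each interaction."""
    weights = torch.exp(-decay_rate.clamp(min=0.0) * time_deltas)
    return weights / weights.sum().clamp(min=1e-12)

print(time_decay_weights(torch.tensor([0.0, 2.0, 10.0]), torch.tensor(0.5)))
```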
1 code implementation • NAACL 2022 • Shuliang Liu, Xuming Hu, Chenwei Zhang, Shu'ang Li, Lijie Wen, Philip S. Yu
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
1 code implementation • Findings (NAACL) 2022 • Yuwei Cao, William Groves, Tanay Kumar Saha, Joel R. Tetreault, Alex Jaimes, Hao Peng, Philip S. Yu
To date, work in this area has mostly focused on English as there is a scarcity of labeled data for other languages.
1 code implementation • 26 Apr 2022 • Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, George H. Chen, Zhihao Jia, Philip S. Yu
PyGOD is an open-source Python library for detecting outliers on graph data.
no code implementations • 1 Apr 2022 • TingTing Liang, Yixuan Jiang, Congying Xia, Ziqiang Zhao, Yuyu Yin, Philip S. Yu
Recently, conversational OpenQA is proposed to address these issues with the abundant contextual information in the conversation.
1 code implementation • 25 Mar 2022 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
no code implementations • 18 Mar 2022 • Xusheng Zhao, Jia Wu, Hao Peng, Amin Beheshti, Jessica J. M. Monaghan, David Mcalpine, Heivet Hernandez-Perez, Mark Dras, Qiong Dai, Yangyang Li, Philip S. Yu, Lifang He
Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), enable us to model the human brain as a brain network or connectome.
no code implementations • 12 Mar 2022 • Zhi-Hong Deng, Chang-Dong Wang, Ling Huang, Jian-Huang Lai, Philip S. Yu
G$^3$SR decomposes the session-based recommendation workflow into two steps.
1 code implementation • Findings (EMNLP) 2021 • Wenting Zhao, Ye Liu, Yao Wan, Philip S. Yu
Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data.
no code implementations • 26 Feb 2022 • Gengsen Huang, Wensheng Gan, Philip S. Yu
Moreover, to improve the efficiency of TaSPM on large-scale datasets and multi-item sequence datasets, we propose several pruning strategies to reduce meaningless operations in the mining process.
no code implementations • 26 Feb 2022 • Wensheng Gan, Guoting Chen, Hongzhi Yin, Philippe Fournier-Viger, Chien-Ming Chen, Philip S. Yu
To fill this gap, in this paper, we first propose a general profit-oriented framework to address the problem of revenue maximization based on economic behavior, and compute the On-shelf Popular and most Profitable Products (OPPPs) for targeted marketing.
1 code implementation • 17 Feb 2022 • Sixiao Zhang, Hongxu Chen, Haoran Yang, Xiangguo Sun, Philip S. Yu, Guandong Xu
In this paper, we propose Graph Masked Autoencoders (GMAEs), a self-supervised transformer-based model for learning graph representations.
no code implementations • 14 Feb 2022 • Xin Zheng, Yixin Liu, Shirui Pan, Miao Zhang, Di Jin, Philip S. Yu
Recent years have witnessed fast developments of graph neural networks (GNNs) that have benefited myriads of graph analytic tasks and applications.
no code implementations • 8 Feb 2022 • Xiaoqin Pan, Xuan Lin, Dongsheng Cao, Xiangxiang Zeng, Philip S. Yu, Lifang He, Ruth Nussinov, Feixiong Cheng
Drug development is time-consuming and expensive.
1 code implementation • 7 Feb 2022 • Liangwei Yang, Zhiwei Liu, Yu Wang, Chen Wang, Ziwei Fan, Philip S. Yu
We conduct a comprehensive analysis of users' online game behaviors, which motivates the necessity of handling those three characteristics in the online game recommendation.
no code implementations • 25 Jan 2022 • Daokun Zhang, Jie Yin, Philip S. Yu
To generate informative node embeddings for link prediction, structural context prediction is leveraged as a self-supervised learning task to boost the link prediction performance.
no code implementations • 19 Jan 2022 • Haoran Yang, Hongxu Chen, Shirui Pan, Lin Li, Philip S. Yu, Guandong Xu
In addition, we conduct extensive experiments to analyze the impact of different graph encoders on DSGC, giving insights about how to better leverage the advantages of contrastive learning between different spaces.
1 code implementation • 16 Jan 2022 • Ziwei Fan, Zhiwei Liu, Alice Wang, Zahra Nazari, Lei Zheng, Hao Peng, Philip S. Yu
We further argue that BPR loss has no constraint on positive and sampled negative items, which misleads the optimization.
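For context, the standard BPR loss only maximises the score gap between a positive item and a sampled negative item, which is why the absolute scores remain unconstrained; a minimal version is sketched below.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Bayesian Personalized Ranking: -log sigmoid(s_pos - s_neg).
    Only the difference matters, so individual score magnitudes can drift freely."""
    return -F.logsigmoid(pos_scores - neg_scores).mean()

print(bpr_loss(torch.tensor([2.0, 0.5]), torch.tensor([1.0, 1.5])))
```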
no code implementations • 16 Jan 2022 • Ziwen Du, Ning Yang, Zhonghua Yu, Philip S. Yu
To address these challenges, we propose a novel model called Temporary Interest Aware Recommendation (TIARec), which can distinguish atypical interactions from normal ones without supervision and capture the temporary interest as well as the general preference of users.
no code implementations • 16 Jan 2022 • Xiaoyun Zhao, Ning Yang, Philip S. Yu
Meanwhile, we propose a Multi-Domain Adaptation Network (MDAN) for MSDCR to capture a user's domain-invariant aspect preference.
no code implementations • 15 Jan 2022 • Yuefei Lyu, Xiaoyu Yang, Jiaxin Liu, Philip S. Yu, Sihong Xie, Xi Zhang
To discover subtle vulnerabilities, we design a powerful attacking algorithm to camouflage rumors in social networks based on reinforcement learning that can interact with and attack any black-box detectors.
1 code implementation • 16 Dec 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications.
no code implementations • 14 Dec 2021 • Yiqi Wang, Chaozhuo Li, Zheng Liu, Mingzheng Li, Jiliang Tang, Xing Xie, Lei Chen, Philip S. Yu
Thus, graph pre-training has the great potential to alleviate data sparsity in GNN-based recommendations.
no code implementations • 10 Dec 2021 • Li Sun, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, Philip S. Yu
Instead of working on one single constant-curvature space, we construct a mixed-curvature space via the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms for learning and fusing the representations across these component spaces.
no code implementations • 29 Nov 2021 • Gengsen Huang, Wensheng Gan, Jian Weng, Philip S. Yu
High utility sequential pattern mining (HUSPM) is one kind of utility-driven mining.
no code implementations • 28 Nov 2021 • Xiaohan Li, Zhiwei Liu, Stephen Guo, Zheng Liu, Hao Peng, Philip S. Yu, Kannan Achan
In this paper, we propose a novel Reinforced Attentive Multi-relational Graph Neural Network (RAM-GNN) to the pre-train user and item embeddings on the user and item graph prior to the recommendation step.
no code implementations • 25 Nov 2021 • Chuanpan Zheng, Xiaoliang Fan, Shirui Pan, Zonghan Wu, Cheng Wang, Philip S. Yu
In such a graph, the correlations between different nodes at different time steps are not explicitly reflected, which may restrict the learning ability of graph neural networks.
no code implementations • 24 Nov 2021 • Yicong Li, Hongxu Chen, Yile Li, Lin Li, Philip S. Yu, Guandong Xu
Recent advances in path-based explainable recommendation systems have attracted increasing attention thanks to the rich information provided by knowledge graphs.
no code implementations • 21 Nov 2021 • Zhiwei Liu, Liangwei Yang, Ziwei Fan, Hao Peng, Philip S. Yu
However, they all require centralized storage of the social links and item interactions of users, which leads to privacy concerns.
no code implementations • 16 Nov 2021 • Chen Wang, Yueqing Liang, Zhiwei Liu, Tao Zhang, Philip S. Yu
Then, we transfer the pre-trained graph encoder to initialize the node embeddings on the target domain, which benefits the fine-tuning of the single domain recommender system on the target domain.
no code implementations • 29 Oct 2021 • Nooshin Mojab, Philip S. Yu, Joelle A. Hallak, Darvin Yi
The success of deep learning methods relies heavily on the availability of a large amount of data.
1 code implementation • Findings (EMNLP) 2021 • Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, Philip S. Yu
In this work, we propose Dense Hierarchical Retrieval (DHR), a hierarchical framework that can generate accurate dense representations of passages by utilizing both macroscopic semantics in the document and microscopic semantics specific to each passage.
1 code implementation • 15 Oct 2021 • Xingcheng Fu, JianXin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu
Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and thus are more effective at capturing the hierarchical structures of graphs in node representation learning.
no code implementations • 14 Oct 2021 • Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that pre-training and meta-training focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift.
1 code implementation • 13 Oct 2021 • Jiangshu Du, Yingtong Dou, Congying Xia, Limeng Cui, Jing Ma, Philip S. Yu
The COVID-19 pandemic poses a great threat to global public health.
1 code implementation • EMNLP 2021 • Ye Liu, Jian-Guo Zhang, Yao Wan, Congying Xia, Lifang He, Philip S. Yu
To capture the semantic graph structure from raw text, most existing summarization approaches are built on GNNs with a pre-trained model.
no code implementations • 4 Oct 2021 • Chen Wang, Yingtong Dou, Min Chen, Jia Chen, Zhiwei Liu, Philip S. Yu
The successes of most previous methods heavily rely on rich node features and high-fidelity labels.
no code implementations • 29 Sep 2021 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
no code implementations • 29 Sep 2021 • Hengrui Zhang, Qitian Wu, Shaofeng Zhang, Junchi Yan, David Wipf, Philip S. Yu
In this paper, we propose ESCo (Effective and Scalable Contrastive), a new contrastive framework which is essentially an instantiation of the Information Bottleneck principle under self-supervised learning settings.
1 code implementation • EMNLP 2021 • Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, Philip S. Yu
Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce.
no code implementations • 7 Sep 2021 • Haoran Yang, Hongxu Chen, Lin Li, Philip S. Yu, Guandong Xu
They utilize simple and fixed schemes, such as neighborhood information aggregation or mathematical operations on vectors, to fuse the embeddings of different user behaviors into a unified embedding that represents a user's behavioral patterns and is then used in downstream recommendation tasks.
1 code implementation • 26 Aug 2021 • Yu Wang, Zhiwei Liu, Ziwei Fan, Lichao Sun, Philip S. Yu
In the information explosion era, recommender systems (RSs) are widely studied and applied to discover user-preferred information.
1 code implementation • 14 Aug 2021 • Ziwei Fan, Zhiwei Liu, Jiawei Zhang, Yun Xiong, Lei Zheng, Philip S. Yu
Therefore, we propose to unify sequential patterns and temporal collaborative signals to improve the quality of recommendation, which is rather challenging.
1 code implementation • 14 Aug 2021 • Zhiwei Liu, Yongjun Chen, Jia Li, Philip S. Yu, Julian McAuley, Caiming Xiong
In this paper, we investigate the application of contrastive Self-Supervised Learning (SSL) to the sequential recommendation, as a way to alleviate some of these issues.
no code implementations • 5 Jul 2021 • Qian Li, JianXin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, Philip S. Yu
Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for a comprehensive and updated survey.
no code implementations • 30 Jun 2021 • Qiaomin Yi, Ning Yang, Philip S. Yu
First, the noise injection based methods often draw the noise from a fixed noise distribution given in advance, while in real world, the noise distributions of different users and items may differ from each other due to personal behaviors and item usage patterns.
1 code implementation • 23 Jun 2021 • Qian Li, Hao Peng, JianXin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, Zheng Wang
Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually.
1 code implementation • NeurIPS 2021 • Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, Philip S. Yu
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
1 code implementation • 11 Jun 2021 • Ziwei Fan, Zhiwei Liu, Lei Zheng, Shen Wang, Philip S. Yu
We use Elliptical Gaussian distributions to describe items and sequences with uncertainty.
1 code implementation • 8 Jun 2021 • Siddharth Bhatia, Mohit Wadhwa, Kenji Kawaguchi, Neil Shah, Philip S. Yu, Bryan Hooi
This higher-order sketch has the useful property of preserving the dense subgraph structure (dense subgraphs in the input turn into dense submatrices in the data structure).
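The dense-subgraph-to-dense-submatrix property can be pictured with a count-min-style toy sketch: both endpoints of each edge are hashed into a small number of buckets, so edges concentrated among a few nodes pile up in a few cells. This is a generic illustration, not the data structure proposed in the paper.

```python
import numpy as np

def sketch_edges(edges, num_buckets=16, seed=0):
    """Hash both endpoints of every edge into buckets and count co-occurrences.
    A dense subgraph (many edges among few nodes) appears as a dense submatrix."""
    rng = np.random.default_rng(seed)
    salt = int(rng.integers(1, 2**31 - 1))
    sketch = np.zeros((num_buckets, num_buckets), dtype=np.int64)
    for u, v in edges:
        sketch[(u * salt) % num_buckets, (v * salt) % num_buckets] += 1
    return sketch

edges = [(0, 1), (1, 2), (2, 0), (0, 2), (5, 99)]
print(sketch_edges(edges).sum())   # all 5 edges are retained in a 16x16 matrix
```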
1 code implementation • 8 Jun 2021 • JianGuo Zhang, Kazuma Hashimoto, Yao Wan, Zhiwei Liu, Ye Liu, Caiming Xiong, Philip S. Yu
Pre-trained Transformer-based models were reported to be robust in intent classification.
no code implementations • 26 May 2021 • Xing Su, Shan Xue, Fanzhen Liu, Jia Wu, Jian Yang, Chuan Zhou, Wenbin Hu, Cecile Paris, Surya Nepal, Di Jin, Quan Z. Sheng, Philip S. Yu
A community reveals the features and connections of its members that are different from those in other communities in a network.
1 code implementation • 22 May 2021 • JianXin Li, Xingcheng Fu, Hao Peng, Senzhang Wang, Shijie Zhu, Qingyun Sun, Philip S. Yu, Lifang He
With the prevalence of graph data in real-world applications, many methods have been proposed in recent years to learn high-quality graph embedding vectors for various types of graphs.
1 code implementation • 13 May 2021 • Shoujin Wang, Liang Hu, Yan Wang, Xiangnan He, Quan Z. Sheng, Mehmet A. Orgun, Longbing Cao, Francesco Ricci, Philip S. Yu
Recent years have witnessed the fast development of the emerging topic of Graph Learning based Recommender Systems (GLRS).
1 code implementation • 7 May 2021 • Gongxu Luo, JianXin Li, Jianlin Su, Hao Peng, Carl Yang, Lichao Sun, Philip S. Yu, Lifang He
Based on them, we design MinGE to directly calculate the ideal node embedding dimension for any graph.
no code implementations • 7 May 2021 • Mehrnaz Najafi, Lifang He, Philip S. Yu
Due to inevitable sensor failures, data in each view may contain errors.
1 code implementation • 2 May 2021 • Zhiwei Liu, Ziwei Fan, Yu Wang, Philip S. Yu
We first pre-train a transformer with sequences in a reverse direction to predict prior items.
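A hedged sketch of the reverse-direction pre-training data is shown below: sequences are simply flipped so a left-to-right model learns to predict earlier items. How the resulting weights are reused afterwards is not shown, and the pair construction is an illustrative assumption.

```python
def make_reverse_pretraining_pairs(sequence):
    """Build (reversed context, next-in-reverse item) pairs, i.e. targets that are
    prior items in the original chronological order."""
    rev = sequence[::-1]
    return [(rev[:i], rev[i]) for i in range(1, len(rev))]

# Original sequence 1 -> 2 -> 3 -> 4 becomes contexts over [4, 3, 2, 1].
print(make_reverse_pretraining_pairs([1, 2, 3, 4]))
```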
1 code implementation • 25 Apr 2021 • Yingtong Dou, Kai Shu, Congying Xia, Philip S. Yu, Lichao Sun
The majority of existing fake news detection algorithms focus on mining news content and/or the surrounding exogenous context for discovering deceptive signals; while the endogenous preference of a user when he/she decides to spread a piece of fake news or not is ignored.
Ranked #1 on Graph Classification on UPFD-GOS
no code implementations • 16 Apr 2021 • Yu Wang, Lifu Huang, Philip S. Yu, Lichao Sun
Membership inference attacks (MIAs) infer whether a specific data record is used for target model training.
1 code implementation • 16 Apr 2021 • Hao Peng, Ruitong Zhang, Yingtong Dou, Renyu Yang, Jingyi Zhang, Philip S. Yu
To avoid the embedding over-assimilation among different types of nodes, we employ a label-aware neural similarity measure to ascertain the most similar neighbors based on node attributes.
Ranked #2 on Node Classification on Amazon-Fraud
1 code implementation • 16 Apr 2021 • JianXin Li, Hao Peng, Yuwei Cao, Yingtong Dou, Hekai Zhang, Philip S. Yu, Lifang He
Furthermore, they cannot fully capture the content-based correlations between nodes, as they either do not use the self-attention mechanism or only use it to consider the immediate neighbors of each node, ignoring the higher-order neighbors.
1 code implementation • 14 Apr 2021 • Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S. Yu, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, Salman Avestimehr
FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.
1 code implementation • NAACL 2021 • Zhongfen Deng, Hao Peng, Dongxiao He, JianXin Li, Philip S. Yu
The second one encourages the structure encoder to learn better representations with desired characteristics for all labels which can better handle label imbalance in hierarchical text classification.
no code implementations • 6 Apr 2021 • Li Sun, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, Philip S. Yu
To model the uncertainty, we devise a hyperbolic graph variational autoencoder built upon the proposed TGNN to generate stochastic node representations of hyperbolic normal distributions.
1 code implementation • 2 Apr 2021 • Hao Peng, JianXin Li, Yangqiu Song, Renyu Yang, Rajiv Ranjan, Philip S. Yu, Lifang He
Third, we propose a streaming social event detection and evolution discovery framework for HINs based on meta-path similarity search, historical information about meta-paths, and heterogeneous DBSCAN clustering method.
no code implementations • 30 Mar 2021 • Nooshin Mojab, Vahid Noroozi, Abdullah Aleem, Manoj P. Nallabothula, Joseph Baker, Dimitri T. Azar, Mark Rosenblatt, RV Paul Chan, Darvin Yi, Philip S. Yu, Joelle A. Hallak
In this paper, we present a new multi-modal longitudinal ophthalmic imaging dataset, the Illinois Ophthalmic Database Atlas (I-ODA), with the goal of advancing state-of-the-art computer vision applications in ophthalmology, and improving upon the translatable capacity of AI based applications across different clinical settings.
no code implementations • 27 Mar 2021 • Mehrnaz Najafi, Philip S. Yu
In this paper, we propose novel Robust Graph Convolutional Neural Networks for possibly erroneous single-view or multi-view data, where data may come from multiple sources.
2 code implementations • 17 Mar 2021 • Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
This paper models these structures by presenting PredRNN, a new recurrent network, in which a pair of memory cells are explicitly decoupled, operate in nearly independent transition manners, and finally form unified representations of the complex environment.
Ranked #1 on Video Prediction on KTH (Cond metric)
2 code implementations • 14 Mar 2021 • Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.
1 code implementation • 10 Mar 2021 • Zi-Yuan Hu, Jin Huang, Zhi-Hong Deng, Chang-Dong Wang, Ling Huang, Jian-Huang Lai, Philip S. Yu
Representation learning tries to learn a common low dimensional space for the representations of users and items.
1 code implementation • 4 Mar 2021 • Fanjin Zhang, Jie Tang, Xueyi Liu, Zhenyu Hou, Yuxiao Dong, Jing Zhang, Xiao Liu, Ruobing Xie, Kai Zhuang, Xu Zhang, Leyu Lin, Philip S. Yu
"Top Stories" is a novel friend-enhanced recommendation engine in WeChat, in which users can read articles based on preferences of both their own and their friends.
Graph Representation Learning • Social and Information Networks
1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu
Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.
3 code implementations • 27 Feb 2021 • Yixin Liu, Ming Jin, Shirui Pan, Chuan Zhou, Yu Zheng, Feng Xia, Philip S. Yu
Deep learning on graphs has attracted significant interests recently.
no code implementations • 22 Jan 2021 • Ye Liu, Yao Wan, Jian-Guo Zhang, Wenting Zhao, Philip S. Yu
In this paper, we claim that the syntactic and semantic structures among natural language are critical for non-autoregressive machine translation and can further improve the performance.
2 code implementations • 21 Jan 2021 • Yuwei Cao, Hao Peng, Jia Wu, Yingtong Dou, JianXin Li, Philip S. Yu
The complexity and streaming nature of social messages make it appealing to address social event detection in an incremental learning setting, where acquiring, preserving, and extending knowledge are major concerns.
no code implementations • 19 Jan 2021 • Jianguo Chen, Kenli Li, Keqin Li, Philip S. Yu, Zeng Zeng
We model the DL-PBS system from the perspective of CPS and use deep learning to predict the layout of bicycle parking spots and the dynamic demand of bicycle dispatching.
no code implementations • 19 Jan 2021 • Jianguo Chen, Kenli Li, Keqin Li, Philip S. Yu, Zeng Zeng
The BSDP system contains four modules: bicycle drop-off location clustering, bicycle-station graph modeling, bicycle-station location prediction, and bicycle-station layout recommendation.
no code implementations • 17 Jan 2021 • Zheng Liu, Xiaohan Li, Hao Peng, Lifang He, Philip S. Yu
EHRs contain multiple entities and relations and can be viewed as a heterogeneous graph.
1 code implementation • 8 Jan 2021 • Xiaohan Li, Mengqi Zhang, Shu Wu, Zheng Liu, Liang Wang, Philip S. Yu
Here we propose Dynamic Graph Collaborative Filtering (DGCF), a novel framework leveraging dynamic graphs to capture collaborative and sequential relations of both items and users at the same time.
no code implementations • 3 Jan 2021 • Di Jin, Zhizhi Yu, Pengfei Jiao, Shirui Pan, Dongxiao He, Jia Wu, Philip S. Yu, Weixiong Zhang
We conclude with discussions of the challenges of the field and suggestions of possible directions for future research.
no code implementations • 7 Dec 2020 • Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries.
no code implementations • 30 Nov 2020 • Xiao Wang, Deyu Bo, Chuan Shi, Shaohua Fan, Yanfang Ye, Philip S. Yu
Heterogeneous graphs (HGs), also known as heterogeneous information networks, have become ubiquitous in real-world scenarios; therefore, HG embedding, which aims to learn representations in a lower-dimension space while preserving the heterogeneous structures and semantics for downstream tasks (e.g., node/graph classification, node clustering, link prediction), has drawn considerable attention in recent years.
1 code implementation • 4 Nov 2020 • Zhiwei Liu, Lin Meng, Fei Jiang, Jiawei Zhang, Philip S. Yu
Stacking multiple cross-hop propagation layers and locality layers constitutes the DGCF model, which models high-order CF signals adaptively to the locality of nodes and layers.
1 code implementation • COLING 2020 • Zhongfen Deng, Hao Peng, Congying Xia, JianXin Li, Lifang He, Philip S. Yu
Review rating prediction of text reviews is a rapidly growing technology with a wide range of applications in natural language processing.
2 code implementations • COLING 2020 • Hu Xu, Lei Shu, Philip S. Yu, Bing Liu
Most features in the representation of an aspect are dedicated to the fine-grained semantics of the domain (or product category) and the aspect itself, instead of carrying summarized opinions from its context.
1 code implementation • EMNLP 2020 • Jian-Guo Zhang, Kazuma Hashimoto, Wenhao Liu, Chien-Sheng Wu, Yao Wan, Philip S. Yu, Richard Socher, Caiming Xiong
Intent detection is one of the core components of goal-oriented dialog systems, and detecting out-of-scope (OOS) intents is also a practically important skill.
1 code implementation • 22 Oct 2020 • Zhiwei Liu, Xiaohan Li, Ziwei Fan, Stephen Guo, Kannan Achan, Philip S. Yu
The problem of basket recommendation (BR) is to recommend a ranking list of items to the current basket.
no code implementations • 13 Oct 2020 • Yue Wang, Zhuo Xu, Lu Bai, Yao Wan, Lixin Cui, Qian Zhao, Edwin R. Hancock, Philip S. Yu
To verify the effectiveness of our proposed method, we conduct extensive experiments on four real-world datasets as well as compare our method with state-of-the-art methods.
1 code implementation • Findings (EMNLP) 2021 • Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, Philip S. Yu
To alleviate human efforts from obtaining large-scale annotations, Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Hoang Nguyen, Chenwei Zhang, Congying Xia, Philip S. Yu
Although recent works demonstrate that multi-level matching plays an important role in transferring learned knowledge from seen training classes to novel testing classes, they rely on a static similarity measure and overly fine-grained matching components.
no code implementations • COLING 2020 • Lichao Sun, Congying Xia, Wenpeng Yin, TingTing Liang, Philip S. Yu, Lifang He
Our studies show that mixup is a domain-independent data augmentation technique for pre-trained language models, resulting in significant performance improvement for transformer-based models.
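Mixup itself is a simple, widely used operation: convexly combine pairs of inputs and their labels. The sketch below applies it to fixed-size sentence embeddings with one-hot labels; applying it to hidden states inside a transformer is one variant such studies consider, and the details here are illustrative.

```python
import numpy as np

def mixup(x: np.ndarray, y: np.ndarray, alpha: float = 0.2, seed: int = 0):
    """Combine each example with a randomly permuted partner.
    x: (batch, dim) embeddings; y: (batch, num_classes) one-hot labels."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]
```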
1 code implementation • 28 Sep 2020 • He Huang, Shunta Saito, Yuta Kikuchi, Eiichi Matsumoto, Wei Tang, Philip S. Yu
Motivated by the fact that detecting these rare relations can be critical in real-world applications, this paper introduces a novel integrated framework of classification and ranking to resolve the class imbalance problem in scene graph parsing.
1 code implementation • 26 Sep 2020 • Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu
To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge graph augmented pre-trained language generation model KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output.
no code implementations • 25 Sep 2020 • Tao Zhang, Tianqing Zhu, Jing Li, Mengde Han, Wanlei Zhou, Philip S. Yu
A set of experiments on real-world and synthetic datasets show that our method is able to use unlabeled data to achieve a better trade-off between accuracy and discrimination.
no code implementations • 14 Sep 2020 • Tao Zhang, Tianqing Zhu, Mengde Han, Jing Li, Wanlei Zhou, Philip S. Yu
Extensive experiments show that our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
no code implementations • 30 Aug 2020 • Qingyun Sun, Hao Peng, Jian-Xin Li, Senzhang Wang, Xiangyu Dong, Liangxuan Zhao, Philip S. Yu, Lifang He
Although these attributes may change, an author's co-authors and research topics do not change frequently with time, which means that papers within a period have similar text and relation information in the academic network.
2 code implementations • 24 Aug 2020 • Youwei Liang, Dong Huang, Chang-Dong Wang, Philip S. Yu
To overcome this limitation, we propose a new multi-view graph learning framework that, for the first time, simultaneously and explicitly models multi-view consistency and multi-view inconsistency in a unified objective function. Through this objective, the consistent and inconsistent parts of each single-view graph, as well as the unified graph that fuses the consistent parts, can be iteratively learned.
5 code implementations • 19 Aug 2020 • Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, Philip S. Yu
Finally, the selected neighbors across different relations are aggregated together.
Ranked #4 on Node Classification on Amazon-Fraud
no code implementations • 16 Aug 2020 • Dayong Ye, Tianqing Zhu, Sheng Shen, Wanlei Zhou, Philip S. Yu
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning as a means of preserving the privacy of agents for logistic-like problems.
2 code implementations • 12 Aug 2020 • Hao Peng, Jian-Xin Li, Zheng Wang, Renyu Yang, Mingzhe Liu, Mingming Zhang, Philip S. Yu, Lifang He
As a departure from prior work, Luce organizes the house data in a heterogeneous information network (HIN) where graph nodes are house entities and attributes that are important for house price valuation.
no code implementations • 6 Aug 2020 • Ye Liu, Shaika Chowdhury, Chenwei Zhang, Cornelia Caragea, Philip S. Yu
Unlike most other QA tasks that focus on linguistic understanding, HeadQA requires deeper reasoning involving not only knowledge extraction, but also complex reasoning with healthcare knowledge.
no code implementations • 5 Aug 2020 • Tianqing Zhu, Dayong Ye, Wei Wang, Wanlei Zhou, Philip S. Yu
Artificial Intelligence (AI) has attracted a great deal of attention in recent years.
2 code implementations • 2 Aug 2020 • Qian Li, Hao Peng, Jian-Xin Li, Congying Xia, Renyu Yang, Lichao Sun, Philip S. Yu, Lifang He
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
no code implementations • 10 Jul 2020 • Longbing Cao, Qiang Yang, Philip S. Yu
Financial technology (FinTech) has been playing an increasingly critical role in driving modern economies, society, technology, and many other areas.
2 code implementations • IEEE Transactions on Knowledge and Data Engineering 2020 • Zheng Wang, Xiaojun Ye, Chaokun Wang, Jian Cui, Philip S. Yu
Network embedding, aiming to project a network into a low-dimensional space, is increasingly becoming a focus of network research.
no code implementations • 6 Jul 2020 • Di Jin, Zhizhi Yu, Dongxiao He, Carl Yang, Philip S. Yu, Jiawei Han
Graph neural networks for HIN embeddings typically adopt a hierarchical attention (including node-level and meta-path-level attentions) to capture the information from meta-path-based neighbors.
no code implementations • 4 Jul 2020 • Jianguo Chen, Kenli Li, Zhaolei Zhang, Keqin Li, Philip S. Yu
The COVID-19 pandemic caused by the SARS-CoV-2 virus has spread rapidly worldwide, leading to a global outbreak.
2 code implementations • 23 Jun 2020 • Shen Wang, Jibing Gong, Jinlong Wang, Wenzheng Feng, Hao Peng, Jie Tang, Philip S. Yu
To address this issue, we leverage both content information and context information to learn the representation of entities via graph convolution network.
1 code implementation • 10 Jun 2020 • Chen Li, Xutan Peng, Hao Peng, Jian-Xin Li, Lihong Wang, Philip S. Yu, Lifang He
Recently, graph-based algorithms have drawn much attention because of their impressive success in semi-supervised setups.
1 code implementation • 10 Jun 2020 • Yingtong Dou, Guixiang Ma, Philip S. Yu, Sihong Xie
We experiment on three large review datasets using various state-of-the-art spamming and detection strategies and show that the optimization algorithm can reliably find an equilibrial detector that can robustly and effectively prevent spammers with any mixed spamming strategies from attaining their practical goal.
no code implementations • COLING 2020 • Hu Xu, Seungwhan Moon, Honglei Liu, Pararth Shah, Bing Liu, Philip S. Yu
We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests through a structured and cumulative user memory knowledge graph, to allow for natural interactions and accurate recommendations.
no code implementations • 23 May 2020 • Ting-Ting Liang, Congying Xia, Yuyu Yin, Philip S. Yu
This paper proposes a novel neural network, joint training capsule network (JTCN), for the cold start recommendation task.
1 code implementation • 17 May 2020 • Fanzhen Liu, Shan Xue, Jia Wu, Chuan Zhou, Wenbin Hu, Cecile Paris, Surya Nepal, Jian Yang, Philip S. Yu
As communities represent similar opinions, similar functions, similar purposes, etc., community detection is an important and extremely useful tool in both scientific inquiry and data analytics.
no code implementations • SIGDIAL (ACL) 2020 • Ye Liu, Tao Yang, Zeyu You, Wei Fan, Philip S. Yu
Humans tackle reading comprehension not only based on the given context itself but often rely on the commonsense beyond it.
1 code implementation • 1 May 2020 • Zhiwei Liu, Yingtong Dou, Philip S. Yu, Yutong Deng, Hao Peng
In this paper, we introduce these inconsistencies and design a new GNN framework, $\mathsf{GraphConsis}$, to tackle the inconsistency problem: (1) for the context inconsistency, we propose to combine the context embeddings with node features, (2) for the feature inconsistency, we design a consistency score to filter the inconsistent neighbors and generate corresponding sampling probability, and (3) for the relation inconsistency, we learn a relation attention weights associated with the sampled nodes.
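As a rough illustration of step (2), a consistency score between a node and each neighbour can be turned into a sampling distribution that filters out inconsistent neighbours; the scoring function used here (a Gaussian of the embedding distance) and the threshold are assumptions, not necessarily the paper's exact definitions.

```python
import torch

def neighbor_sampling_probs(center: torch.Tensor, neighbors: torch.Tensor,
                            threshold: float = 0.1) -> torch.Tensor:
    """Score each neighbour by exp(-||h_v - h_u||^2), drop those below a threshold,
    and renormalise the rest into sampling probabilities."""
    scores = torch.exp(-((neighbors - center) ** 2).sum(dim=-1))
    scores = torch.where(scores >= threshold, scores, torch.zeros_like(scores))
    return scores / scores.sum().clamp(min=1e-12)

center = torch.zeros(4)
neighbors = torch.stack([torch.zeros(4), torch.ones(4) * 3.0])
print(neighbor_sampling_probs(center, neighbors))   # the distant neighbour gets ~0
```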
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
This paper focuses on learning domain-oriented language models driven by end tasks, which aims to combine the worlds of both general-purpose language models (such as ELMo and BERT) and domain-specific language understanding.
no code implementations • 22 Apr 2020 • Shoujin Wang, Liang Hu, Yan Wang, Xiangnan He, Quan Z. Sheng, Mehmet Orgun, Longbing Cao, Nan Wang, Francesco Ricci, Philip S. Yu
Recent years have witnessed the fast development of the emerging topic of Graph Learning based Recommender Systems (GLRS).
1 code implementation • EMNLP 2020 • Xuming Hu, Chenwei Zhang, Yusong Xu, Lijie Wen, Philip S. Yu
Open relation extraction is the task of extracting open-domain relation facts from natural language sentences.
no code implementations • 1 Mar 2020 • Lichao Sun, Yingbo Zhou, Philip S. Yu, Caiming Xiong
Ensuring the privacy of sensitive data used to train modern machine learning models is of paramount importance in many areas of practice.
1 code implementation • 11 Feb 2020 • Shao-Heng Ko, Hsu-Chao Lai, Hong-Han Shuai, De-Nian Yang, Wang-Chien Lee, Philip S. Yu
Shopping in VR malls has been regarded as a paradigm shift for E-commerce, but most of the conventional VR shopping platforms are designed for a single user.
Data Structures and Algorithms
1 code implementation • 2 Feb 2020 • Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, Philip S. Yu
In this survey, we provide a comprehensive review of knowledge graphs, covering overall research topics about 1) knowledge graph representation learning, 2) knowledge acquisition and completion, 3) temporal knowledge graphs, and 4) knowledge-aware applications, and we summarize recent breakthroughs and perspective directions to facilitate future research.
no code implementations • 18 Jan 2020 • Yuhui Zhao, Ning Yang, Tao Lin, Philip S. Yu
First, the existing works often assume an underlying information diffusion model, which is impractical in the real world due to the complexity of information diffusion.
1 code implementation • 18 Jan 2020 • Huanrui Luo, Ning Yang, Philip S. Yu
Particularly, as the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
1 code implementation • 14 Jan 2020 • Zhiwei Liu, Mengting Wan, Stephen Guo, Kannan Achan, Philip S. Yu
By defining a basket entity to represent the basket intent, we can model this problem as a basket-item link prediction task in the User-Basket-Item (UBI) graph.
no code implementations • 31 Dec 2019 • Vahid Noroozi, Sara Bahaadini, Samira Sheikhi, Nooshin Mojab, Philip S. Yu
There has been a growing concern about the fairness of decision-making systems based on machine learning.
no code implementations • 25 Dec 2019 • Guixiang Ma, Nesreen K. Ahmed, Theodore L. Willke, Philip S. Yu
In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification, clustering, and similarity search.
1 code implementation • 8 Dec 2019 • Zhiyu Yao, Yunbo Wang, Jianmin Wang, Philip S. Yu, Mingsheng Long
This paper introduces video domain generalization where most video classification networks degenerate due to the lack of exposure to the target domains of divergent distributions.
no code implementations • 6 Dec 2019 • Shaika Chowdhury, Chenwei Zhang, Philip S. Yu, Yuan Luo
Distributed representations of medical concepts have been used to support downstream clinical tasks recently.