no code implementations • 22 Nov 2024 • Zewen Long, Shu Wu, Qiang Liu, Liang Wang
With the advancement of large language models (LLMs), researchers have explored various methods to optimally leverage their comprehension and generation capabilities in sequential recommendation scenarios.
no code implementations • 16 Nov 2024 • Yu Peng, Zewen Long, Fangming Dong, Congyi Li, Shu Wu, Kai Chen
In this paper, we introduce two novel jailbreak methods based on mismatched generalization: natural language games and custom language games. Both effectively bypass the safety mechanisms of LLMs, and their many kinds and variants make them hard to defend against, leading to high attack success rates.
no code implementations • 2 Nov 2024 • Qiang Liu, Shaozhen Liu, Xin Sun, Shu Wu, Liang Wang
We attribute this issue to the imbalance between the abundance of tunable parameters and the scarcity of labeled molecules, and the lack of contextual perceptiveness in the encoders.
no code implementations • 2 Nov 2024 • Shu Wu, Qiang Liu, Yanqiao Zhu, Xiang Tao, Mengdi Zhang, Liang Wang
To address the aforementioned issues, this paper presents a novel Bi-level Graph Structure Learning (BiGSL) for next POI recommendation.
1 code implementation • 21 Oct 2024 • Han Huang, Yuqi Huo, Zijia Zhao, Haoyu Lu, Shu Wu, Bingning Wang, Qiang Liu, WeiPeng Chen, Liang Wang
A critical factor in training MLLMs is the quality of image-text pairs within multimodal pretraining datasets.
no code implementations • 10 Oct 2024 • Mengqi Zhang, Xiaotian Ye, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen
Knowledge editing has been proposed as an effective method for updating and correcting the internal knowledge of Large Language Models (LLMs).
no code implementations • 2 Sep 2024 • Dingshuo Chen, ZHIXUN LI, Yuyan Ni, Guibin Zhang, Ding Wang, Qiang Liu, Shu Wu, Jeffrey Xu Yu, Liang Wang
Therefore, we propose a Molecular data Pruning framework for enhanced Generalization (MolPeg), which focuses on the source-free data pruning scenario, where data pruning is applied with pretrained models.
no code implementations • 22 Aug 2024 • Mengqi Zhang, Bowen Fang, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen, Liang Wang
Building on the validated hypothesis, we propose a novel knowledge editing method that incorporates a Knowledge Erasure mechanism for Large language model Editing (KELE).
no code implementations • 8 Aug 2024 • Xin Sun, Qiang Liu, Shu Wu, Zilei Wang, Liang Wang
This paper addresses the challenge of out-of-distribution (OOD) generalization in graph machine learning, a field rapidly advancing yet grappling with the discrepancy between source and target data distributions.
no code implementations • 26 Jul 2024 • Jinghao Zhang, Guofan Liu, Qiang Liu, Shu Wu, Liang Wang
To address these issues, we propose a Counterfactual Knowledge Distillation method that could solve the imbalance problem and make the best use of all modalities.
no code implementations • 17 Jul 2024 • Haisong Gong, Huanhuan Ma, Qiang Liu, Shu Wu, Liang Wang
These keywords serve as a guide to extract and summarize critical information into abstracted evidence.
no code implementations • 11 Jun 2024 • Yimeng Gu, Mengqi Zhang, Ignacio Castro, Shu Wu, Gareth Tyson
In order to effectively adapt the detection model to unlabeled news topics or agencies, we propose ConDA-TTA (Contrastive Domain Adaptation with Test-Time Adaptation) which applies contrastive learning and maximum mean discrepancy (MMD) to learn domain-invariant features.
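For readers unfamiliar with the MMD term mentioned above, here is a minimal PyTorch sketch of a biased squared-MMD estimate with an RBF kernel. It only illustrates the general idea; the kernel choice and bandwidth are assumptions, not the authors' ConDA-TTA implementation.

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel values between rows of x and rows of y.
    sq_dist = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dist / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Biased estimate of squared MMD between source and target feature batches."""
    k_ss = rbf_kernel(source_feats, source_feats, sigma).mean()
    k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
    k_st = rbf_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st
```

Minimizing such a term between source and target features is one standard way to encourage domain-invariant representations.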
no code implementations • 7 Jun 2024 • Huanhuan Ma, Jinghao Zhang, Qiang Liu, Shu Wu, Liang Wang
By employing latent variables for phrase-level predictions, the final prediction of the image-caption pair can be aggregated using logical rules.
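As a toy illustration of aggregating phrase-level latent predictions with a logical rule (not the paper's exact formulation), a soft logical-AND over phrase probabilities can be written as a product:

```python
import torch

def soft_and(phrase_probs: torch.Tensor) -> torch.Tensor:
    """Soft logical-AND: the image-caption pair is supported only if every
    phrase-level prediction is supported (product of probabilities)."""
    return phrase_probs.clamp(1e-6, 1.0).log().sum().exp()

# Example: three phrase-level probabilities aggregated into one pair-level score.
print(soft_and(torch.tensor([0.9, 0.8, 0.95])))
```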
no code implementations • 24 Apr 2024 • Xiang Tao, Qiang Liu, Shu Wu, Liang Wang
The model learns the semantic evolution of events by capturing local semantic changes and global semantic evolution through dedicated graph autoencoder and reconstruction strategies.
no code implementations • 26 Mar 2024 • Xiang Tao, Mingqing Zhang, Qiang Liu, Shu Wu, Liang Wang
This method models the propagation of news as a propagation graph and builds a propagation-graph test-time adaptation framework, enhancing the model's adaptability and robustness when facing OOD problems.
1 code implementation • 12 Mar 2024 • Han Huang, Haitian Zhong, Tao Yu, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
In contrast, editing Large Vision-Language Models (LVLMs) faces extra challenges from diverse data modalities and complicated model components, and data for LVLM editing are limited.
no code implementations • 29 Feb 2024 • Jiajun Zhang, ZHIXUN LI, Qiang Liu, Shu Wu, Liang Wang
One of the unique challenges for fake news detection on social media is how to detect fake news about future events.
no code implementations • 22 Feb 2024 • Yuwei Xia, Ding Wang, Qiang Liu, Liang Wang, Shu Wu, XiaoYu Zhang
Temporal Knowledge Graph (TKG) forecasting aims to predict future facts based on given histories.
no code implementations • 21 Feb 2024 • Mengqi Zhang, Xiaotian Ye, Qiang Liu, Pengjie Ren, Shu Wu, Zhumin Chen
Large language models (LLMs) are pivotal in advancing natural language processing (NLP) tasks, yet their efficacy is hampered by inaccuracies and outdated knowledge.
1 code implementation • 20 Feb 2024 • Haisong Gong, Qiang Liu, Shu Wu, Liang Wang
In this work, we propose the Text-Guided Molecule Generation with Diffusion Language Model (TGM-DLM), a novel approach that leverages diffusion models to address the limitations of autoregressive methods.
Ranked #8 on Text-based de novo Molecule Generation on ChEBI-20
1 code implementation • 20 Feb 2024 • Haisong Gong, Weizhi Xu, Shu Wu, Qiang Liu, Liang Wang
To address this, we propose a novel word-level Heterogeneous-graph-based model for Fact Checking over unstructured and structured information, namely HeterFC.
no code implementations • 18 Feb 2024 • Jinghao Zhang, YuTing Liu, Qiang Liu, Shu Wu, Guibing Guo, Liang Wang
Recently, powerful large language models (LLMs) have been instrumental in propelling the progress of recommender systems (RS).
1 code implementation • 18 Feb 2024 • Junfei Wu, Qiang Liu, Ding Wang, Jinghao Zhang, Shu Wu, Liang Wang, Tieniu Tan
In this work, we adopt the intuition that the LVLM tends to respond in a logically consistent manner for existent objects but inconsistently for hallucinated objects.
1 code implementation • 11 Feb 2024 • Xiang Tao, Qiang Liu, Shu Wu, Liang Wang
Based on our theoretical analysis, we further identify the limitations of GraphMAE from the perspectives of alignment and uniformity, which are widely regarded as two key properties of high-quality representations in GCL.
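Alignment and uniformity here refer to the two representation-quality metrics popularized by Wang and Isola; a minimal PyTorch sketch of both, assuming L2-normalized embeddings, is:

```python
import torch

def alignment(z1, z2, alpha=2):
    """Average distance between positive pairs; z1[i], z2[i] are normalized views of node i."""
    return (z1 - z2).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(z, t=2):
    """Log of the mean pairwise Gaussian potential over L2-normalized embeddings."""
    sq_pdist = torch.pdist(z, p=2).pow(2)
    return sq_pdist.mul(-t).exp().mean().log()
```

Lower values of both metrics indicate representations that keep positive pairs close while spreading all embeddings over the hypersphere.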
no code implementations • 6 Feb 2024 • Qiang Liu, Xiang Tao, Junfei Wu, Shu Wu, Liang Wang
In this work, we investigate the use of Large Language Models (LLMs) for rumor detection on social media.
1 code implementation • 15 Oct 2023 • Huanhuan Ma, Weizhi Xu, Yifan Wei, Liuji Chen, Qiang Liu, Shu Wu, Liang Wang
Each instance is accompanied by a veracity label and an explanation that outlines the reasoning path supporting the veracity classification.
1 code implementation • NeurIPS 2023 • ZHIXUN LI, Xin Sun, Yifan Luo, Yanqiao Zhu, Dingshuo Chen, Yingtao Luo, Xiangxin Zhou, Qiang Liu, Shu Wu, Liang Wang, Jeffrey Xu Yu
To fill this gap, we systematically analyze the performance of GSL in different scenarios and develop a comprehensive Graph Structure Learning Benchmark (GSLB) curated from 20 diverse graph datasets and 16 distinct GSL algorithms.
2 code implementations • NeurIPS 2023 • Dingshuo Chen, Yanqiao Zhu, Jieyu Zhang, Yuanqi Du, ZHIXUN LI, Qiang Liu, Shu Wu, Liang Wang
Molecular Representation Learning (MRL) has emerged as a powerful tool for drug and materials discovery in a variety of tasks such as virtual screening and inverse design.
no code implementations • 14 Sep 2023 • Xiangzhu Meng, Wei Wei, Qiang Liu, Shu Wu, Liang Wang
Motivated by related medical findings on functional connectivities, TiBGL proposes template-induced brain graph learning to extract template brain graphs for all groups.
no code implementations • 14 Sep 2023 • Xiangzhu Meng, Qiang Liu, Shu Wu, Liang Wang
In recent years, functional magnetic resonance imaging (fMRI) has been widely utilized to diagnose neurological diseases by exploiting region-of-interest (RoI) nodes as well as their connectivities in the human brain.
no code implementations • 25 Jun 2023 • Jinghao Zhang, Qiang Liu, Shu Wu, Liang Wang
Even worse, the strong statistical correlation might mislead models into learning a spurious preference for inconsequential modalities.
no code implementations • 25 Apr 2023 • Qiang Liu, Junfei Wu, Shu Wu, Liang Wang
Then, DAL reversely optimizes news-aspect and evidence-aspect debiasing discriminators to mitigate the impact of news and evidence content biases.
no code implementations • 12 Apr 2023 • Qiang Liu, Zhaocheng Liu, Zhenxi Zhu, Shu Wu, Liang Wang
However, no existing multi-interest recommendation model considers the Out-Of-Distribution (OOD) generalization problem, in which the interest distribution may change.
no code implementations • 2 Feb 2023 • Yuwei Xia, Mengqi Zhang, Qiang Liu, Shu Wu, Xiao-Yu Zhang
Most existing works focus on exploring evolutionary information in history to obtain effective temporal embeddings for entities and relations, but they ignore the variation in evolution patterns of facts, which makes them struggle to adapt to future data with different evolution patterns.
no code implementations • 22 Oct 2022 • ZHIXUN LI, Dingshuo Chen, Qiang Liu, Shu Wu
In this paper, we argue that the performance degradation is mainly attributed to the inconsistency between topology and attribute.
1 code implementation • 11 Oct 2022 • Junfei Wu, Weizhi Xu, Qiang Liu, Shu Wu, Liang Wang
Comprehensive experiments have demonstrated the superiority of GETRAL over state-of-the-art methods and validated the efficacy of semantic mining with graph structure and contrastive learning.
1 code implementation • 29 Sep 2022 • Yanqiao Zhu, Dingshuo Chen, Yuanqi Du, Yingze Wang, Qiang Liu, Shu Wu
Molecular pretraining, which learns molecular representations over massive unlabeled data, has become a prominent paradigm to solve a variety of tasks in computational chemistry and drug discovery.
1 code implementation • Conference 2022 • Fenyu Hu, Zeyu Cui, Shu Wu, Qiang Liu, Jinlin Wu, Liang Wang, Tieniu Tan
Graph Neural Networks (GNNs) are powerful at learning representations of graph-structured data, fusing both attributive and topological information.
no code implementations • 1 Jun 2022 • Qiang Liu, Yingtao Luo, Shu Wu, Zhen Zhang, Xiangnan Yue, Hong Jin, Liang Wang
Accordingly, we are the first to propose modeling the biased credit scoring data with Multi-Task Learning (MTL).
no code implementations • 13 Mar 2022 • Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, Shu Wu
In this paper, we conduct a comprehensive review on the existing literature of deep graph generation from a variety of emerging methods to its wide application areas.
1 code implementation • 18 Jan 2022 • Weizhi Xu, Junfei Wu, Qiang Liu, Shu Wu, Liang Wang
In this paper, we focus on evidence-based fake news detection, where several pieces of evidence are utilized to probe the veracity of news (i.e., a claim).
no code implementations • 15 Nov 2021 • Qiyue Yin, Jun Yang, Kaiqi Huang, Meijing Zhao, Wancheng Ni, Bin Liang, Yan Huang, Shu Wu, Liang Wang
Through this survey, we 1) compare the main difficulties among different kinds of games and the corresponding techniques utilized for achieving professional human level AIs; 2) summarize the mainstream frameworks and techniques that can be properly relied on for developing AIs for complex human-computer gaming; 3) raise the challenges or drawbacks of current techniques in the successful AIs; and 4) try to point out future trends in human-computer gaming AIs.
1 code implementation • 1 Nov 2021 • Jinghao Zhang, Yanqiao Zhu, Qiang Liu, Mengqi Zhang, Shu Wu, Liang Wang
Although having access to multiple modalities might allow us to capture rich information, we argue that the simple coarse-grained fusion by linear combination or concatenation in previous work is insufficient to fully understand content information and item relationships. To this end, we propose a latent structure MIning with ContRastive mOdality fusion method (MICRO for brevity).
1 code implementation • 14 Oct 2021 • Qilong Yan, Yufeng Zhang, Qiang Liu, Shu Wu, Liang Wang
User profiling has long been an important problem that investigates user interests in many real applications.
2 code implementations • 2 Sep 2021 • Yanqiao Zhu, Yichen Xu, Qiang Liu, Shu Wu
We envision this work to provide useful empirical evidence of effective GCL algorithms and offer several insights for future research.
no code implementations • 31 Aug 2021 • Yanqiao Zhu, Yichen Xu, Hejie Cui, Carl Yang, Qiang Liu, Shu Wu
Recently, heterogeneous Graph Neural Networks (GNNs) have become a de facto model for analyzing HGs, while most of them rely on a relatively large amount of labeled data.
no code implementations • 16 Aug 2021 • Mengqi Zhang, Yanqiao Zhu, Qiang Liu, Shu Wu, Liang Wang
In our work, different views can be obtained based on the various relations among nodes.
no code implementations • 15 Aug 2021 • Qiang Liu, Yanqiao Zhu, Zhaocheng Liu, Yufeng Zhang, Shu Wu
To train high-performing models with minimal annotation cost, active learning is proposed to select and label the most informative samples, yet it is still challenging to measure the informativeness of samples used in DNNs.
no code implementations • 10 Aug 2021 • Liping Wang, Fenyu Hu, Shu Wu, Liang Wang
These methods embed users and items in Euclidean space, and perform graph convolution on user-item interaction graphs.
no code implementations • 10 Aug 2021 • Liping Wang, Fenyu Hu, Shu Wu, Liang Wang
Graph Neural Networks (GNNs) have achieved great success among various domains.
1 code implementation • 25 May 2021 • Shu Wu, Zekun Li, Yunyue Su, Zeyu Cui, XiaoYu Zhang, Liang Wang
To solve the problems, we propose a novel approach, Graph Factorization Machine (GraphFM), by naturally representing features in the graph structure.
no code implementations • 22 May 2021 • Weiyu Guo, Zhijiang Yang, Shu Wu, Fu Chen
Experimental results obtained on real-world enterprise datasets verify that the proposed approach achieves higher performance than conventional methods, and provides insights into individual rating results and the reliability of model training.
no code implementations • 26 Apr 2021 • Yinjiang Cai, Zeyu Cui, Shu Wu, Zhen Lei, Xibo Ma
Our proposed Co-occurrence based Enhanced Representation model (CER) learns the scoring function with a deep neural network that takes as input the attentive user representation and a fusion of the raw and enhanced representations of the target item.
1 code implementation • 19 Apr 2021 • Jinghao Zhang, Yanqiao Zhu, Qiang Liu, Shu Wu, Shuhui Wang, Liang Wang
To be specific, in the proposed LATTICE model, we devise a novel modality-aware structure learning layer, which learns item-item structures for each modality and aggregates multiple modalities to obtain latent item graphs.
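A rough sketch of the kind of modality-aware structure learning described above — building a kNN item-item graph from one modality's features and fusing the per-modality graphs with learned weights — might look as follows. This is an assumption-laden simplification, not the released LATTICE code.

```python
import torch
import torch.nn.functional as F

def knn_item_graph(modal_feats: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Build a row-normalized item-item affinity graph from one modality's features."""
    feats = F.normalize(modal_feats, dim=1)        # cosine similarity
    sim = feats @ feats.t()
    vals, idx = sim.topk(k, dim=1)                 # keep k nearest neighbors per item
    adj = torch.zeros_like(sim).scatter_(1, idx, vals)
    adj = (adj + adj.t()) / 2                      # symmetrize
    return adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-12)

def fuse_modality_graphs(graphs, logits):
    """Aggregate per-modality graphs with softmax-normalized importance weights."""
    weights = torch.softmax(logits, dim=0)         # logits: (num_modalities,)
    return sum(w * g for w, g in zip(weights, graphs))
```

The fused latent item graph can then be used for message passing on item embeddings in the recommendation model.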
1 code implementation • 15 Apr 2021 • Mengqi Zhang, Shu Wu, Xueli Yu, Qiang Liu, Liang Wang
We propose a new method named Dynamic Graph Neural Network for Sequential Recommendation (DGSR), which connects different user sequences through a dynamic graph structure, exploring the interactive behavior of users and items with time and order information.
no code implementations • 7 Apr 2021 • Zeyu Cui, Zekun Li, Shu Wu, XiaoYu Zhang, Qiang Liu, Liang Wang, Mengmeng Ai
We naturally generalize the embedding propagation scheme of GCN to the dynamic setting in an efficient manner, propagating changes along the graph to update node embeddings.
no code implementations • 29 Mar 2021 • Fenyu Hu, Liping Wang, Shu Wu, Liang Wang, Tieniu Tan
Graph classification is a challenging research problem in many applications across a broad range of domains.
1 code implementation • journal 2021 • Fenyu Hu, Liping Wang, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
Graph classification is a challenging research problem in many applications across a broad range of domains.
no code implementations • 4 Mar 2021 • Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, Shu Wu
Specifically, we first formulate a general pipeline of GSL and review state-of-the-art methods classified by the way of modeling graph structures, followed by applications of GSL across domains.
1 code implementation • 22 Feb 2021 • Xueli Yu, Weizhi Xu, Zeyu Cui, Shu Wu, Liang Wang
In addition, due to the complexity and scale of the document collections, it is worthwhile to explore hierarchical matching signals of different granularities at a more general level.
1 code implementation • 28 Jan 2021 • Yufeng Zhang, Jinghao Zhang, Zeyu Cui, Shu Wu, Liang Wang
To retrieve more relevant, appropriate and useful documents given a query, finding clues about that query through the text is crucial.
2 code implementations • 11 Jan 2021 • Yichen Xu, Yanqiao Zhu, Feng Yu, Qiang Liu, Shu Wu
To better model complex feature interaction, in this paper we propose a novel DisentanglEd Self-atTentIve NEtwork (DESTINE) framework for CTR prediction that explicitly decouples the computation of unary feature importance from pairwise interaction.
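The decoupling idea can be pictured with a toy attention layer whose logits are the sum of a pairwise query-key term and a separate unary (per-field) term. The sketch below is illustrative only and omits the whitening and multi-head details of DESTINE.

```python
import torch
import torch.nn as nn

class DisentangledAttention(nn.Module):
    """Toy attention whose logits separate a unary term from pairwise interactions."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.unary = nn.Linear(dim, 1, bias=False)   # per-field importance score

    def forward(self, x):                            # x: (batch, fields, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        pairwise = q @ k.transpose(1, 2) / x.size(-1) ** 0.5   # (batch, fields, fields)
        unary = self.unary(k).transpose(1, 2)                  # (batch, 1, fields)
        attn = torch.softmax(pairwise + unary, dim=-1)
        return attn @ v
```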
1 code implementation • 8 Jan 2021 • Xiaohan Li, Mengqi Zhang, Shu Wu, Zheng Liu, Liang Wang, Philip S. Yu
Here we propose Dynamic Graph Collaborative Filtering (DGCF), a novel framework leveraging dynamic graphs to capture collaborative and sequential relations of both items and users at the same time.
no code implementations • 10 Dec 2020 • Yujia Zheng, Siyi Liu, Zekun Li, Shu Wu
As there is generally no side information in the sequential recommendation setting, previous cold-start methods cannot be applied when only user-item interactions are available.
no code implementations • 13 Nov 2020 • Zekun Li, Yujia Zheng, Shu Wu, XiaoYu Zhang, Liang Wang
In this work, we propose to model user-item interactions as a heterogeneous graph which consists of not only user-item edges indicating their interaction but also user-user edges indicating their similarity.
no code implementations • 30 Oct 2020 • Yanqiao Zhu, Weizhi Xu, Qiang Liu, Shu Wu
To this end, we present a minimax selection scheme that explicitly harnesses neighborhood information and discovers homophilous subgraphs to facilitate active selection.
1 code implementation • 27 Oct 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
On the node attribute level, we corrupt node features by adding more noise to unimportant node features, forcing the model to recognize underlying semantic information.
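A minimal sketch of this adaptive corruption on the attribute level — dropping feature dimensions with a probability that grows as their estimated importance shrinks — is shown below. The importance scores and the exact probability schedule are assumptions rather than the precise GCA scheme.

```python
import torch

def adaptive_feature_mask(x, importance, base_prob=0.3, max_prob=0.7):
    """Corrupt node features: less important feature dimensions are dropped more often.

    x: (num_nodes, num_feats) node features; importance: (num_feats,) non-negative scores.
    """
    w = (importance - importance.min()) / (importance.max() - importance.min() + 1e-12)
    drop_prob = (base_prob * (1.0 - w)).clamp(max=max_prob)   # unimportant -> higher drop prob
    keep = torch.bernoulli(1.0 - drop_prob).to(x.dtype)       # (num_feats,)
    return x * keep                                           # broadcast over all nodes
```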
1 code implementation • 21 Sep 2020 • Yujia Zheng, Siyi Liu, Zekun Li, Shu Wu
These item transitions include potential collaborative information and reflect similar behavior patterns, which we assume may help with the recommendation for the target session.
Ranked #6 on Session-Based Recommendations on Diginetica
no code implementations • 3 Sep 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Shu Wu, Liang Wang
In CAGNN, we perform clustering on the node embeddings and update the model parameters by predicting the cluster assignments.
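The clustering-as-pseudo-labels step can be sketched roughly as follows, using scikit-learn k-means for the cluster assignments; the actual CAGNN objective and clustering procedure may differ.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def cluster_assignment_loss(embeddings, classifier, num_clusters=10):
    """Self-supervised loss: predict k-means cluster assignments of the node embeddings."""
    with torch.no_grad():
        labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(
            embeddings.detach().cpu().numpy()
        )
    labels = torch.as_tensor(labels, dtype=torch.long, device=embeddings.device)
    logits = classifier(embeddings)             # classifier: e.g. nn.Linear(dim, num_clusters)
    return F.cross_entropy(logits, labels)
```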
1 code implementation • 27 Aug 2020 • Johannes Kopf, Kevin Matzen, Suhib Alsisan, Ocean Quigley, Francis Ge, Yangming Chong, Josh Patterson, Jan-Michael Frahm, Shu Wu, Matthew Yu, Peizhao Zhang, Zijian He, Peter Vajda, Ayush Saraf, Michael Cohen
3D photos are static in time, like traditional photos, but are displayed with interactive parallax on mobile or desktop screens, as well as on Virtual Reality devices, where viewing it also includes stereo.
no code implementations • 17 Aug 2020 • Zeyu Cui, Feng Yu, Shu Wu, Qiang Liu, Liang Wang
In this way, the items are represented at the attribute level, which can provide fine-grained information of items in recommendation.
no code implementations • 29 Jun 2020 • Shu Wu, Feng Yu, Xueli Yu, Qiang Liu, Liang Wang, Tieniu Tan, Jie Shao, Fan Huang
The CTR (Click-Through Rate) prediction plays a central role in the domain of computational advertising and recommender systems.
Ranked #33 on Click-Through Rate Prediction on Criteo
3 code implementations • 7 Jun 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
Moreover, our unsupervised method even surpasses its supervised counterparts on transductive tasks, demonstrating its great potential in real-world applications.
Ranked #1 on Node Classification on DBLP
1 code implementation • 6 May 2020 • Feng Yu, Yanqiao Zhu, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
However, these methods compress a session into one fixed representation vector without considering the target items to be predicted.
Ranked #3 on Session-Based Recommendations on yoochoose1
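One way to avoid compressing a session into a single fixed vector, as criticized above, is to attend over the session's item states separately for every candidate target item. The following is a minimal sketch of that target-aware attention, not necessarily the paper's exact formulation.

```python
import torch

def target_aware_session_repr(item_states, target_embs):
    """Build one session representation per candidate target item via attention.

    item_states: (session_len, dim) hidden states of the session's items;
    target_embs: (num_targets, dim) candidate item embeddings.
    """
    scores = target_embs @ item_states.t() / item_states.size(-1) ** 0.5  # (targets, session_len)
    weights = torch.softmax(scores, dim=-1)
    return weights @ item_states                                          # (targets, dim)
```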
1 code implementation • ACL 2020 • Yufeng Zhang, Xueli Yu, Zeyu Cui, Shu Wu, Zhongzhen Wen, Liang Wang
We first build individual graphs for each document and then use GNN to learn the fine-grained word representations based on their local structures, which can also effectively produce embeddings for unseen words in the new document.
1 code implementation • 1 Jan 2020 • Feng Yu, Zhaocheng Liu, Qiang Liu, Haoli Zhang, Shu Wu, Liang Wang
IM is an efficient and exact implementation of high-order FM, whose time complexity grows linearly with the order of interactions and the number of feature fields.
Ranked #2 on Click-Through Rate Prediction on KKBox
no code implementations • CIKM 2020 • Feng Yu, Zhaocheng Liu, Qiang Liu, Haoli Zhang, Shu Wu, Liang Wang
IM is an efficient and exact implementation of high-order FM, whose time complexity grows linearly with the order of interactions and the number of feature fields.
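The linear-in-order claim for exact high-order FM can be reproduced with the standard Newton-Girard recursion over power sums of the embedded features; the sketch below illustrates that route and is not necessarily the authors' IM implementation.

```python
import torch

def high_order_fm(x, v, max_order=3):
    """Exact high-order FM interactions via the Newton-Girard recursion.

    x: (batch, fields) feature values; v: (fields, dim) feature embeddings.
    Returns a (batch, max_order - 1) tensor with the order-2..max_order terms.
    """
    a = x.unsqueeze(-1) * v                                  # (batch, fields, dim)
    # Power sums p_k = sum_i a_i**k, elementwise over the embedding dimension.
    p = [None] + [a.pow(k).sum(dim=1) for k in range(1, max_order + 1)]
    e = [torch.ones_like(p[1])]                              # elementary symmetric e_0 = 1
    for k in range(1, max_order + 1):
        term = sum((-1) ** (j - 1) * e[k - j] * p[j] for j in range(1, k + 1))
        e.append(term / k)
    # The order-k interaction term is the sum of e_k over embedding dimensions.
    return torch.stack([e[k].sum(dim=1) for k in range(2, max_order + 1)], dim=1)
```

For max_order = 2 this reduces to the familiar FM identity 0.5 * ((sum_i x_i v_i)^2 - sum_i (x_i v_i)^2).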
no code implementations • 26 Nov 2019 • Yanbei Liu, Xiao Wang, Shu Wu, Zhitao Xiao
In this paper, we propose a novel Independence Promoted Graph Disentangled Networks (IPGDN) to learn disentangled node representation while enhancing the independence among node representations.
no code implementations • 11 Nov 2019 • Qiang Liu, Shu Wu, Liang Wang
For modeling users' demands on different categories of items, the problem can be formulated as recommendation with contextual and sequential information.
1 code implementation • 5 Nov 2019 • Fenyu Hu, Yanqiao Zhu, Shu Wu, Weiran Huang, Liang Wang, Tieniu Tan
Then, in order to better capture the complicated non-linearity of graph data, we present a novel GraphAIR framework which models the neighborhood interaction in addition to neighborhood aggregation.
3 code implementations • 20 Oct 2019 • Mengqi Zhang, Shu Wu, Meng Gao, Xin Jiang, Ke Xu, Liang Wang
The other is a Dot-Product Attention mechanism, which draws on the Transformer to explicitly model the effect of historical sessions on the current session.
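For context, a bare-bones version of such a dot-product attention — the current session's representation attending over its historical session representations — could be written as follows; the shapes and naming are assumptions.

```python
import torch

def session_attention(current, history):
    """Current-session representation attends over historical session representations.

    current: (batch, dim); history: (batch, num_sessions, dim).
    """
    scale = current.size(-1) ** 0.5
    scores = torch.einsum('bd,bnd->bn', current, history) / scale
    weights = torch.softmax(scores, dim=-1)
    return torch.einsum('bn,bnd->bd', weights, history)      # attended historical context
```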
5 code implementations • 12 Oct 2019 • Zekun Li, Zeyu Cui, Shu Wu, Xiao-Yu Zhang, Liang Wang
The key to this task is to model feature interactions among different feature fields.
Ranked #10 on Click-Through Rate Prediction on Avazu
1 code implementation • 31 Jul 2019 • Zekun Li, Zeyu Cui, Shu Wu, Xiao-Yu Zhang, Liang Wang
To achieve the alignment, we minimize the distances between distributions with unsupervised adversarial learning, as well as the distances between some annotated compatible items, which serve as anchor points to aid alignment.
no code implementations • 26 Feb 2019 • Lu Bai, Lixin Cui, Shu Wu, Yuhang Jiao, Edwin R. Hancock
In this paper, we develop a new aligned vertex convolutional network model to learn multi-scale local-level vertex features for graph classification.
1 code implementation • 21 Feb 2019 • Zeyu Cui, Zekun Li, Shu Wu, Xiao-Yu Zhang, Liang Wang
In this paper, we aim to investigate a practical problem of fashion recommendation by answering the question "which item should we select to match with the given fashion items and form a compatible outfit".
Ranked #1 on Recommendation Systems on Polyvore (Accuracy metric)
1 code implementation • 13 Feb 2019 • Fenyu Hu, Yanqiao Zhu, Shu Wu, Liang Wang, Tieniu Tan
Graph convolutional networks (GCNs) have been successfully applied in node classification tasks of network mining.
8 code implementations • 1 Nov 2018 • Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, Tieniu Tan
To obtain accurate item embeddings and take complex transitions of items into account, we propose a novel method, i.e., Session-based Recommendation with Graph Neural Networks (SR-GNN for brevity).
Ranked #1 on Session-Based Recommendations on Gowalla
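The session-graph construction at the heart of SR-GNN can be sketched as follows — building normalized outgoing/incoming adjacency matrices from the item transitions of one session. This is a simplified illustration, not the released code.

```python
import numpy as np

def session_graph(session):
    """Build normalized outgoing/incoming adjacency matrices from one session's transitions."""
    items = list(dict.fromkeys(session))              # unique items, order preserved
    idx = {item: i for i, item in enumerate(items)}
    n = len(items)
    a_out = np.zeros((n, n))
    for src, dst in zip(session[:-1], session[1:]):
        a_out[idx[src], idx[dst]] += 1.0
    out_deg = a_out.sum(axis=1, keepdims=True).clip(min=1.0)
    in_deg = a_out.sum(axis=0, keepdims=True).clip(min=1.0)
    return items, a_out / out_deg, (a_out / in_deg).T

# Example: a session with a repeated item produces a 3-node graph.
items, a_out_norm, a_in_norm = session_graph([10, 23, 10, 42, 23])
```

The gated GNN then propagates information over these matrices to produce the item embeddings used for next-item prediction.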
1 code implementation • 14 Nov 2017 • Qiang Cui, Shu Wu, Yan Huang, Liang Wang
We fuse the current hidden state and a contextual hidden state built by the attention mechanism, which leads to a better representation of the user's overall interest.
no code implementations • 29 Sep 2016 • Qiang Liu, Shu Wu, Feng Yu, Liang Wang, Tieniu Tan
In this paper, we propose a novel representation learning method, Information Credibility Evaluation (ICE), to learn representations of information credibility on social media.
no code implementations • 19 Sep 2016 • Qiang Liu, Shu Wu, Diyi Wang, Zhaokang Li, Liang Wang
Recently, Recurrent Neural Networks (RNN) based methods have been successfully applied in several sequential modeling tasks.
no code implementations • 21 Jul 2016 • Kaiye Wang, Qiyue Yin, Wei Wang, Shu Wu, Liang Wang
To speed up the cross-modal retrieval, a number of binary representation learning methods are proposed to map different modalities of data into a common Hamming space.
4 code implementations • 1 Jan 2015 • Qiang Liu, Feng Yu, Shu Wu, Liang Wang
The explosion of online advertising calls for better estimation of the click-through rate of ads.
no code implementations • CIKM 2015 • Qiang Liu, Feng Yu, Shu Wu, Liang Wang
The explosion of online advertising calls for better estimation of the click-through rate of ads.