no code implementations • ICML 2020 • Meng Qu, Tianyu Gao, Louis-Pascal Xhonneux, Jian Tang
This paper studies few-shot relation extraction, which aims to predict the relation for a pair of entities in a sentence by training with only a few labeled examples for each relation.
no code implementations • EMNLP 2020 • Woojeong Jin, Meng Qu, Xisen Jin, Xiang Ren
The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp.
no code implementations • 6 Feb 2024 • Xiangru Tang, Qiao Jin, Kunlun Zhu, Tongxin Yuan, Yichi Zhang, Wangchunshu Zhou, Meng Qu, Yilun Zhao, Jian Tang, Zhuosheng Zhang, Arman Cohan, Zhiyong Lu, Mark Gerstein
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
1 code implementation • 2 Oct 2023 • Jianan Zhao, Le Zhuo, Yikang Shen, Meng Qu, Kai Liu, Michael Bronstein, Zhaocheng Zhu, Jian Tang
Furthermore, GraphText paves the way for interactive graph reasoning, allowing both humans and LLMs to communicate with the model seamlessly using natural language.
no code implementations • 23 Apr 2023 • Wei Ju, Xiao Luo, Meng Qu, Yifan Wang, Chong Chen, Minghua Deng, Xian-Sheng Hua, Ming Zhang
The twin modules collaborate by exchanging instance-similarity knowledge, fully exploiting the structural information in both labeled and unlabeled data.
2 code implementations • 26 Oct 2022 • Jianan Zhao, Meng Qu, Chaozhuo Li, Hao Yan, Qian Liu, Rui Li, Xing Xie, Jian Tang
In this paper, we propose an efficient and effective solution for learning on large text-attributed graphs that fuses graph structure and language learning within a variational Expectation-Maximization (EM) framework, called GLEM; a toy sketch of the alternation appears below.
Ranked #1 on Node Property Prediction on ogbn-papers100M
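As a rough illustration of this kind of variational EM, the sketch below alternates between a generic language-model classifier and a GNN classifier by exchanging pseudo-labels on unlabeled nodes; `lm`, `gnn`, and all training details are hypothetical stand-ins under stated assumptions, not the released GLEM code.

```python
# Minimal sketch of a variational-EM alternation between a language model
# and a GNN that exchange pseudo-labels on unlabeled nodes. All component
# names here are hypothetical stand-ins, not GLEM's actual code.
import torch
import torch.nn.functional as F

def em_train(lm, gnn, texts, graph, y, labeled, unlabeled, rounds=3, epochs=2):
    pseudo = y.clone()  # gold labels; unlabeled entries get overwritten below
    for _ in range(rounds):
        for model, inputs in ((lm, texts), (gnn, graph)):
            # Fit one module on gold labels plus the other's pseudo-labels.
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            for _ in range(epochs):
                logits = model(inputs)
                loss = (F.cross_entropy(logits[labeled], y[labeled]) +
                        F.cross_entropy(logits[unlabeled], pseudo[unlabeled]))
                opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():  # hand refreshed pseudo-labels to the other module
                pseudo[unlabeled] = model(inputs).argmax(-1)[unlabeled]
    return lm, gnn
```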
no code implementations • 21 May 2022 • Wei Ju, Junwei Yang, Meng Qu, Weiping Song, Jianhao Shen, Ming Zhang
This problem is typically solved with graph neural networks (GNNs), which, however, rely on a large number of labeled graphs for training and cannot leverage unlabeled graphs.
1 code implementation • ICLR 2022 • Meng Qu, Huiyu Cai, Jian Tang
This problem has been extensively studied with graph neural networks (GNNs), which learn effective node representations, as well as with traditional structured prediction methods that model the structured output of node labels, e.g., conditional random fields (CRFs).
1 code implementation • 22 Feb 2022 • Shengchao Liu, Meng Qu, Zuobai Zhang, Huiyu Cai, Jian Tang
However, in contrast to other domains, the performance of multi-task learning in drug discovery remains unsatisfactory, as the amount of labeled data for each task is too limited; this calls for additional data to mitigate the scarcity.
1 code implementation • 16 Feb 2022 • Zhaocheng Zhu, Chence Shi, Zuobai Zhang, Shengchao Liu, Minghao Xu, Xinyu Yuan, Yangtian Zhang, Junkun Chen, Huiyu Cai, Jiarui Lu, Chang Ma, Runcheng Liu, Louis-Pascal Xhonneux, Meng Qu, Jian Tang
However, the lack of domain knowledge (e.g., which tasks to work on), standard benchmarks, and data preprocessing pipelines remains the main obstacle for machine learning researchers entering this domain.
no code implementations • NeurIPS 2021 • Minghao Xu, Meng Qu, Bingbing Ni, Jian Tang
We further propose an efficient and effective inference algorithm based on mean-field variational inference: we first provide a warm initialization by independently predicting the objects and their relations with the current model, and then run a few iterations of relational reasoning.
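As a rough illustration of that inference scheme, a generic mean-field update with a warm start from independent predictions might look like the following; the toy potentials and update rules are generic mean-field machinery assumed for illustration, not the paper's model.

```python
# Generic warm-started mean-field updates over object and relation beliefs.
# Potentials are random toys; this is not the paper's implementation.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_field(unary_obj, unary_rel, pair_pot, iters=3):
    """unary_obj: (n, C) object scores; unary_rel: (n, n, R) relation scores;
    pair_pot: (C, R, C) subject-relation-object compatibility."""
    q_obj = softmax(unary_obj)   # warm start: independent object predictions
    q_rel = softmax(unary_rel)   # warm start: independent relation predictions
    for _ in range(iters):       # a few rounds of relational reasoning
        # refine relation beliefs given current object beliefs
        msg_r = np.einsum('ic,crd,jd->ijr', q_obj, pair_pot, q_obj)
        q_rel = softmax(unary_rel + msg_r)
        # refine object beliefs given current relation and object beliefs
        msg_o = (np.einsum('ijr,crd,jd->ic', q_rel, pair_pot, q_obj) +
                 np.einsum('jir,drc,jd->ic', q_rel, pair_pot, q_obj))
        q_obj = softmax(unary_obj + msg_o)
    return q_obj, q_rel

rng = np.random.default_rng(0)
q_obj, q_rel = mean_field(rng.normal(size=(4, 3)),     # 4 objects, 3 classes
                          rng.normal(size=(4, 4, 2)),  # 2 relation types
                          rng.normal(size=(3, 2, 3)))
```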
no code implementations • NeurIPS Workshop AI4Scien 2021 • Shengchao Liu, Meng Qu, Zuobai Zhang, Huiyu Cai, Jian Tang
In this paper, we study multi-task learning for molecule property prediction in a different setting, where a relation graph between different tasks is available.
no code implementations • 30 Oct 2020 • Prateek Gupta, Tegan Maharaj, Martin Weiss, Nasim Rahaman, Hannah Alsdurf, Abhinav Sharma, Nanor Minoyan, Soren Harnois-Leblanc, Victor Schmidt, Pierre-Luc St. Charles, Tristan Deleu, Andrew Williams, Akshay Patel, Meng Qu, Olexa Bilaniuk, Gaétan Marceau Caron, Pierre Luc Carrier, Satya Ortiz-Gagné, Marc-Andre Rousseau, David Buckeridge, Joumana Ghosn, Yang Zhang, Bernhard Schölkopf, Jian Tang, Irina Rish, Christopher Pal, Joanna Merckx, Eilif B. Muller, Yoshua Bengio
The rapid global spread of COVID-19 has led to an unprecedented demand for effective methods to mitigate the spread of the disease, and various digital contact tracing (DCT) methods have emerged as a component of the solution.
1 code implementation • ICLR 2021 • Yoshua Bengio, Prateek Gupta, Tegan Maharaj, Nasim Rahaman, Martin Weiss, Tristan Deleu, Eilif Muller, Meng Qu, Victor Schmidt, Pierre-Luc St-Charles, Hannah Alsdurf, Olexa Bilaniuk, David Buckeridge, Gaétan Marceau Caron, Pierre-Luc Carrier, Joumana Ghosn, Satya Ortiz-Gagné, Chris Pal, Irina Rish, Bernhard Schölkopf, Abhinav Sharma, Jian Tang, Andrew Williams
Predictions are used to provide personalized recommendations to the individual via an app, as well as to send anonymized messages to the individual's contacts, who use this information to better predict their own infectiousness, an approach we call proactive contact tracing (PCT).
2 code implementations • ICLR 2021 • Meng Qu, Junkun Chen, Louis-Pascal Xhonneux, Yoshua Bengio, Jian Tang
In the E-step, we select a set of high-quality rules from all generated rules, using both the rule generator and the reasoning predictor via posterior inference; in the M-step, the rule generator is updated with the rules selected in the E-step.
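A self-contained toy of this E-step/M-step interplay, with rules represented as relation paths: the uniform "generator", path-counting "predictor", and hard top-1 selection below are illustrative simplifications assumed for the sketch, not the RNNLogic implementation.

```python
# Toy EM over logic rules on a tiny KG: E-step scores candidate rules by
# posterior (generator prior + predictor likelihood) and keeps the best;
# M-step shifts the generator's distribution toward the selected rules.
import math

edges = {('a', 'father', 'b'), ('b', 'father', 'c'),
         ('a', 'grandpa', 'c')}                          # toy knowledge graph
train = [('a', 'c')]                                     # pairs for relation 'grandpa'
candidates = [('father', 'father'), ('father',), ('grandpa', 'father')]
log_prior = {r: math.log(1 / len(candidates)) for r in candidates}

def follows(h, path):
    """Entities reachable from h along a relation path."""
    frontier = {h}
    for rel in path:
        frontier = {t for (s, r, t) in edges if s in frontier and r == rel}
    return frontier

def log_likelihood(rule):                                # "reasoning predictor"
    hits = sum(t in follows(h, rule) for h, t in train)
    return math.log((hits + 0.1) / (len(train) + 0.1))   # smoothed hit rate

for _ in range(3):
    # E-step: select high-quality rules by posterior = prior + likelihood.
    posterior = {r: log_prior[r] + log_likelihood(r) for r in candidates}
    selected = sorted(posterior, key=posterior.get, reverse=True)[:1]
    # M-step: update the rule generator toward the selected rules.
    for r in candidates:
        log_prior[r] = math.log(0.9 if r in selected else
                                0.1 / (len(candidates) - 1))

print(selected)   # [('father', 'father')] -- the compositional rule wins
```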
1 code implementation • 5 Jul 2020 • Meng Qu, Tianyu Gao, Louis-Pascal A. C. Xhonneux, Jian Tang
To more effectively generalize to new relations, in this paper we study the relationships between different relations and propose to leverage a global relation graph.
1 code implementation • NeurIPS 2020 • Shengding Hu, Zheng Xiong, Meng Qu, Xingdi Yuan, Marc-Alexandre Côté, Zhiyuan Liu, Jian Tang
Graph neural networks (GNNs) have gained increasing popularity due to their simplicity and effectiveness in a variety of fields.
no code implementations • 18 May 2020 • Hannah Alsdurf, Edmond Belliveau, Yoshua Bengio, Tristan Deleu, Prateek Gupta, Daphne Ippolito, Richard Janda, Max Jarvie, Tyler Kolody, Sekoul Krastev, Tegan Maharaj, Robert Obryk, Dan Pilat, Valerie Pisano, Benjamin Prud'homme, Meng Qu, Nasim Rahaman, Irina Rish, Jean-Francois Rousseau, Abhinav Sharma, Brooke Struck, Jian Tang, Martin Weiss, Yun William Yu
Manual contact tracing of COVID-19 cases has significant challenges that limit the ability of public health authorities to minimize community infections.
1 code implementation • ICML 2020 • Louis-Pascal A. C. Xhonneux, Meng Qu, Jian Tang
The key idea is how to characterise the continuous dynamics of node representations, i.e., the derivatives of node representations w.r.t. time.
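A toy rendering of that idea: treat message passing as an ODE on node states and integrate it numerically. The diffusion-style dynamics dX/dt = (Â − I)X + X₀ below is a common formulation assumed for illustration; the paper's exact dynamics may differ.

```python
# Continuous-depth message passing: integrate the derivative of node
# states w.r.t. time with a simple Euler solver (illustrative dynamics).
import numpy as np

def continuous_gnn(adj, x0, t_end=1.0, steps=100):
    # Row-normalize the adjacency so propagation acts as a diffusion.
    a_hat = adj / np.clip(adj.sum(1, keepdims=True), 1e-9, None)
    x, dt = x0.copy(), t_end / steps
    for _ in range(steps):
        dx = (a_hat - np.eye(len(adj))) @ x + x0   # dX/dt: node-state derivative
        x = x + dt * dx                            # Euler integration step
    return x

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path graph
x0 = np.array([[1.0], [0.0], [0.0]])                      # initial node features
print(continuous_gnn(adj, x0))
```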
no code implementations • 25 Sep 2019 • Woojeong Jin, He Jiang, Meng Qu, Tong Chen, Changlin Zhang, Pedro Szekely, Xiang Ren
We present Recurrent Event Network (RE-Net), a novel autoregressive architecture for modeling temporal sequences of multi-relational graphs (e.g., temporal knowledge graphs), which can perform sequential, global structure inference over future timestamps to predict new events.
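A hedged sketch of the autoregressive idea: encode each timestamp's graph snapshot, summarize the history with an RNN, and score candidate future events from the hidden state. The mean-pooling encoder and concatenation scorer are simplified stand-ins, not the RE-Net architecture.

```python
# Autoregressive scoring of future (subject, relation, object) events from
# a history of graph snapshots. Simplified illustration, not RE-Net itself.
import torch
import torch.nn as nn

class TemporalKGModel(nn.Module):
    def __init__(self, n_ent, n_rel, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(3 * dim, 1)

    def encode_snapshot(self, triples):
        # Mean-pool the (s, r, o) embeddings of one timestamp's graph.
        s, r, o = triples.T
        return (self.ent(s) + self.rel(r) + self.ent(o)).mean(0)

    def forward(self, history, candidates):
        # history: list of (m_t, 3) triple tensors, one per timestamp
        seq = torch.stack([self.encode_snapshot(t) for t in history])
        _, h = self.rnn(seq.unsqueeze(0))          # summarize the history
        h = h[0, 0].expand(len(candidates), -1)
        s, r, o = candidates.T
        feat = torch.cat([h + self.ent(s), self.rel(r), self.ent(o)], -1)
        return self.out(feat).squeeze(-1)          # scores for future events

def rand_triples(m):   # toy triples with valid entity/relation indices
    return torch.stack([torch.randint(0, 10, (m,)),
                        torch.randint(0, 4, (m,)),
                        torch.randint(0, 10, (m,))], dim=1)

model = TemporalKGModel(n_ent=10, n_rel=4)
scores = model([rand_triples(6) for _ in range(3)], rand_triples(8))
```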
no code implementations • 25 Sep 2019 • Shengding Hu, Meng Qu, Zhiyuan Liu, Jian Tang
Moreover, we study how to learn a universal policy for labeling nodes from multiple training graphs and then transfer the learned policy to unseen graphs.
no code implementations • 25 Sep 2019 • Vikas Verma, Meng Qu, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang
We present GraphMix, a regularization technique for Graph Neural Network-based semi-supervised object classification, leveraging recent advances in the regularization of classical deep neural networks.
1 code implementation • 25 Sep 2019 • Vikas Verma, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang
We present GraphMix, a regularization method for Graph Neural Network-based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization; a sketch of these two ingredients follows below.
Ranked #1 on Node Classification on Pubmed random partition
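The sketch below shows the two ingredients named above under stated assumptions: a fully-connected branch sharing its weights with a simple 2-layer GNN, and Manifold-Mixup-style interpolation on the hidden states. The shapes, sharing scheme, and loss weighting are illustrative, not the released GraphMix code.

```python
# Parameter sharing between an FCN branch and a GNN branch, plus
# interpolation-based (Mixup) regularization on the FCN's hidden states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedModels(nn.Module):
    """A fully-connected network and a 2-layer GNN sharing all weights."""
    def __init__(self, in_dim, hid, n_cls):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid)   # shared layer 1
        self.lin2 = nn.Linear(hid, n_cls)    # shared output layer

    def fcn(self, x, lam=None, perm=None):
        h = F.relu(self.lin1(x))
        if lam is not None:                  # Manifold-Mixup on hidden states
            h = lam * h + (1 - lam) * h[perm]
        return self.lin2(h)

    def gnn(self, x, a_hat):                 # a_hat: normalized adjacency
        h = F.relu(a_hat @ self.lin1(x))
        return self.lin2(a_hat @ h)

def graphmix_loss(model, x, a_hat, y, idx, n_cls):
    lam = float(torch.distributions.Beta(1.0, 1.0).sample())
    perm = torch.randperm(len(idx))
    mixed_logits = model.fcn(x[idx], lam, perm)
    mixed_y = (lam * F.one_hot(y[idx], n_cls) +
               (1 - lam) * F.one_hot(y[idx][perm], n_cls)).float()
    fcn_loss = -(mixed_y * F.log_softmax(mixed_logits, -1)).sum(-1).mean()
    gnn_loss = F.cross_entropy(model.gnn(x, a_hat)[idx], y[idx])
    return gnn_loss + fcn_loss               # joint objective via shared weights
```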
2 code implementations • IJCNLP 2019 • Cong Fu, Tong Chen, Meng Qu, Woojeong Jin, Xiang Ren
We propose a novel reinforcement learning framework to train two collaborative agents jointly, i.e., a multi-hop graph reasoner and a fact extractor.
no code implementations • ICLR 2019 • Meng Qu, Jian Tang, Yoshua Bengio
Therefore, in this paper we propose to study aligning knowledge graphs in a fully unsupervised or weakly supervised fashion, i.e., with no aligned triplets or with only a few.
2 code implementations • NeurIPS 2019 • Meng Qu, Jian Tang
In the E-step, a knowledge graph embedding model is used for inferring the missing triplets, while in the M-step, the weights of logic rules are updated based on both the observed and predicted triplets.
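A compact, hypothetical rendering of that EM loop: the `kge_score` function and the single transitivity rule are toy stand-ins, with the E-step filling in triplets the "embedding" model scores highly and the M-step re-weighting the rule by how often its groundings hold among observed plus predicted triplets.

```python
# Toy EM over a single logic rule: friend(x,y) & friend(y,z) -> friend(x,z).
# Illustrative simplification, not the paper's pLogicNet implementation.
import itertools

entities = ['a', 'b', 'c']
observed = {('a', 'friend', 'b'), ('b', 'friend', 'c')}
rule_weight = 1.0

def kge_score(h, r, t):
    # Toy scorer standing in for a KGE model: favor rule-supported triplets.
    return sum(rule_weight for m in entities
               if (h, r, m) in observed and (m, r, t) in observed)

for _ in range(3):
    # E-step: infer missing triplets with the current scoring model.
    predicted = {(h, 'friend', t)
                 for h, t in itertools.product(entities, entities)
                 if h != t and kge_score(h, 'friend', t) > 0.5}
    facts = observed | predicted
    # M-step: update the rule weight from observed + predicted triplets.
    groundings = [(x, y, z) for x, y, z in itertools.product(entities, repeat=3)
                  if (x, 'friend', y) in facts and (y, 'friend', z) in facts]
    satisfied = [g for g in groundings if (g[0], 'friend', g[2]) in facts]
    rule_weight = len(satisfied) / max(len(groundings), 1)

print(predicted, rule_weight)   # ('a', 'friend', 'c') is inferred
```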
1 code implementation • NeurIPS 2019 • Fan-Yun Sun, Meng Qu, Jordan Hoffmann, Chin-wei Huang, Jian Tang
Experimental results on multiple real-world graphs show that vGraph is very effective in both community detection and node representation learning, outperforming many competitive baselines in both tasks.
1 code implementation • 15 May 2019 • Meng Qu, Yoshua Bengio, Jian Tang
Statistical relational learning methods can effectively model the dependency of object labels through conditional random fields for collective classification, whereas graph neural networks learn effective object representations for classification through end-to-end training.
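For reference, the CRF side of this combination is commonly trained with a pseudolikelihood objective over neighboring labels; in generic notation assumed here, not taken from the paper:

```latex
% Pseudolikelihood for collective classification: each object's label is
% modeled conditioned on the labels of its neighbors N(n) and features x.
\log \mathrm{PL}(\theta) \;=\; \sum_{n \in V} \log p_\theta\bigl(y_n \,\big|\, y_{\mathcal{N}(n)},\, x\bigr)
```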
2 code implementations • 11 Apr 2019 • Woojeong Jin, Meng Qu, Xisen Jin, Xiang Ren
The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp.
1 code implementation • 2 Mar 2019 • Zhaocheng Zhu, Shizhen Xu, Meng Qu, Jian Tang
In this paper, we propose GraphVite, a high-performance CPU-GPU hybrid system for training node embeddings, by co-optimizing the algorithm and the system.
Ranked #1 on Node Classification on YouTube
1 code implementation • 20 Feb 2019 • Hongtao Lin, Jun Yan, Meng Qu, Xiang Ren
In this paper, we leverage a key insight: retrieving sentences that express a relation is a dual task of predicting the relation label for a given sentence; the two tasks are complementary and can be optimized jointly for mutual enhancement.
no code implementations • EMNLP 2018 • Zhao Zhang, Fuzhen Zhuang, Meng Qu, Fen Lin, Qing He
To this end, in this paper we extend the existing KGE models TransE, TransH, and DistMult to learn knowledge representations by leveraging the information from the HRS.
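For context, the scoring functions of the three named base models are standard; a minimal reference sketch follows (the paper's HRS extension itself is not reproduced here):

```python
# Standard triplet-scoring functions of the three base KGE models.
import numpy as np

def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)        # TransE: relation as translation

def transh_score(h, r, t, w):
    w = w / np.linalg.norm(w)                # relation-specific hyperplane normal
    proj = lambda x: x - (x @ w) * w         # project entities onto the hyperplane
    return -np.linalg.norm(proj(h) + r - proj(t))

def distmult_score(h, r, t):
    return float(np.sum(h * r * t))          # DistMult: trilinear product
```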
no code implementations • 9 Nov 2017 • Meng Qu, Xiang Ren, Yu Zhang, Jiawei Han
We propose a novel co-training framework with a distributional module and a pattern module.
1 code implementation • 19 Sep 2017 • Meng Qu, Jian Tang, Jingbo Shang, Xiang Ren, Ming Zhang, Jiawei Han
Existing approaches usually study networks with a single type of proximity between nodes, which defines a single view of a network.
1 code implementation • 25 Jun 2017 • Meng Qu, Xiang Ren, Jiawei Han
In this paper, we study the problem of automatic synonym discovery with knowledge bases, that is, identifying synonyms for knowledge base entities in a given domain-specific corpus.
no code implementations • 29 Nov 2016 • Jian Tang, Meng Qu, Qiaozhu Mei
Based on an identity-labeled text corpus, a heterogeneous network of words and word identities is constructed to model different levels of word co-occurrence.
1 code implementation • 31 Oct 2016 • Jingbo Shang, Meng Qu, Jialu Liu, Lance M. Kaplan, Jiawei Han, Jian Peng
It models vertices as low-dimensional vectors to capture the similarity embedded in the network structure.
2 code implementations • 27 Oct 2016 • Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, Jiawei Han
We propose a novel domain-independent framework, called CoType, that runs a data-driven text segmentation algorithm to extract entity mentions, and jointly embeds entity mentions, relation mentions, text features and type labels into two low-dimensional spaces (for entity and relation mentions respectively), where, in each space, objects whose types are close will also have similar representations. A toy sketch of the joint-embedding idea appears below.
Ranked #11 on Relation Extraction on NYT11-HRL
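As a loose illustration of joint embedding with type labels, the sketch below pulls a mention toward its type and pushes it from a sampled negative type with a hinge loss; the objective and data are invented toys, not CoType's actual model or training setup.

```python
# Margin-based joint embedding of mentions and type labels in one space.
# Purely illustrative; not the CoType objective.
import numpy as np

rng = np.random.default_rng(0)
n_mentions, n_types, dim, margin, lr = 50, 5, 16, 1.0, 0.05
M = rng.normal(0, 0.1, (n_mentions, dim))     # entity-mention embeddings
T = rng.normal(0, 0.1, (n_types, dim))        # type-label embeddings
pairs = [(i, rng.integers(n_types)) for i in range(n_mentions)]

for _ in range(100):
    for m, t in pairs:
        neg = rng.integers(n_types)
        if neg == t:
            continue
        # hinge on similarity: the true type should outscore the negative
        viol = margin - M[m] @ T[t] + M[m] @ T[neg]
        if viol > 0:
            gm = T[t] - T[neg]                # gradient w.r.t. the mention
            T[t] += lr * M[m]; T[neg] -= lr * M[m]; M[m] += lr * gm
```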
3 code implementations • 17 Feb 2016 • Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Jiawei Han
Current systems of fine-grained entity typing use distant supervision in conjunction with existing knowledge bases to assign categories (type labels) to entity mentions.
1 code implementation • 2 Aug 2015 • Jian Tang, Meng Qu, Qiaozhu Mei
One possible reason is that these text embedding methods learn the representation of text in a fully unsupervised way, without leveraging the labeled information available for the task.
8 code implementations • 12 Mar 2015 • Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, Qiaozhu Mei
This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. A toy sketch of a typical training loop appears below.
Ranked #4 on Node Classification on Eximtradedata
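The sketch below shows a second-order-proximity objective of the kind commonly used for such embeddings, trained by edge sampling with negative sampling; the hyperparameters and SGD details are illustrative assumptions, not the paper's exact setup.

```python
# Skip-gram-with-negative-sampling over edges: each vertex gets a vertex
# embedding U and a context embedding C; observed edges are pushed up,
# random pairs pushed down. Toy data and hyperparameters.
import numpy as np

rng = np.random.default_rng(0)
n, dim, neg, lr = 100, 16, 5, 0.025
edges = [(rng.integers(n), rng.integers(n)) for _ in range(500)]
U = rng.normal(0, 0.1, (n, dim))   # vertex embeddings
C = rng.normal(0, 0.1, (n, dim))   # context embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5):
    for u, v in edges:
        # positive update: raise the score of the observed edge (u, v)
        g = lr * (1 - sigmoid(U[u] @ C[v]))
        du, dv = g * C[v], g * U[u]
        U[u] += du; C[v] += dv
        # negative updates: lower the score of random (u, w) pairs
        for w in rng.integers(0, n, neg):
            g = lr * (-sigmoid(U[u] @ C[w]))
            du, dw = g * C[w], g * U[u]
            U[u] += du; C[w] += dw
```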