1 code implementation • ICML 2020 • Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Hao-Ran Wei, Yashaswi Pathak, Shengchao Liu, Simon Blackburn, Karam Thomas, Connor Coley, Jian Tang, Sarath Chandar, Yoshua Bengio
In this work, we propose a novel reinforcement learning (RL) setup for drug discovery that addresses this challenge by embedding the concept of synthetic accessibility directly into the de novo compound design system.
no code implementations • NAACL (ALVR) 2021 • Zhifeng Li, Yu Hong, Yuchen Pan, Jian Tang, Jianmin Yao, Guodong Zhou
Besides the linguistic features in captions, MNMT allows visual (image) features to be used.
no code implementations • ICML 2020 • Meng Qu, Tianyu Gao, Louis-Pascal Xhonneux, Jian Tang
This paper studies few-shot relation extraction, which aims at predicting the relation for a pair of entities in a sentence by training with a few labeled examples in each relation.
1 code implementation • 28 May 2023 • Shengchao Liu, Weitao Du, ZhiMing Ma, Hongyu Guo, Jian Tang
Meanwhile, existing molecule multi-modal pretraining approaches approximate MI based on the representation space encoded from the topology and geometry, thus resulting in the loss of critical structural information of molecules.
no code implementations • 23 Mar 2023 • Mingze Wei, Yaomin Huang, Zhiyuan Xu, Ning Liu, Zhengping Che, Xinyu Zhang, Chaomin Shen, Feifei Feng, Chun Shan, Jian Tang
Our work significantly outperforms the state-of-the-art for three-finger robotic hands.
no code implementations • 23 Mar 2023 • Yaomin Huang, Ning Liu, Zhengping Che, Zhiyuan Xu, Chaomin Shen, Yaxin Peng, Guixu Zhang, Xinmei Liu, Feifei Feng, Jian Tang
CP$^3$ is carefully designed to leverage the characteristics of point clouds and PNNs in order to enable 2D channel pruning methods for PNNs.
1 code implementation • 11 Mar 2023 • Zuobai Zhang, Minghao Xu, Vijil Chenthamarakshan, Aurélie Lozano, Payel Das, Jian Tang
Despite the ability to implicitly capture inter-residue contact information, transformer-based PLMs cannot encode protein structures explicitly for better structure-aware protein representations.
1 code implementation • 23 Feb 2023 • Fang Sun, Zhihao Zhan, Hongyu Guo, Ming Zhang, Jian Tang
In particular, GraphVF represents the first controllable, geometry-aware, protein-specific molecule generation method, which can generate binding 3D molecules with tailored sub-structures and physicochemical properties.
1 code implementation • 9 Feb 2023 • Shengchao Liu, Yutao Zhu, Jiarui Lu, Zhao Xu, Weili Nie, Anthony Gitter, Chaowei Xiao, Jian Tang, Hongyu Guo, Anima Anandkumar
Current AI-assisted protein design mainly utilizes protein sequential and structural information.
no code implementations • 28 Jan 2023 • Zuobai Zhang, Minghao Xu, Aurélie Lozano, Vijil Chenthamarakshan, Payel Das, Jian Tang
DiffPreT guides the encoder to recover the native protein sequences and structures from the perturbed ones along the multimodal diffusion trajectory, thereby capturing the joint distribution of sequences and structures.
no code implementations • 28 Jan 2023 • Minghao Xu, Xinyu Yuan, Santiago Miret, Jian Tang
On downstream tasks, ProtST enables both supervised learning and zero-shot prediction.
1 code implementation • 5 Jan 2023 • Yan Li, Xinjiang Lu, Haoyi Xiong, Jian Tang, Jiantao Su, Bo Jin, Dejing Dou
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
no code implementations • CVPR 2023 • Yichen Zhu, Qiqi Zhou, Ning Liu, Zhiyuan Xu, Zhicai Ou, Xiaofeng Mou, Jian Tang
Unlike existing works that struggle to balance the trade-off between inference speed and SOD performance, in this paper, we propose a novel Scale-aware Knowledge Distillation (ScaleKD), which transfers knowledge of a complex teacher model to a compact student model.
no code implementations • CVPR 2023 • Yaomin Huang, Ning Liu, Zhengping Che, Zhiyuan Xu, Chaomin Shen, Yaxin Peng, Guixu Zhang, Xinmei Liu, Feifei Feng, Jian Tang
Directly applying 2D CNN channel pruning methods to PNNs undermines their performance because of the different representations of 2D images and 3D point clouds, as well as the disparity in network architectures.
no code implementations • 21 Dec 2022 • Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao, Ling Liu, Jian Tang, Chaowei Xiao, Anima Anandkumar
There is increasing adoption of artificial intelligence in drug discovery.
1 code implementation • 6 Dec 2022 • Ryan-Rhys Griffiths, Leo Klarner, Henry B. Moss, Aditya Ravuri, Sang Truong, Samuel Stanton, Gary Tom, Bojana Rankovic, Yuanqi Du, Arian Jamasb, Aryan Deshwal, Julius Schwartz, Austin Tripp, Gregory Kell, Simon Frieder, Anthony Bourached, Alex Chan, Jacob Moss, Chengzhi Guo, Johannes Durholt, Saudamini Chaurasia, Felix Strieth-Kalthoff, Alpha A. Lee, Bingqing Cheng, Alán Aspuru-Guzik, Philippe Schwaller, Jian Tang
By defining such kernels in GAUCHE, we seek to open the door to powerful tools for uncertainty quantification and Bayesian optimisation in chemistry.
1 code implementation • 23 Nov 2022 • Minghao Xu, Yuanfan Guo, Yi Xu, Jian Tang, Xinlei Chen, Yuandong Tian
We study EurNets in two important domains for image and protein structure modeling.
no code implementations • 9 Nov 2022 • Shengchao Liu, David Vazquez, Jian Tang, Pierre-André Noël
We explore the downstream task performances for graph neural network (GNN) self-supervised learning (SSL) methods trained on subgraphs extracted from relational databases (RDBs).
2 code implementations • 26 Oct 2022 • Jianan Zhao, Meng Qu, Chaozhuo Li, Hao Yan, Qian Liu, Rui Li, Xing Xie, Jian Tang
In this paper, we propose an efficient and effective solution to learning on large text-attributed graphs by fusing graph structure and language learning with a variational Expectation-Maximization (EM) framework, called GLEM.
Ranked #1 on Node Property Prediction on ogbn-papers100M
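The GLEM entry above fuses a language model and a GNN through variational EM, where each model is trained on gold labels plus pseudo-labels produced by the other. Below is a minimal, hypothetical sketch of that alternation using toy softmax classifiers in place of the LM and GNN; the names (`PseudoLabelModel`, `glem_style_em`) and all hyperparameters are illustrative, not GLEM's actual implementation.

```python
import numpy as np

class PseudoLabelModel:
    """Stand-in for a language model or GNN: a simple softmax classifier."""
    def __init__(self, n_features, n_classes, lr=0.1, epochs=50):
        self.W = np.zeros((n_features, n_classes))
        self.lr, self.epochs = lr, epochs

    def fit(self, X, soft_labels):
        # Gradient descent on cross-entropy against (possibly soft) targets.
        for _ in range(self.epochs):
            P = self.predict_proba(X)
            self.W -= self.lr * X.T @ (P - soft_labels) / len(X)

    def predict_proba(self, X):
        Z = X @ self.W
        Z -= Z.max(axis=1, keepdims=True)
        E = np.exp(Z)
        return E / E.sum(axis=1, keepdims=True)

def glem_style_em(lm, gnn, X_text, X_graph, y_onehot, labeled_mask, rounds=3):
    """Alternate training: each model fits gold labels plus the other's pseudo-labels."""
    pseudo = np.full_like(y_onehot, 1.0 / y_onehot.shape[1])  # start from uniform guesses
    for _ in range(rounds):
        targets = np.where(labeled_mask[:, None], y_onehot, pseudo)
        lm.fit(X_text, targets)             # E-step analogue: update the text model
        pseudo = lm.predict_proba(X_text)
        targets = np.where(labeled_mask[:, None], y_onehot, pseudo)
        gnn.fit(X_graph, targets)           # M-step analogue: update the graph model
        pseudo = gnn.predict_proba(X_graph)
    return lm, gnn

# Toy usage on random data with 5 labeled nodes out of 20.
rng = np.random.default_rng(0)
X_text, X_graph = rng.normal(size=(20, 5)), rng.normal(size=(20, 5))
y = np.eye(3)[rng.integers(0, 3, size=20)]
mask = np.zeros(20, dtype=bool); mask[:5] = True
glem_style_em(PseudoLabelModel(5, 3), PseudoLabelModel(5, 3), X_text, X_graph, y, mask)
```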
no code implementations • 17 Oct 2022 • Chence Shi, Chuanrui Wang, Jiarui Lu, Bozitao Zhong, Jian Tang
Proteins are macromolecules that perform essential functions in all living organisms.
no code implementations • 13 Oct 2022 • Mikhail Galkin, Zhaocheng Zhu, Hongyu Ren, Jian Tang
Exploring the efficiency-effectiveness trade-off, we find the inductive relational structure representation method generally achieves higher performance, while the inductive node representation method is able to answer complex queries in the inference-only regime without any training on queries and scales to graphs of millions of nodes.
no code implementations • 12 Oct 2022 • Yangtian Zhang, Huiyu Cai, Chence Shi, Bozitao Zhong, Jian Tang
In silico prediction of the ligand binding pose to a given protein target is a crucial but challenging task in drug discovery.
1 code implementation • 30 Sep 2022 • Songtao Liu, Zhengkai Tu, Minkai Xu, Zuobai Zhang, Lu Lin, Rex Ying, Jian Tang, Peilin Zhao, Dinghao Wu
Current strategies use a decoupled approach of single-step retrosynthesis models and search algorithms, taking only the product as the input to predict the reactants for each planning step and ignoring valuable context information along the synthetic route.
1 code implementation • 28 Sep 2022 • Shaohua Fan, Xiao Wang, Yanhu Mo, Chuan Shi, Jian Tang
However, by presenting a graph classification investigation on training graphs with severe bias, surprisingly, we discover that GNNs tend to exploit spurious correlations to make decisions, even when the causal correlation always exists.
1 code implementation • 24 Jul 2022 • Yaomin Huang, Xinmei Liu, Yichen Zhu, Zhiyuan Xu, Chaomin Shen, Zhengping Che, Guixu Zhang, Yaxin Peng, Feifei Feng, Jian Tang
Detecting 3D objects from point clouds is a practical yet challenging task that has attracted increasing attention recently.
no code implementations • 10 Jul 2022 • Kun Wu, Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang, Dejun Yang
In this paper, we define a new problem called continual few-shot learning, in which tasks arrive sequentially and each task is associated with a few training samples.
2 code implementations • 27 Jun 2022 • Shengchao Liu, Hongyu Guo, Jian Tang
Further, by leveraging an SE(3)-invariant score matching method, we propose GeoSSL-DDM, in which the coordinate denoising proxy task is effectively boiled down to denoising the pairwise atomic distances in a molecule.
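As a rough illustration of the distance-denoising objective described in the GeoSSL-DDM entry above: perturb 3D coordinates with Gaussian noise and train a network to recover the clean pairwise distance matrix, which is SE(3)-invariant by construction. The helper names and the placeholder linear model are assumptions, not the paper's architecture.

```python
import torch

def pairwise_distances(coords):
    # coords: (n_atoms, 3) -> (n_atoms, n_atoms) Euclidean distance matrix.
    diff = coords.unsqueeze(0) - coords.unsqueeze(1)
    return diff.norm(dim=-1)

def distance_denoising_loss(model, coords, noise_std=0.1):
    """Perturb coordinates, then ask the model to recover clean pairwise distances.

    `model` maps a noisy distance matrix to a predicted clean one; any
    permutation-equivariant network could play this role.
    """
    noisy = coords + noise_std * torch.randn_like(coords)
    target = pairwise_distances(coords)       # SE(3)-invariant target
    pred = model(pairwise_distances(noisy))   # operate on invariant features only
    return ((pred - target) ** 2).mean()

# Placeholder "model": a single linear map over distance rows.
n = 8
coords = torch.randn(n, 3)
model = torch.nn.Sequential(torch.nn.Linear(n, n))
loss = distance_denoising_loss(model, coords)
loss.backward()
print(float(loss))
```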
no code implementations • 16 Jun 2022 • Hanchen Wang, Jean Kaddour, Shengchao Liu, Jian Tang, Matt Kusner, Joan Lasenby, Qi Liu
Graph Self-Supervised Learning (GSSL) paves the way for learning graph embeddings without expert annotation, which is particularly impactful for molecular graphs since the number of possible molecules is enormous and labels are expensive to obtain.
1 code implementation • 7 Jun 2022 • Zhaocheng Zhu, Xinyu Yuan, Mikhail Galkin, Sophie Xhonneux, Ming Zhang, Maxime Gazeau, Jian Tang
Experiments on both transductive and inductive knowledge graph reasoning benchmarks show that A*Net achieves competitive performance with existing state-of-the-art path-based methods, while merely visiting 10% nodes and 10% edges at each iteration.
Ranked #8 on Link Property Prediction on ogbl-wikikg2
1 code implementation • 5 Jun 2022 • Minghao Xu, Zuobai Zhang, Jiarui Lu, Zhaocheng Zhu, Yangtian Zhang, Chang Ma, Runcheng Liu, Jian Tang
However, there is a lack of a standard benchmark to evaluate the performance of different methods, which hinders the progress of deep learning in this field.
no code implementations • 1 Jun 2022 • Dingmin Wang, Shengchao Liu, Hanchen Wang, Bernardo Cuenca Grau, Linfeng Song, Jian Tang, Song Le, Qi Liu
In GraphRetrieval, similar training graphs associated with their ground-truth labels are retrieved as an enhancement to be jointly utilized with the input graph representation to complete various graph property prediction tasks.
1 code implementation • 26 May 2022 • Minghao Xu, Yuanfan Guo, Xuanyu Zhu, Jiawen Li, Zhenbang Sun, Jian Tang, Yi Xu, Bingbing Ni
This framework aims to learn multiple semantic representations for each image, and these representations are structured to encode image semantics from fine-grained to coarse-grained.
no code implementations • 24 May 2022 • Chenqing Hua, Guillaume Rabusseau, Jian Tang
Graph Neural Networks (GNNs) are attracting growing attention due to their effectiveness and flexibility in modeling a variety of graph-structured data.
no code implementations • 23 May 2022 • Kuangqi Zhou, Kaixin Wang, Jiashi Feng, Jian Tang, Tingyang Xu, Xinchao Wang
However, the best existing deep AL methods are mostly developed for a single type of learning task (e.g., single-label classification), and hence may not perform well in molecular property prediction that involves various task types.
1 code implementation • ICML 2022 • Zhaocheng Zhu, Mikhail Galkin, Zuobai Zhang, Jian Tang
Answering complex first-order logic (FOL) queries on knowledge graphs is a fundamental task for multi-hop reasoning.
1 code implementation • ICLR 2022 • Meng Qu, Huiyu Cai, Jian Tang
This problem has been extensively studied with graph neural networks (GNNs) by learning effective node representations, as well as traditional structured prediction methods for modeling the structured output of node labels, e.g., conditional random fields (CRFs).
1 code implementation • 30 Mar 2022 • Sean Bin Yang, Chenjuan Guo, Jilin Hu, Bin Yang, Jian Tang, Christian S. Jensen
In this setting, it is essential to learn generic temporal path representations (TPRs) that consider spatial and temporal correlations simultaneously and that can be used in different applications, i.e., downstream tasks.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
no code implementations • CVPR 2022 • Haowen Wang, Mingyuan Wang, Zhengping Che, Zhiyuan Xu, XIUQUAN QIAO, Mengshi Qi, Feifei Feng, Jian Tang
In this paper, we design a novel two-branch end-to-end fusion network, which takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
1 code implementation • 11 Mar 2022 • Zuobai Zhang, Minghao Xu, Arian Jamasb, Vijil Chenthamarakshan, Aurelie Lozano, Payel Das, Jian Tang
Despite the effectiveness of sequence-based approaches, the power of pretraining on known protein structures, which are available in smaller numbers only, has not been explored for protein property prediction, though protein structures are known to be determinants of protein function.
1 code implementation • ICLR 2022 • Minkai Xu, Lantao Yu, Yang song, Chence Shi, Stefano Ermon, Jian Tang
GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain.
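The GeoDiff entry above describes reversing a diffusion process over atom coordinates as a Markov chain. The sketch below shows a generic ancestral-sampling loop of that kind with a constant noise schedule and a dummy noise predictor; the real model uses a learned, equivariant network and a proper schedule.

```python
import torch

def reverse_diffusion_sample(eps_model, n_atoms, n_steps=100, beta=1e-2):
    """Start from Gaussian noise and iteratively denoise atom coordinates.

    eps_model(x_t, t) should predict the noise added at step t; any callable
    with that signature works. A constant beta replaces scheduled variances.
    """
    alphas = torch.full((n_steps,), 1.0 - beta)
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(n_atoms, 3)                      # x_T ~ N(0, I)
    for t in reversed(range(n_steps)):
        eps = eps_model(x, t)
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(torch.tensor(beta)) * noise
    return x                                          # sampled 3D coordinates

# Dummy noise predictor: pretend every perturbation is a small fraction of x.
sample = reverse_diffusion_sample(lambda x, t: 0.1 * x, n_atoms=5)
print(sample.shape)  # torch.Size([5, 3])
```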
1 code implementation • ACL 2022 • Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, Hong Chen
Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning.
1 code implementation • 22 Feb 2022 • Shengchao Liu, Meng Qu, Zuobai Zhang, Huiyu Cai, Jian Tang
However, in contrast to other domains, the performance of multi-task learning in drug discovery is still not satisfactory, as the number of labeled data points for each task is too limited, which calls for additional data to complement the data scarcity.
1 code implementation • 17 Feb 2022 • Yinuo Zhao, Kun Wu, Zhiyuan Xu, Zhengping Che, Qi Lu, Jian Tang, Chi Harold Liu
Vision-based autonomous urban driving in dense traffic is quite challenging due to the complicated urban environment and the dynamics of the driving behaviors.
1 code implementation • 16 Feb 2022 • Zhaocheng Zhu, Chence Shi, Zuobai Zhang, Shengchao Liu, Minghao Xu, Xinyu Yuan, Yangtian Zhang, Junkun Chen, Huiyu Cai, Jiarui Lu, Chang Ma, Runcheng Liu, Louis-Pascal Xhonneux, Meng Qu, Jian Tang
However, the lack of domain knowledge (e.g., which tasks to work on), standard benchmarks, and data preprocessing pipelines are the main obstacles for machine learning researchers to work in this domain.
1 code implementation • 28 Jan 2022 • Wujie Wang, Minkai Xu, Chen Cai, Benjamin Kurt Miller, Tess Smidt, Yusu Wang, Jian Tang, Rafael Gómez-Bombarelli
Coarse-graining (CG) of molecular simulations simplifies the particle representation by grouping selected atoms into pseudo-beads and drastically accelerates simulation.
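A tiny sketch of the coarse-graining mapping mentioned above: atoms are grouped into pseudo-beads placed at the mass-weighted centroid of their members. The assignment vector and masses are made up for illustration; the paper learns such mappings rather than fixing them by hand.

```python
import numpy as np

def coarse_grain(positions, masses, bead_assignment, n_beads):
    """Map atomistic coordinates to bead coordinates by center of mass.

    positions: (n_atoms, 3); masses: (n_atoms,); bead_assignment: (n_atoms,)
    integer bead index per atom.
    """
    beads = np.zeros((n_beads, 3))
    for b in range(n_beads):
        idx = bead_assignment == b
        w = masses[idx] / masses[idx].sum()
        beads[b] = (w[:, None] * positions[idx]).sum(axis=0)
    return beads

# Toy example: 6 atoms grouped into 2 beads.
pos = np.random.rand(6, 3)
m = np.ones(6)
print(coarse_grain(pos, m, np.array([0, 0, 0, 1, 1, 1]), n_beads=2))
```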
no code implementations • 18 Dec 2021 • Rui Han, Qinglong Zhang, Chi Harold Liu, Guoren Wang, Jian Tang, Lydia Y. Chen
The prior art sheds light on exploring the accuracy-resource trade-off by scaling model sizes in accordance with resource dynamics.
1 code implementation • 13 Dec 2021 • Stephen Bonner, Ufuk Kirik, Ola Engkvist, Jian Tang, Ian P Barrett
We provide support for this observation across different datasets, models as well as predictive tasks.
no code implementations • 3 Dec 2021 • Yichen Zhu, Yuqin Zhu, Jie Du, Yi Wang, Zhicai Ou, Feifei Feng, Jian Tang
The TLA enables the ReViT to process the image with the minimum sufficient number of tokens during inference.
no code implementations • NeurIPS 2021 • Shitong Luo, Chence Shi, Minkai Xu, Jian Tang
However, these non-bonded atoms may be proximal to each other in 3D space, and modeling their interactions is of crucial importance to accurately determine molecular conformations, especially for large molecules and multi-molecular complexes.
no code implementations • 1 Dec 2021 • Yichen Zhu, Jie Du, Yuqin Zhu, Yi Wang, Zhicai Ou, Feifei Feng, Jian Tang
Critically, there is no effort to understand 1) why training only BatchNorm can find well-performing architectures with reduced supernet training time, and 2) what the difference is between the train-BN-only supernet and the standard-train supernet.
no code implementations • NeurIPS 2021 • Minghao Xu, Meng Qu, Bingbing Ni, Jian Tang
We further propose an efficient and effective algorithm for inference based on mean-field variational inference, in which we first provide a warm initialization by independently predicting the objects and their relations according to the current model, followed by a few iterations of relational reasoning.
no code implementations • NeurIPS 2021 • Louis-Pascal A. C. Xhonneux, Andreea Deac, Petar Velickovic, Jian Tang
Due to the fundamental differences between algorithmic reasoning knowledge and feature extractors such as those used in Computer Vision or NLP, we hypothesise that standard transfer techniques will not be sufficient to achieve systematic generalisation.
no code implementations • 21 Oct 2021 • Wenzheng Hu, Zhengping Che, Ning Liu, Mingyang Li, Jian Tang, ChangShui Zhang, Jianqiang Wang
Deep convolutional neural networks are shown to be overkill with high parametric and computational redundancy in many application scenarios, and an increasing number of works have explored model pruning to obtain lightweight and efficient networks.
no code implementations • NeurIPS 2021 • Andreea Deac, Petar Veličković, Ognjen Milinković, Pierre-Luc Bacon, Jian Tang, Mladen Nikolić
We find that prior approaches either assume that the environment is provided in such a tabular form -- which is highly restrictive -- or infer "local neighbourhoods" of states to run value iteration over -- for which we discover an algorithmic bottleneck effect.
1 code implementation • ICLR 2022 • Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, Jian Tang
However, the lack of 3D information in real-world scenarios has significantly impeded the learning of geometric graph representation.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Shengchao Liu, Meng Qu, Zuobai Zhang, Huiyu Cai, Jian Tang
In this paper, we study multi-task learning for molecule property prediction in a different setting, where a relation graph between different tasks is available.
no code implementations • 23 Jul 2021 • Kun Wu, Chengxiang Yin, Zhengping Che, Bo Jiang, Jian Tang, Zheng Guan, Gangyi Ding
Deep generative models have made great progress in synthesizing images with arbitrary human poses and transferring poses of one person to others.
1 code implementation • 17 Jun 2021 • Sean Bin Yang, Chenjuan Guo, Jilin Hu, Jian Tang, Bin Yang
In the global view, PIM distinguishes the representations of the input paths from those of the negative paths.
1 code implementation • NeurIPS 2021 • Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, Jian Tang
To further improve the capacity of the path formulation, we propose the Neural Bellman-Ford Network (NBFNet), a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm.
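To make the "generalized Bellman-Ford with learned operators" idea above concrete, the sketch below runs the classical numeric special case, where messages are edge-weight products and aggregation is a sum; NBFNet replaces these operators with neural parameterizations.

```python
import numpy as np

def generalized_bellman_ford(n_nodes, edges, source, n_iters=5):
    """Propagate a source-conditioned representation along edges.

    edges: list of (u, v, w) with scalar edge weight w. With (+, *) as the
    aggregation/message pair this sums path products from the source -- the
    classical special case that NBFNet generalizes with learned operators.
    """
    h = np.zeros(n_nodes)
    h[source] = 1.0                       # boundary condition (source indicator)
    for _ in range(n_iters):
        new_h = np.zeros(n_nodes)
        new_h[source] = 1.0               # re-apply the boundary condition
        for u, v, w in edges:
            new_h[v] += h[u] * w          # message = h[u] * w, aggregate = sum
        h = new_h
    return h                              # h[t] scores the pair (source, t)

edges = [(0, 1, 0.5), (1, 2, 0.5), (0, 2, 0.25)]
print(generalized_bellman_ford(3, edges, source=0))
```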
no code implementations • 8 Jun 2021 • Hangrui Bi, Hengyi Wang, Chence Shi, Connor Coley, Jian Tang, Hongyu Guo
Reliably predicting the products of chemical reactions presents a fundamental challenge in synthetic chemistry.
1 code implementation • 8 Jun 2021 • Minghao Xu, Hang Wang, Bingbing Ni, Hongyu Guo, Jian Tang
This paper studies unsupervised/self-supervised whole-graph representation learning, which is critical in many tasks such as molecular property prediction in drug and material discovery.
1 code implementation • 15 May 2021 • Minkai Xu, Wujie Wang, Shitong Luo, Chence Shi, Yoshua Bengio, Rafael Gomez-Bombarelli, Jian Tang
Specifically, the molecular graph is first encoded in a latent space, and then the 3D structures are generated by solving a principled bilevel optimization program.
6 code implementations • 9 May 2021 • Chence Shi, Shitong Luo, Minkai Xu, Jian Tang
We study a fundamental problem in computational chemistry known as molecular conformation generation, trying to predict stable 3D structures from 2D molecular graphs.
3 code implementations • ICLR 2021 • Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, Jian Tang
Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph.
no code implementations • 19 Feb 2021 • Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang
In deep model compression, the recent finding "Lottery Ticket Hypothesis" (LTH) (Frankle & Carbin, 2018) pointed out that there could exist a winning ticket (i.e., a properly pruned sub-network together with the original weight initialization) that can achieve performance competitive with the original dense network.
no code implementations • 1 Jan 2021 • Minghao Xu, Hang Wang, Bingbing Ni, Wenjun Zhang, Jian Tang
We propose to disentangle graph structure and node attributes into two distinct sets of representations, and such disentanglement can be done in either the input or the embedding space.
no code implementations • 1 Jan 2021 • Yewen Wang, Jian Tang, Yizhou Sun, Guy Wolf
We empirically analyse our proposed DGL-GNN model, and demonstrate its effectiveness and superior efficiency through a range of experiments.
no code implementations • ICCV 2021 • Chengxiang Yin, Kun Wu, Zhengping Che, Bo Jiang, Zhiyuan Xu, Jian Tang
Deep learning has made tremendous success in computer vision, natural language processing and even visual-semantic learning, which requires a huge amount of labeled training data.
no code implementations • 16 Dec 2020 • Hangrui Bi, Hengyi Wang, Chence Shi, Jian Tang
Our model achieves an order of magnitude lower inference latency, with state-of-the-art top-1 accuracy and comparable performance on top-k sampling.
no code implementations • 9 Dec 2020 • Thomas Gaudelet, Ben Day, Arian R. Jamasb, Jyothish Soman, Cristian Regep, Gertrude Liu, Jeremy B. R. Hayter, Richard Vickers, Charles Roberts, Jian Tang, David Roblin, Tom L. Blundell, Michael M. Bronstein, Jake P. Taylor-King
Graph Machine Learning (GML) is receiving growing interest within the pharmaceutical and biotechnology industries for its ability to model biomolecular structures and the functional relationships between them, and to integrate multi-omic datasets, amongst other data types.
1 code implementation • 7 Dec 2020 • Minkai Xu, Zhiming Zhou, Guansong Lu, Jian Tang, Weinan Zhang, Yong Yu
Wasserstein GANs (WGANs), built upon the Kantorovich-Rubinstein (KR) duality of the Wasserstein distance, are among the most theoretically sound GAN models.
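For context on the KR duality mentioned above, W(P, Q) = sup over 1-Lipschitz f of E_P[f(x)] - E_Q[f(x)]. The sketch below shows the widely used WGAN-GP critic loss that approximates this dual with a gradient penalty; this is the standard formulation, not the specific method proposed in the paper.

```python
import torch

def wgan_gp_critic_loss(critic, real, fake, gp_weight=10.0):
    """Standard WGAN-GP critic objective (not this paper's particular variant).

    Minimizing E[f(fake)] - E[f(real)] maximizes the KR dual estimate of the
    Wasserstein distance; the gradient penalty softly enforces ||f||_L <= 1.
    """
    loss = critic(fake).mean() - critic(real).mean()
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)))
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(dim=1) - 1) ** 2).mean()
    return loss + gp_weight * penalty

critic = torch.nn.Sequential(torch.nn.Linear(8, 1))
real, fake = torch.randn(4, 8), torch.randn(4, 8)
print(wgan_gp_critic_loss(critic, real, fake).item())
```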
no code implementations • 5 Nov 2020 • Yue Shi, Bo Jiang, Zhengping Che, Jian Tang
In this work, we present a novel module, the Fluff block, to alleviate drawbacks of current multi-scale fusion methods and facilitate multi-scale object detection.
no code implementations • 5 Nov 2020 • Xuanzhao Wang, Zhengping Che, Bo Jiang, Ning Xiao, Ke Yang, Jian Tang, Jieping Ye, Jingyu Wang, Qi Qi
In this paper, we propose a novel and robust unsupervised video anomaly detection method based on frame prediction, with a design that is more in line with the characteristics of surveillance videos.
no code implementations • 4 Nov 2020 • Yushuo Guan, Ning Liu, Pengyu Zhao, Zhengping Che, Kaigui Bian, Yanzhi Wang, Jian Tang
The convolutional neural network has achieved great success in fulfilling computer vision tasks, despite a large computational overhead that hinders efficient deployment.
no code implementations • 30 Oct 2020 • Prateek Gupta, Tegan Maharaj, Martin Weiss, Nasim Rahaman, Hannah Alsdurf, Abhinav Sharma, Nanor Minoyan, Soren Harnois-Leblanc, Victor Schmidt, Pierre-Luc St. Charles, Tristan Deleu, Andrew Williams, Akshay Patel, Meng Qu, Olexa Bilaniuk, Gaétan Marceau Caron, Pierre Luc Carrier, Satya Ortiz-Gagné, Marc-Andre Rousseau, David Buckeridge, Joumana Ghosn, Yang Zhang, Bernhard Schölkopf, Jian Tang, Irina Rish, Christopher Pal, Joanna Merckx, Eilif B. Muller, Yoshua Bengio
The rapid global spread of COVID-19 has led to an unprecedented demand for effective methods to mitigate the spread of the disease, and various digital contact tracing (DCT) methods have emerged as a component of the solution.
no code implementations • 26 Oct 2020 • Luanxuan Hou, Jie Cao, Yuan Zhao, Haifeng Shen, Jian Tang, Ran He
We propose a refinement stage for the pyramid features to further boost the accuracy of our network.
no code implementations • 25 Oct 2020 • Andreea Deac, Petar Veličković, Ognjen Milinković, Pierre-Luc Bacon, Jian Tang, Mladen Nikolić
Value Iteration Networks (VINs) have emerged as a popular method to incorporate planning algorithms within deep reinforcement learning, enabling performance improvements on tasks requiring long-range reasoning and understanding of environment dynamics.
1 code implementation • NeurIPS 2020 • Wangchunshu Zhou, Jinyi Hu, HANLIN ZHANG, Xiaodan Liang, Maosong Sun, Chenyan Xiong, Jian Tang
In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
no code implementations • 23 Oct 2020 • HANLIN ZHANG, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing
Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs.
1 code implementation • ICLR 2021 • Yoshua Bengio, Prateek Gupta, Tegan Maharaj, Nasim Rahaman, Martin Weiss, Tristan Deleu, Eilif Muller, Meng Qu, Victor Schmidt, Pierre-Luc St-Charles, Hannah Alsdurf, Olexa Bilanuik, David Buckeridge, Gáetan Marceau Caron, Pierre-Luc Carrier, Joumana Ghosn, Satya Ortiz-Gagne, Chris Pal, Irina Rish, Bernhard Schölkopf, Abhinav Sharma, Jian Tang, Andrew Williams
Predictions are used to provide personalized recommendations to the individual via an app, as well as to send anonymized messages to the individual's contacts, who use this information to better predict their own infectiousness, an approach we call proactive contact tracing (PCT).
1 code implementation • NeurIPS 2020 • Zhiyuan Xu, Kun Wu, Zhengping Che, Jian Tang, Jieping Ye
While Deep Reinforcement Learning (DRL) has emerged as a promising approach to many complex tasks, it remains challenging to train a single DRL agent that is capable of undertaking multiple different continuous control tasks.
no code implementations • 8 Oct 2020 • Cheng Shen, Jianghua Ying, Le Liu, Jianpeng Liu, Na Li, Shuopei Wang, Jian Tang, Yanchong Zhao, Yanbang Chu, Kenji Watanabe, Takashi Taniguchi, Rong Yang, Dongxia Shi, Fanming Qu, Li Lu, Wei Yang, Guangyu Zhang
For θ = 1.25°, we observe the emergence of topological insulating states on the hole side with a sequence of Chern numbers |C| = 4 - |v|, where v is the number of electrons (holes) per moiré unit cell.
Mesoscale and Nanoscale Physics • Materials Science
2 code implementations • ICLR 2021 • Meng Qu, Junkun Chen, Louis-Pascal Xhonneux, Yoshua Bengio, Jian Tang
Then in the E-step, we select a set of high-quality rules from all generated rules with both the rule generator and reasoning predictor via posterior inference; and in the M-step, the rule generator is updated with the rules selected in the E-step.
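A deliberately toy caricature of the EM loop described above, with a probability vector standing in for the rule generator and a fixed likelihood vector standing in for the reasoning predictor; every quantity here is fabricated for illustration and is not the paper's algorithm.

```python
import numpy as np

def rule_em(rule_prior, rule_likelihood, n_rounds=3, k=2):
    """Toy EM over a fixed pool of candidate rules.

    rule_prior: generator's probability over rules (stand-in for an RNN).
    rule_likelihood: how well each rule explains observed triplets according
    to the reasoning predictor. Both are plain vectors here.
    """
    prior = np.asarray(rule_prior, dtype=float)
    likelihood = np.asarray(rule_likelihood, dtype=float)
    for _ in range(n_rounds):
        posterior = prior * likelihood                 # E-step: posterior inference
        posterior /= posterior.sum()
        selected = np.argsort(posterior)[-k:]          # keep the high-quality rules
        target = np.zeros_like(prior)
        target[selected] = 1.0 / k
        prior = 0.5 * prior + 0.5 * target             # M-step: move the generator toward them
        prior /= prior.sum()
    return prior, selected

prior, selected = rule_em([0.4, 0.3, 0.2, 0.1], [0.1, 0.9, 0.8, 0.2])
print(prior, selected)
```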
no code implementations • 26 Sep 2020 • Andreea Deac, Pierre-Luc Bacon, Jian Tang
Previously, such planning components have been incorporated through a neural network that partially aligns with the computational graph of value iteration.
no code implementations • ECCV 2020 • Yushuo Guan, Pengyu Zhao, Bingxuan Wang, Yuanxing Zhang, Cong Yao, Kaigui Bian, Jian Tang
To tackle both the efficiency and the effectiveness of knowledge distillation, we introduce feature aggregation to imitate multi-teacher distillation in the single-teacher distillation framework by extracting informative supervision from multiple teacher feature maps.
no code implementations • 16 Jul 2020 • Simeon Spasov, Alessandro Di Stefano, Pietro Lio, Jian Tang
At each time step link generation is performed by first assigning node membership from a distribution over the communities, and then sampling a neighbor from a distribution over the nodes for the assigned community.
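The two-stage sampling described above can be sketched directly: draw a community for the node, then draw a neighbor from that community's node distribution. The probability matrices below are illustrative placeholders, not learned parameters.

```python
import numpy as np

def sample_links(community_probs, node_probs_per_community, n_links, rng=None):
    """Generate links by community assignment followed by neighbor sampling.

    community_probs: (n_nodes, n_communities) membership distribution per node.
    node_probs_per_community: (n_communities, n_nodes) neighbor distribution
    per community.
    """
    rng = rng or np.random.default_rng(0)
    n_nodes, n_comms = community_probs.shape
    links = []
    for _ in range(n_links):
        u = rng.integers(n_nodes)                                # pick a source node
        z = rng.choice(n_comms, p=community_probs[u])            # assign its community
        v = rng.choice(n_nodes, p=node_probs_per_community[z])   # sample a neighbor
        links.append((u, v))
    return links

comm = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
nbr = np.array([[0.1, 0.1, 0.8], [0.4, 0.4, 0.2]])
print(sample_links(comm, nbr, n_links=5))
```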
1 code implementation • 5 Jul 2020 • Meng Qu, Tianyu Gao, Louis-Pascal A. C. Xhonneux, Jian Tang
To more effectively generalize to new relations, in this paper we study the relationships between different relations and propose to leverage a global relation graph.
no code implementations • 26 Jun 2020 • Ivan Bacher, Hossein Javidnia, Soumyabrata Dev, Rahul Agrahari, Murhaf Hossari, Matthew Nicholson, Clare Conran, Jian Tang, Peng Song, David Corrigan, François Pitié
Over the past decade, the evolution of video-sharing platforms has attracted a significant amount of investments on contextual advertising.
1 code implementation • NeurIPS 2020 • Shengding Hu, Zheng Xiong, Meng Qu, Xingdi Yuan, Marc-Alexandre Côté, Zhiyuan Liu, Jian Tang
Graph neural networks (GNNs) have been attracting increasing popularity due to their simplicity and effectiveness in a variety of fields.
no code implementations • 18 May 2020 • Hannah Alsdurf, Edmond Belliveau, Yoshua Bengio, Tristan Deleu, Prateek Gupta, Daphne Ippolito, Richard Janda, Max Jarvie, Tyler Kolody, Sekoul Krastev, Tegan Maharaj, Robert Obryk, Dan Pilat, Valerie Pisano, Benjamin Prud'homme, Meng Qu, Nasim Rahaman, Irina Rish, Jean-Francois Rousseau, Abhinav Sharma, Brooke Struck, Jian Tang, Martin Weiss, Yun William Yu
Manual contact tracing of Covid-19 cases has significant challenges that limit the ability of public health authorities to minimize community infections.
1 code implementation • 14 May 2020 • Shuang Li, Chi Harold Liu, Qiuxia Lin, Binhui Xie, Zhengming Ding, Gao Huang, Jian Tang
Most existing deep DA models only focus on aligning feature representations of task-specific layers across domains while integrating a totally shared convolutional architecture for source and target.
no code implementations • 1 May 2020 • Mark Marsden, Kevin McGuinness, Joseph Antony, Haolin Wei, Milan Redzic, Jian Tang, Zhilan Hu, Alan Smeaton, Noel E. O'Connor
This work investigates the use of class-level difficulty factors in multi-label classification problems for the first time.
1 code implementation • 26 Apr 2020 • Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Hao-Ran Wei, Shengchao Liu, Karam M. J. Thomas, Simon Blackburn, Connor W. Coley, Jian Tang, Sarath Chandar, Yoshua Bengio
Over the last decade, there has been significant progress in the field of machine learning for de novo drug design, particularly in deep generative models.
no code implementations • ICML 2020 • Chence Shi, Minkai Xu, Hongyu Guo, Ming Zhang, Jian Tang
A fundamental problem in computational chemistry is to find a set of reactants to synthesize a target molecule, a.k.a. retrosynthesis prediction.
Ranked #13 on Single-step retrosynthesis on USPTO-50k
1 code implementation • NeurIPS 2020 • Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Côté, Mikuláš Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, William L. Hamilton
Playing text-based games requires skills in processing natural language and sequential decision making.
1 code implementation • ICLR 2020 • Chence Shi, Minkai Xu, Zhaocheng Zhu, Wei-Nan Zhang, Ming Zhang, Jian Tang
Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention.
Ranked #1 on Molecular Graph Generation on MOSES
no code implementations • 23 Jan 2020 • Xiaolong Ma, Zhengang Li, Yifan Gong, Tianyun Zhang, Wei Niu, Zheng Zhan, Pu Zhao, Jian Tang, Xue Lin, Bin Ren, Yanzhi Wang
Accelerating DNN execution on various resource-limited computing platforms has been a long-standing problem.
no code implementations • ECCV 2020 • Xiaolong Ma, Wei Niu, Tianyun Zhang, Sijia Liu, Sheng Lin, Hongjia Li, Xiang Chen, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang
Weight pruning has been widely acknowledged as a straightforward and effective method to eliminate redundancy in Deep Neural Networks (DNN), thereby achieving acceleration on various platforms.
1 code implementation • ICML 2020 • Louis-Pascal A. C. Xhonneux, Meng Qu, Jian Tang
The key idea is how to characterise the continuous dynamics of node representations, i.e., the derivatives of node representations, w.r.t. time.
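A minimal numeric sketch of the "continuous dynamics of node representations" idea: Euler-integrate a simple diffusion-style ODE dH/dt = (A_norm - I) H + X. The exact dynamics and solver in the paper differ; this only illustrates treating node embeddings as a continuous-time system.

```python
import numpy as np

def integrate_node_ode(adj, x, t_end=1.0, n_steps=100):
    """Euler-integrate dH/dt = (A_norm - I) H + X on a small graph.

    adj: (n, n) adjacency matrix; x: (n, d) initial node features.
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    a_norm = adj / deg                       # row-normalized adjacency
    h, dt = x.copy(), t_end / n_steps
    for _ in range(n_steps):
        dh = a_norm @ h - h + x              # derivative of node representations
        h = h + dt * dh
    return h

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.eye(3)
print(integrate_node_ode(adj, x))
```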
1 code implementation • 13 Nov 2019 • Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, Jian Tang
Pre-trained language representation models (PLMs) do not capture factual knowledge from text well.
1 code implementation • 8 Nov 2019 • Carlos Lassance, Myriam Bontonou, Ghouthi Boukli Hacene, Vincent Gripon, Jian Tang, Antonio Ortega
Specifically we introduce a graph-based RKD method, in which graphs are used to capture the geometry of latent spaces.
no code implementations • 8 Oct 2019 • Soumyabrata Dev, Hossein Javidnia, Murhaf Hossari, Matthew Nicholson, Killian McCabe, Atul Nautiyal, Clare Conran, Jian Tang, Wei Xu, François Pitié
Virtual advertising is an important and promising feature in the area of online advertising.
no code implementations • 25 Sep 2019 • Andreea Deac, Yu-Hsiang Huang, Petar Velickovic, Pietro Lio, Jian Tang
Through many recent advances in graph representation learning, the performance achieved on tasks involving graph-structured data has substantially increased, mostly on tasks involving node-level predictions.
no code implementations • 25 Sep 2019 • Shengding Hu, Meng Qu, Zhiyuan Liu, Jian Tang
Moreover, we also studied how to learn a universal policy for labeling nodes on graphs with multiple training graphs and then transfer the learned policy to unseen graphs.
1 code implementation • 25 Sep 2019 • Vikas Verma, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang
We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization.
Ranked #1 on Node Classification on Bitcoin-Alpha
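As a rough sketch of the interpolation-based regularization and parameter sharing mentioned in the GraphMix entry above: mix random pairs of examples and their soft labels, and reuse one set of weights for both the fully-connected branch and a (here heavily simplified) graph branch. Everything below is illustrative, not the paper's training recipe.

```python
import torch

def mixup(features, targets, alpha=1.0):
    """Mix random pairs of examples and their soft labels (manifold-mixup style,
    applied to raw features here for brevity)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(features.size(0))
    return (lam * features + (1 - lam) * features[perm],
            lam * targets + (1 - lam) * targets[perm])

def soft_cross_entropy(logits, soft_targets):
    return -(soft_targets * torch.log_softmax(logits, dim=1)).sum(dim=1).mean()

# "Parameter sharing" in spirit: one linear layer reused by both branches.
shared = torch.nn.Linear(16, 4)
x = torch.randn(8, 16)
y = torch.softmax(torch.randn(8, 4), dim=1)
mx, my = mixup(x, y)
fcn_loss = soft_cross_entropy(shared(mx), my)   # fully-connected branch on mixed inputs
gnn_loss = soft_cross_entropy(shared(x), y)     # a real GNN branch would aggregate neighbors
(fcn_loss + gnn_loss).backward()
print(float(fcn_loss + gnn_loss))
```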
no code implementations • 25 Sep 2019 • Vikas Verma, Meng Qu, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang
We present GraphMix, a regularization technique for Graph Neural Network based semi-supervised object classification, leveraging the recent advances in the regularization of classical deep neural networks.
no code implementations • 11 Sep 2019 • Carlos Lassance, Vincent Gripon, Jian Tang, Antonio Ortega
Deep Networks have been shown to provide state-of-the-art performance in many machine learning challenges.
no code implementations • 6 Sep 2019 • Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang
Model compression techniques on Deep Neural Network (DNN) have been widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method.
1 code implementation • 3 Sep 2019 • Jordan Hoffmann, Louis Maestrati, Yoshihide Sawada, Jian Tang, Jean Michel Sellier, Yoshua Bengio
We present a method to encode and decode the position of atoms in 3-D molecules from a dataset of nearly 50,000 stable crystal unit cells that vary from containing 1 to over 100 atoms.
1 code implementation • 12 Aug 2019 • Yanru Qu, Ting Bai, Wei-Nan Zhang, Jian-Yun Nie, Jian Tang
This paper studies graph-based recommendation, where an interaction graph is constructed from historical records and is leveraged to alleviate data sparsity and cold start problems.
Ranked #1 on Click-Through Rate Prediction on MovieLens 1M
4 code implementations • ICLR 2020 • Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, Jian Tang
There are also some recent methods based on language models (e.g., graph2vec), but they tend to only consider certain substructures (e.g., subtrees) as graph representatives.
Ranked #21 on Graph Classification on IMDb-B
no code implementations • ICLR 2019 • Meng Qu, Jian Tang, Yoshua Bengio
Therefore, in this paper we propose to study aligning knowledge graphs in a fully-unsupervised or weakly-supervised fashion, i.e., without or with only a few aligned triplets.
no code implementations • 6 Jul 2019 • Ning Liu, Xiaolong Ma, Zhiyuan Xu, Yanzhi Wang, Jian Tang, Jieping Ye
This work proposes AutoCompress, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-the-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop an effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique which has an underlying incompatibility with the target pruning problem.
2 code implementations • 22 Jun 2019 • Weiping Song, Zhijian Duan, Ziqing Yang, Hao Zhu, Ming Zhang, Jian Tang
Recently, a variety of methods have been developed for this problem, which generally try to learn effective representations of users and items and then match items to users according to their representations.
Ranked #1 on Recommendation Systems on Last.FM
2 code implementations • NeurIPS 2019 • Meng Qu, Jian Tang
In the E-step, a knowledge graph embedding model is used for inferring the missing triplets, while in the M-step, the weights of logic rules are updated based on both the observed and predicted triplets.
1 code implementation • NeurIPS 2019 • Fan-Yun Sun, Meng Qu, Jordan Hoffmann, Chin-wei Huang, Jian Tang
Experimental results on multiple real-world graphs show that vGraph is very effective in both community detection and node representation learning, outperforming many competitive baselines in both tasks.
1 code implementation • 11 Jun 2019 • Shagun Sodhani, Anirudh Goyal, Tristan Deleu, Yoshua Bengio, Sergey Levine, Jian Tang
There is ample evidence that humans build a model of the environment, not only by observing it but also by interacting with it.
no code implementations • 19 May 2019 • Zhiqing Sun, Jian Tang, Pan Du, Zhi-Hong Deng, Jian-Yun Nie
Furthermore, we propose a diversified point network to generate a set of diverse keyphrases out of the word graph in the decoding process.
1 code implementation • 15 May 2019 • Meng Qu, Yoshua Bengio, Jian Tang
Statistical relational learning methods can effectively model the dependency of object labels through conditional random fields for collective classification, whereas graph neural networks learn effective object representations for classification through end-to-end training.
no code implementations • 6 May 2019 • Soumyabrata Dev, Murhaf Hossari, Matthew Nicholson, Killian McCabe, Atul Nautiyal, Clare Conran, Jian Tang, Wei Xu, François Pitié
Such techniques involve replacing an existing advertisement in a video frame, with a new advertisement.
1 code implementation • 2 May 2019 • Andreea Deac, Yu-Hsiang Huang, Petar Veličković, Pietro Liò, Jian Tang
Complex or co-existing diseases are commonly treated using drug combinations, which can lead to higher risk of adverse side effects.
no code implementations • 1 May 2019 • Myriam Bontonou, Carlos Lassance, Ghouthi Boukli Hacene, Vincent Gripon, Jian Tang, Antonio Ortega
We introduce a novel loss function for training deep learning architectures to perform classification.
no code implementations • 28 Apr 2019 • Jacek Czaja, Michal Gallus, Tomasz Patejko, Jian Tang
Softmax is a popular normalization method used in machine learning.
no code implementations • 16 Apr 2019 • Soumyabrata Dev, Murhaf Hossari, Matthew Nicholson, Killian McCabe, Atul Nautiyal, Clare Conran, Jian Tang, Wei Xu, François Pitié
The rapid increase in the number of online videos provides the marketing and advertising agents ample opportunities to reach out to their audience.
no code implementations • CVPR 2019 • Tongtong Yuan, Weihong Deng, Jian Tang, Yinan Tang, Binghui Chen
In this paper, different from the approaches on learning the loss structures, we propose a robust SNR distance metric based on Signal-to-Noise Ratio (SNR) for measuring the similarity of image pairs for deep metric learning.
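A small sketch of an SNR-style distance as I understand the idea above: treat the anchor embedding as the signal and the difference to the other embedding as noise, so the distance is var(noise) / var(signal). Consult the paper for the exact definition; the function below is an assumption, not the paper's code.

```python
import numpy as np

def snr_distance(anchor, other, eps=1e-12):
    """Signal-to-Noise-Ratio distance between two embeddings (assumed form).

    The anchor embedding is treated as the signal and (other - anchor) as the
    noise, giving var(other - anchor) / var(anchor).
    """
    signal_var = np.var(anchor)
    noise_var = np.var(other - anchor)
    return noise_var / (signal_var + eps)

a = np.random.randn(128)
print(snr_distance(a, a + 0.05 * np.random.randn(128)))  # small perturbation -> small distance
print(snr_distance(a, np.random.randn(128)))             # unrelated embedding -> larger distance
```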
2 code implementations • 23 Mar 2019 • Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang
A recent work developed a systematic framework of DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Method of Multipliers), achieving one of the state-of-the-art weight pruning results.
no code implementations • 21 Mar 2019 • Soumyabrata Dev, Murhaf Hossari, Matthew Nicholson, Killian McCabe, Atul Nautiyal, Clare Conran, Jian Tang, Wei Xu, François Pitié
With the advent of faster internet services and growth of multimedia content, we observe a massive growth in the number of online videos.
no code implementations • 20 Mar 2019 • Lu-chen Liu, Haoran Li, Zhiting Hu, Haoran Shi, Zichang Wang, Jian Tang, Ming Zhang
Our model learns hierarchical representations of event sequences, to adaptively distinguish between short-range and long-range events, and accurately capture core temporal dependencies.
1 code implementation • 2 Mar 2019 • Zhaocheng Zhu, Shizhen Xu, Meng Qu, Jian Tang
In this paper, we propose GraphVite, a high-performance CPU-GPU hybrid system for training node embeddings, by co-optimizing the algorithm and the system.
Ranked #1 on Node Classification on YouTube
6 code implementations • ICLR 2019 • Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, Jian Tang
We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links.
Ranked #2 on Link Prediction on FB122
2 code implementations • 25 Feb 2019 • Weiping Song, Zhiping Xiao, Yifan Wang, Laurent Charlin, Ming Zhang, Jian Tang
However, recommendation in online communities is a challenging problem: 1) users' interests are dynamic, and 2) users are influenced by their friends.
Ranked #1 on Recommendation Systems on Douban (NDCG metric)
no code implementations • 21 Nov 2018 • Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, Jackie Chi Kit Cheung
Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention.
no code implementations • 9 Nov 2018 • Murhaf Hossari, Soumyabrata Dev, Matthew Nicholson, Killian McCabe, Atul Nautiyal, Clare Conran, Jian Tang, Wei Xu, François Pitié
Online video advertising gives content providers the ability to deliver compelling content, reach a growing audience, and generate additional revenue from online media.
no code implementations • 30 Oct 2018 • Mingjie Sun, Jian Tang, Huichen Li, Bo Li, Chaowei Xiao, Yao Chen, Dawn Song
In this paper, we take the task of link prediction as an example, which is one of the most fundamental problems for graph analysis, and introduce a data positioning attack to node embedding methods.
13 code implementations • 29 Oct 2018 • Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, Jian Tang
Afterwards, a multi-head self-attentive neural network with residual connections is proposed to explicitly model the feature interactions in the low-dimensional space.
Ranked #1 on Click-Through Rate Prediction on KDD12
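A minimal self-attention interaction block in the spirit of the entry above: feature-field embeddings attend to each other and a residual connection is added. Dimensions and depth are illustrative; this assumes a recent PyTorch where MultiheadAttention supports batch_first, and it is not the paper's exact architecture.

```python
import torch

class InteractionLayer(torch.nn.Module):
    """One self-attention-plus-residual block over feature-field embeddings.

    Input shape: (batch, n_fields, d). Each field attends to all fields, so
    higher-order feature interactions are modeled in the embedding space.
    """
    def __init__(self, d_model=16, n_heads=2):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x)     # each field attends to all fields
        return torch.relu(out + x)      # residual connection

layer = InteractionLayer()
fields = torch.randn(4, 10, 16)         # 4 samples, 10 feature fields, dim 16
print(layer(fields).shape)              # torch.Size([4, 10, 16])
```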
no code implementations • ICLR 2019 • Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang
Motivated by dynamic programming, the proposed method reaches extremely high pruning rate by using partial prunings with moderate pruning rates.
no code implementations • 27 Sep 2018 • Shagun Sodhani, Anirudh Goyal, Tristan Deleu, Yoshua Bengio, Jian Tang
Analogously, we would expect such interaction to be helpful for a learning agent while learning to model the environment dynamics.
1 code implementation • 29 Jul 2018 • Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Xiaolong Ma, Ning Liu, Linfeng Zhang, Jian Tang, Kaisheng Ma, Xue Lin, Makan Fardad, Yanzhi Wang
Without loss of accuracy on the AlexNet model, we achieve 2.58X and 3.65X average measured speedup on two GPUs, clearly outperforming the prior work.
1 code implementation • 15 Jul 2018 • Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, Jie Tang
Inspired by the recent success of deep neural networks in a wide range of computing applications, we design an end-to-end framework, DeepInf, to learn users' latent feature representation for predicting social influence.
no code implementations • 8 Jun 2018 • Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang
Meta-learning enables a model to learn from very limited data to undertake a new task.
3 code implementations • ECCV 2018 • Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, Yanzhi Wang
We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning.
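To make the ADMM formulation above concrete, the sketch below alternates a heavily simplified weight update, a Euclidean projection onto the sparsity constraint set (keep the top-k magnitudes), and a dual update. In practice the W-update is a full SGD pass on the penalized training loss; the closed-form proxy here is only for illustration.

```python
import numpy as np

def project_to_sparsity(w, keep_ratio):
    """Euclidean projection onto {weights with at most k nonzeros}: keep the
    k largest-magnitude entries. This is the Z-update of the ADMM scheme."""
    k = max(1, int(keep_ratio * w.size))
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w).ravel())[-k:]
    out.ravel()[idx] = w.ravel()[idx]
    return out

def admm_pruning_step(w, z, u, rho=1e-3, keep_ratio=0.1):
    """One simplified ADMM round: proxy W-update, Z projection, dual update."""
    w = (w + rho * (z - u)) / (1 + rho)      # stand-in for the loss-aware W-update
    z = project_to_sparsity(w + u, keep_ratio)
    u = u + w - z                            # dual variable update
    return w, z, u

w = np.random.randn(8, 8)
z, u = w.copy(), np.zeros_like(w)
for _ in range(20):
    w, z, u = admm_pruning_step(w, z, u)
print((z != 0).sum(), "nonzeros kept out of", z.size)
```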
no code implementations • 14 Mar 2018 • Yanzhi Wang, Zheng Zhan, Jiayu Li, Jian Tang, Bo Yuan, Liang Zhao, Wujie Wen, Siyue Wang, Xue Lin
Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity.
1 code implementation • 13 Mar 2018 • Lu-chen Liu, Jianhao Shen, Ming Zhang, Zichang Wang, Jian Tang
One important application is clinical endpoint prediction, which aims to predict whether a disease, a symptom or an abnormal lab test will happen in the future according to patients' history records.
no code implementations • 2 Mar 2018 • Teng Li, Zhiyuan Xu, Jian Tang, Yanzhi Wang
Specifically, we, for the first time, propose to leverage emerging Deep Reinforcement Learning (DRL) for enabling model-free control in DSDPSs, and present the design, implementation, and evaluation of a novel and highly effective DRL-based control framework, which minimizes average end-to-end tuple processing time by jointly learning the system environment via collecting very limited runtime statistics data and making decisions under the guidance of powerful Deep Neural Networks.
no code implementations • 18 Feb 2018 • Yanzhi Wang, Caiwen Ding, Zhe Li, Geng Yuan, Siyu Liao, Xiaolong Ma, Bo Yuan, Xuehai Qian, Jian Tang, Qinru Qiu, Xue Lin
Hardware accelerations of deep learning systems have been extensively investigated in industry and academia.
no code implementations • 28 Jan 2018 • Ning Liu, Ying Liu, Brent Logan, Zhiyuan Xu, Jian Tang, Yanzhi Wang
This paper presents the first deep reinforcement learning (DRL) framework to estimate the optimal Dynamic Treatment Regimes from observational medical data.
no code implementations • 17 Jan 2018 • Zhiyuan Xu, Jian Tang, Jingsong Meng, Weiyi Zhang, Yanzhi Wang, Chi Harold Liu, Dejun Yang
Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control.
1 code implementation • 9 Jan 2018 • Hui Wu, Matrix Yao, Albert Hu, Gaofeng Sun, Xiaokun Yu, Jian Tang
Lung nodule proposal generation is the primary step of lung nodule detection and has received much attention in recent years.
no code implementations • 21 Nov 2017 • Quanyu Dai, Qiang Li, Jian Tang, Dan Wang
Learning low-dimensional representations of networks has proved effective in a variety of tasks such as node classification, link prediction and network visualization.
1 code implementation • 19 Sep 2017 • Meng Qu, Jian Tang, Jingbo Shang, Xiang Ren, Ming Zhang, Jiawei Han
Existing approaches usually study networks with a single type of proximity between nodes, which defines a single view of a network.
no code implementations • 30 Aug 2017 • Jian Tang, Yue Wang, Kai Zheng, Qiaozhu Mei
A novel deep memory network is proposed to automatically find relevant information from a collection of longer documents and reformulate the short text through a gating mechanism.
no code implementations • 29 Aug 2017 • Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, Xiaolong Ma, Yi-Peng Zhang, Jian Tang, Qinru Qiu, Xue Lin, Bo Yuan
As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy.
no code implementations • 13 Mar 2017 • Ning Liu, Zhe Li, Zhiyuan Xu, Jielong Xu, Sheng Lin, Qinru Qiu, Jian Tang, Yanzhi Wang
Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in the cloud computing system.