1 code implementation • ACL 2022 • Daniel Zhang-li, Jing Zhang, Jifan Yu, Xiaokang Zhang, Peng Zhang, Jie Tang, Juanzi Li
We investigate the usage of entity linking (EL) in downstream tasks and present the first modularized EL toolkit for easy task adaptation.
no code implementations • ACL 2022 • Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training.
1 code implementation • 28 Feb 2023 • Jing Zhang, Xiaokang Zhang, Daniel Zhang-li, Jifan Yu, Zijun Yao, Zeyao Ma, Yiqi Xu, Haohua Wang, Xiaohan Zhang, Nianyi Lin, Sunrui Lu, Juanzi Li, Jie Tang
We present GLM-Dialog, a large-scale language model (LLM) with 10B parameters capable of knowledge-grounded conversation in Chinese, using a search engine to access knowledge from the Internet.
1 code implementation • 23 Feb 2023 • Bo Chen, Jing Zhang, Fanjin Zhang, Tianyi Han, Yuqing Cheng, Xiaoyan Li, Yuxiao Dong, Jie Tang
Name disambiguation, a fundamental problem in online academic systems, is now facing greater challenges with the rapid growth of research papers.
no code implementations • 31 Jan 2023 • Jacob Hilton, Jie Tang, John Schulman
Recent work has shown that, in generative modeling, cross-entropy loss improves smoothly with model size and training compute, following a power law plus constant scaling law.
1 code implementation • 30 Nov 2022 • Jie Liu, Chao Chen, Jie Tang, Gangshan Wu
In the fine area, we use an Intra-Patch Self-Attention (IPSA) module to model long-range pixel dependencies in a local patch, and then a $3\times3$ convolution is applied to process the finest details.
no code implementations • 21 Nov 2022 • Tim Tianyi Yang, Tom Tianze Yang, Andrew Liu, Jie Tang, Na An, Shaoshan Liu, Xue Liu
Also, through the AICOM-MP project, we have generalized a methodology of developing health AI technologies for AMCs to allow universal access even in resource-constrained environments.
1 code implementation • 30 Oct 2022 • Zhuoyi Yang, Ming Ding, Yanhui Guo, Qingsong Lv, Jie Tang
In this paper, we find that parameter-efficient tuning makes a good classification head, with which we can simply replace the randomly initialized heads for a stable performance gain.
2 code implementations • 5 Oct 2022 • Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, WenGuang Chen, Peng Zhang, Yuxiao Dong, Jie Tang
We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model with 130 billion parameters.
Ranked #1 on Language Modelling on CLUE (CMRC2018)
1 code implementation • 16 Aug 2022 • Xiao Liu, Shiyu Zhao, Kai Su, Yukuo Cen, Jiezhong Qiu, Mengdi Zhang, Wei Wu, Yuxiao Dong, Jie Tang
In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies.
no code implementations • 18 Jul 2022 • Qingyang Zhong, Jifan Yu, Zheyuan Zhang, Yiming Mao, Yuquan Wang, Yankai Lin, Lei Hou, Juanzi Li, Jie Tang
Adaptive learning aims to stimulate and meet the needs of individual learners, which requires sophisticated system-level coordination of diverse tasks, including modeling learning resources, estimating student states, and making personalized recommendations.
2 code implementations • 14 Jul 2022 • Weng Lam Tam, Xiao Liu, Kaixuan Ji, Lilong Xue, Xingjian Zhang, Yuxiao Dong, Jiahua Liu, Maodi Hu, Jie Tang
By updating only 0.1% of the model parameters, the prompt tuning strategy can help retrieval models achieve better generalization performance than traditional methods in which all parameters are updated.
1 code implementation • 23 Jun 2022 • Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, Jeff Clune
Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities.
1 code implementation • 22 Jun 2022 • Xiaoxuan Liu, Lianmin Zheng, Dequan Wang, Yukuo Cen, Weize Chen, Xu Han, Jianfei Chen, Zhiyuan Liu, Jie Tang, Joey Gonzalez, Michael Mahoney, Alvin Cheung
Training large neural network (NN) models requires extensive memory resources, and Activation Compressed Training (ACT) is a promising approach to reduce training memory footprint.
1 code implementation • 17 Jun 2022 • Rui He, Yuanxi Sun, Youzeng Li, Zuwei Huang, Feng Hu, Xu Cheng, Jie Tang
In this paper, we apply Masked Autoencoders to improve algorithm performance on the GEBD tasks.
1 code implementation • 29 May 2022 • Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang
Large-scale pretrained transformers have created milestones in text (GPT-3) and text-to-image (DALL-E and CogView) generation.
Ranked #7 on Video Generation on UCF-101
1 code implementation • 28 May 2022 • Ziang Li, Ming Ding, Weikai Li, Zihan Wang, Ziyu Zeng, Yukuo Cen, Jie Tang
graph benchmark (IGB) consisting of 4 datasets.
2 code implementations • 22 May 2022 • Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, Jie Tang
Despite this, contrastive learning, which heavily relies on structural data augmentation and complicated training strategies, has been the dominant approach in graph SSL, while generative SSL on graphs, especially graph autoencoders (GAEs), has thus far not reached the potential promised in other fields.
Ranked #1 on Node Classification on Cora: fixed 20 node per class
1 code implementation • Findings (ACL) 2022 • Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song
We introduce a method for improving the structural understanding abilities of language models.
Ranked #1 on Relation Extraction on TACRED
2 code implementations • 11 May 2022 • Yawei Li, Kai Zhang, Radu Timofte, Luc van Gool, Fangyuan Kong, Mingxi Li, Songwei Liu, Zongcai Du, Ding Liu, Chenhui Zhou, Jingyi Chen, Qingrui Han, Zheyuan Li, Yingqi Liu, Xiangyu Chen, Haoming Cai, Yu Qiao, Chao Dong, Long Sun, Jinshan Pan, Yi Zhu, Zhikai Zong, Xiaoxiao Liu, Zheng Hui, Tao Yang, Peiran Ren, Xuansong Xie, Xian-Sheng Hua, Yanbo Wang, Xiaozhong Ji, Chuming Lin, Donghao Luo, Ying Tai, Chengjie Wang, Zhizhong Zhang, Yuan Xie, Shen Cheng, Ziwei Luo, Lei Yu, Zhihong Wen, Qi Wu1, Youwei Li, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Yuanfei Huang, Meiguang Jin, Hua Huang, Jing Liu, Xinjian Zhang, Yan Wang, Lingshun Long, Gen Li, Yuanfan Zhang, Zuowei Cao, Lei Sun, Panaetov Alexander, Yucong Wang, Minjie Cai, Li Wang, Lu Tian, Zheyuan Wang, Hongbing Ma, Jie Liu, Chao Chen, Yidong Cai, Jie Tang, Gangshan Wu, Weiran Wang, Shirui Huang, Honglei Lu, Huan Liu, Keyan Wang, Jun Chen, Shi Chen, Yuchun Miao, Zimo Huang, Lefei Zhang, Mustafa Ayazoğlu, Wei Xiong, Chengyi Xiong, Fei Wang, Hao Li, Ruimian Wen, Zhijing Yang, Wenbin Zou, Weixin Zheng, Tian Ye, Yuncheng Zhang, Xiangzhen Kong, Aditya Arora, Syed Waqas Zamir, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Dandan Gaoand Dengwen Zhouand Qian Ning, Jingzhu Tang, Han Huang, YuFei Wang, Zhangheng Peng, Haobo Li, Wenxue Guan, Shenghua Gong, Xin Li, Jun Liu, Wanjun Wang, Dengwen Zhou, Kun Zeng, Hanjiang Lin, Xinyu Chen, Jinsheng Fang
The aim was to design a network for single image super-resolution that improved efficiency, measured by several metrics including runtime, parameters, FLOPs, activations, and memory consumption, while at least maintaining a PSNR of 29.00 dB on the DIV2K validation set.
1 code implementation • 28 Apr 2022 • Ming Ding, Wendi Zheng, Wenyi Hong, Jie Tang
The development of transformer-based text-to-image models is impeded by their slow generation and the complexity of high-resolution images.
1 code implementation • 18 Apr 2022 • Zongcai Du, Ding Liu, Jie Liu, Jie Tang, Gangshan Wu, Lean Fu
Besides, FMEN-S achieves the lowest memory consumption and the second shortest runtime in NTIRE 2022 challenge on efficient super-resolution.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
no code implementations • 22 Mar 2022 • Sha Yuan, Shuai Zhao, Jiahong Leng, Zhao Xue, Hanyu Zhao, Peiyu Liu, Zheng Gong, Wayne Xin Zhao, Junyi Li, Jie Tang
The results show that WuDaoMM can be applied as an efficient dataset for VLPMs, especially for models in the text-to-image generation task.
1 code implementation • 17 Mar 2022 • Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Xiaoyan Zhu, Jie Tang, Minlie Huang
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems.
no code implementations • 14 Mar 2022 • Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, Maosong Sun
This necessitates a new branch of research focusing on the parameter-efficient adaptation of PLMs, dubbed delta tuning in this paper.
1 code implementation • 12 Mar 2022 • Wenzheng Feng, Yuxiao Dong, Tinglin Huang, Ziqi Yin, Xu Cheng, Evgeny Kharlamov, Jie Tang
In this work, we present a scalable and high-performance GNN framework GRAND+ for semi-supervised graph learning.
Ranked #1 on Node Classification on MAG-scholar-C
no code implementations • 8 Mar 2022 • Jibing Gong, Yao Wan, Ye Liu, Xuewen Li, Yi Zhao, Cheng Wang, Qing Li, Wenzheng Feng, Jie Tang
Specifically, we first formulate the concept recommendation in MOOCs as a reinforcement learning problem to better model the dynamic interaction among users and knowledge concepts.
1 code implementation • 2 Mar 2022 • Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov, Yuxiao Dong, Jie Tang
We present SelfKG with efficient strategies to optimize this objective for aligning entities without label supervision.
1 code implementation • ACL 2022 • Jing Zhang, Xiaokang Zhang, Jifan Yu, Jian Tang, Jie Tang, Cuiping Li, Hong Chen
Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning.
1 code implementation • 14 Jan 2022 • Zhiyuan Liu, Yixin Cao, Fuli Feng, Xiang Wang, Jie Tang, Kenji Kawaguchi, Tat-Seng Chua
We present a framework of Training Free Graph Matching (TFGM) to boost the performance of Graph Neural Networks (GNNs) based graph matching, providing a fast and promising solution without training (training-free).
no code implementations • CVPR 2022 • Chaojie Yang, Hanhui Li, Shengjie Wu, Shengkai Zhang, Haonan Yan, Nianhong Jiao, Jie Tang, Runnan Zhou, Xiaodan Liang, Tianxiang Zheng
This is because current methods mainly rely on a single pose/appearance model, which is limited in disentangling various poses and appearance in human images.
1 code implementation • 30 Dec 2021 • Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, Jie Tang
Heterogeneous graph neural networks (HGNNs) have been blossoming in recent years, but the unique data processing and evaluation setups used by each work obstruct a full understanding of their advancements.
4 code implementations • 8 Dec 2021 • Chenhui Zhang, Yufei He, Yukuo Cen, Zhenyu Hou, Wenzheng Feng, Yuxiao Dong, Xu Cheng, Hongyun Cai, Feng He, Jie Tang
However, it is unclear how to best design the generalization strategies in GNNs, as it works in a semi-supervised setting for graph data.
Ranked #3 on Node Property Prediction on ogbn-papers100M
no code implementations • NeurIPS 2021 • Yi Ma, Xiaotian Hao, Jianye Hao, Jiawen Lu, Xing Liu, Tong Xialiang, Mingxuan Yuan, Zhigang Li, Jie Tang, Zhaopeng Meng
To address this problem, existing methods partition the overall DPDP into fixed-size sub-problems by caching online generated orders and solving each sub-problem, or, on this basis, utilize predicted future orders to further optimize each sub-problem.
no code implementations • NeurIPS 2021 • Jialin Zhao, Yuxiao Dong, Ming Ding, Evgeny Kharlamov, Jie Tang
Notably, message passing based GNNs, e.g., graph convolutional networks, leverage the immediate neighbors of each node during the aggregation process, and recently, graph diffusion convolution (GDC) was proposed to expand the propagation neighborhood by leveraging generalized graph diffusion.
1 code implementation • 27 Nov 2021 • Jie Liu, Jie Tang, Gangshan Wu
We found that the standard deviation of the residual feature shrinks substantially after normalization layers, which causes performance degradation in SR networks.
no code implementations • 21 Nov 2021 • Xueyi Liu, Jie Tang
Representation learning can facilitate the design of new algorithms on the graph data.
no code implementations • 15 Nov 2021 • Hanyu Zhao, Sha Yuan, Jiahong Leng, Xiang Pan, Guoqiang Wang, Ledell Wu, Jie Tang
Knowledge Base Question Answering (KBQA) aims to answer natural language questions with the help of an external knowledge base.
1 code implementation • 8 Nov 2021 • Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, Jie Tang
To bridge this gap, we present the Graph Robustness Benchmark (GRB) with the goal of providing a scalable, unified, modular, and reproducible evaluation for the adversarial robustness of GML models.
2 code implementations • 14 Oct 2021 • Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training.
1 code implementation • ACL 2022 • Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Chonghua Liao, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang
The few-shot natural language understanding (NLU) task has attracted much recent attention.
1 code implementation • EMNLP 2021 • Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song
We cast a suite of information extraction tasks into a text-to-triple translation framework.
Ranked #1 on Open Information Extraction on OIE2016 (using extra training data)
2 code implementations • 17 Aug 2021 • Bo Chen, Jing Zhang, Xiaokang Zhang, Yuxiao Dong, Jian Song, Peng Zhang, Kaibo Xu, Evgeny Kharlamov, Jie Tang
To achieve the contrastive objective, we design a graph neural network encoder that can infer and further remove suspicious links during message passing, as well as learn the global context of the input graph.
2 code implementations • 17 Aug 2021 • Yijia Xiao, Jiezhong Qiu, Ziang Li, Chang-Yu Hsieh, Jie Tang
The emergence of deep learning models makes it possible to model data patterns in large quantities of data.
1 code implementation • ACL 2022 • Jing Zhou, Yanan Zheng, Jie Tang, Jian Li, Zhilin Yang
Most previous methods for text data augmentation are limited to simple tasks and weak baselines.
2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang
Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the dialogue data and the model size compared with English ones.
8 code implementations • 7 Jul 2021 • Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba
We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities.
Ranked #1 on Multi-task Language Understanding on BBH-alg
no code implementations • 28 Jun 2021 • Ingmar Kanitscheider, Joost Huizinga, David Farhi, William Hebgen Guss, Brandon Houghton, Raul Sampedro, Peter Zhokhov, Bowen Baker, Adrien Ecoffet, Jie Tang, Oleg Klimov, Jeff Clune
An important challenge in reinforcement learning is training agents that can solve a wide variety of tasks.
no code implementations • 22 Jun 2021 • Yinyu Jin, Sha Yuan, Zhou Shao, Wendy Hall, Jie Tang
The Turing Award is recognized as the most influential and prestigious award in the field of computer science (CS).
no code implementations • 19 Jun 2021 • Yushan Liu, Shun Zhang, Feifei Gao, Jie Tang, Octavia A. Dobre
Channel estimation is challenging for reconfigurable intelligent surface (RIS) assisted millimeter wave (mmWave) communications.
1 code implementation • 17 Jun 2021 • Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov, Yuxiao Dong, Jie Tang
We present SelfKG by leveraging this discovery to design a contrastive learning strategy across two KGs.
no code implementations • 14 Jun 2021 • Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan YAO, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
1 code implementation • 12 Jun 2021 • Xu Zou, Qinkai Zheng, Yuxiao Dong, Xinyu Guan, Evgeny Kharlamov, Jialiang Lu, Jie Tang
In the GIA scenario, the adversary is not able to modify the existing link structure and node attributes of the input graph; instead, the attack is performed by injecting adversarial nodes into it.
1 code implementation • 2 Jun 2021 • Diogo Almeida, Clemens Winter, Jie Tang, Wojciech Zaremba
A core issue with learning to optimize neural networks has been the lack of generalization to real world problems.
no code implementations • NeurIPS 2021 • Zhu Zhang, Jianxin Ma, Chang Zhou, Rui Men, Zhikang Li, Ming Ding, Jie Tang, Jingren Zhou, Hongxia Yang
Conditional image synthesis aims to create an image according to some multi-modal guidance in the forms of textual descriptions, reference images, and image blocks to preserve, as well as their combinations.
3 code implementations • NeurIPS 2021 • Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, Jie Tang
Text-to-Image generation in the general domain has long been an open problem, which requires both a powerful generative model and cross-modal understanding.
Ranked #51 on Text-to-Image Generation on COCO (using extra training data)
2 code implementations • 20 May 2021 • Zongcai Du, Jie Liu, Jie Tang, Gangshan Wu
Along with the rapid development of real-world applications, higher requirements on the accuracy and efficiency of image super-resolution (SR) are brought forward.
3 code implementations • 24 Mar 2021 • Jiaao He, Jiezhong Qiu, Aohan Zeng, Zhilin Yang, Jidong Zhai, Jie Tang
However, training trillion-scale MoE requires algorithm and system co-design for a well-tuned high performance distributed training system.
1 code implementation • 19 Mar 2021 • Xu Zou, Da Yin, Qingyang Zhong, Ming Ding, Hongxia Yang, Zhilin Yang, Jie Tang
To tackle this challenge, we propose an innovative method, inverse prompting, to better control text generation.
2 code implementations • ACL 2022 • Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang
On a wide range of tasks across NLU, conditional and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25x the parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.
Ranked #2 on Document Summarization on CNN / Daily Mail
5 code implementations • 18 Mar 2021 • Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
On the SuperGLUE benchmark, GPTs achieve performance comparable to, and sometimes better than, similar-sized BERTs in supervised learning.
1 code implementation • 4 Mar 2021 • Fanjin Zhang, Jie Tang, Xueyi Liu, Zhenyu Hou, Yuxiao Dong, Jing Zhang, Xiao Liu, Ruobing Xie, Kai Zhuang, Xu Zhang, Leyu Lin, Philip S. Yu
"Top Stories" is a novel friend-enhanced recommendation engine in WeChat, in which users can read articles based on preferences of both their own and their friends.
Graph Representation Learning • Social and Information Networks
1 code implementation • 3 Mar 2021 • Xiao Liu, Da Yin, Jingnan Zheng, Xingjian Zhang, Peng Zhang, Hongxia Yang, Yuxiao Dong, Jie Tang
Academic knowledge services have substantially facilitated the development of the science enterprise by providing a wealth of efficient research tools.
1 code implementation • 1 Mar 2021 • Yukuo Cen, Zhenyu Hou, Yan Wang, Qibin Chen, Yizhen Luo, Zhongming Yu, Hengrui Zhang, Xingcheng Yao, Aohan Zeng, Shiguang Guo, Yuxiao Dong, Yang Yang, Peng Zhang, Guohao Dai, Yu Wang, Chang Zhou, Hongxia Yang, Jie Tang
Deep learning on graphs has attracted tremendous attention from the graph learning community in recent years.
no code implementations • 1 Mar 2021 • Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, Xiaodong Deng, Jie Liu, Jinbao Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, Jie Tang, Hongxia Yang
In this work, we construct the largest dataset for multimodal pretraining in Chinese, which consists of over 1.9 TB of images and 292 GB of texts that cover a wide range of domains.
no code implementations • 1 Jan 2021 • Jiezhong Qiu, Yukuo Cen, Qibin Chen, Chang Zhou, Jingren Zhou, Hongxia Yang, Jie Tang
Based on the theoretical analysis, we propose Local Clustering Graph Neural Networks (LCGNN), a GNN learning paradigm that utilizes local clustering to efficiently search for small but compact subgraphs for GNN training and inference.
1 code implementation • 1 Jan 2021 • Jialin Zhao, Yuxiao Dong, Jie Tang, Ming Ding, Kuansan Wang
Graph convolutional networks (GCNs) have emerged as a powerful framework for mining and learning with graphs.
2 code implementations • 14 Dec 2020 • Bo Chen, Jing Zhang, Xiaokang Zhang, Xiaobin Tang, Lingfan Cai, Hong Chen, Cuiping Li, Peng Zhang, Jie Tang
In this paper, we propose CODE, which first pre-trains an expert linking model by contrastive learning on AMiner so that it can capture the representation and matching patterns of experts without supervised signals, and then fine-tunes the model between AMiner and external sources in an adversarial manner to enhance its transferability.
no code implementations • 2 Dec 2020 • Yiming Gan, Yu Bo, Boyuan Tian, Leimeng Xu, Wei Hu, Shaoshan Liu, Qiang Liu, Yanjun Zhang, Jie Tang, Yuhao Zhu
We develop and commercialize autonomous machines, such as logistic robots and self-driving cars, around the globe.
Self-Driving Cars • Hardware Architecture
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Jifan Yu, Chenyu Wang, Gan Luo, Lei Hou, Juanzi Li, Jie Tang, Minlie Huang, Zhiyuan Liu
Amid the prosperity of Massive Open Online Courses (MOOCs), education applications that automatically provide extracurricular knowledge for MOOC users have become a rising research topic.
Hierarchical Reinforcement Learning • Reinforcement Learning
3 code implementations • 1 Dec 2020 • Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun
However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available.
1 code implementation • NeurIPS 2020 • Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, Jie Tang
We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored.
1 code implementation • NeurIPS 2020 • Ming Ding, Chang Zhou, Hongxia Yang, Jie Tang
BERTs are incapable of processing long texts due to their quadratically increasing memory and time consumption.
2 code implementations • 24 Sep 2020 • Jie Liu, Jie Tang, Gangshan Wu
Thanks to FDC, we can rethink the information multi-distillation network (IMDN) and propose a lightweight and accurate SISR model called residual feature distillation network (RFDN).
3 code implementations • 15 Sep 2020 • Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng, Jian Cheng, Guangyang Wu, Wenyi Wang, Xiaohong Liu, Hengyuan Zhao, Xiangtao Kong, Jingwen He, Yu Qiao, Chao Dong, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Xiaochuan Li, Zhiqiang Lang, Jiangtao Nie, Wei Wei, Lei Zhang, Abdul Muqeet, Jiwon Hwang, Subin Yang, JungHeum Kang, Sung-Ho Bae, Yongwoo Kim, Geun-Woo Jeon, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Steven Marty, Eric Marty, Dongliang Xiong, Siang Chen, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Haicheng Wang, Vineeth Bhaskara, Alex Levinshtein, Stavros Tsogkas, Allan Jepson, Xiangzhen Kong, Tongtong Zhao, Shanshan Zhao, Hrishikesh P. S, Densen Puthussery, Jiji C. V, Nan Nan, Shuai Liu, Jie Cai, Zibo Meng, Jiaming Ding, Chiu Man Ho, Xuehui Wang, Qiong Yan, Yuzhi Zhao, Long Chen, Jiangtao Zhang, Xiaotong Luo, Liang Chen, Yanyun Qu, Long Sun, Wenhao Wang, Zhenbing Liu, Rushi Lan, Rao Muhammad Umer, Christian Micheloni
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results.
no code implementations • 13 Sep 2020 • Zishen Wan, Bo Yu, Thomas Yuang Li, Jie Tang, Yuhao Zhu, Yu Wang, Arijit Raychowdhury, Shaoshan Liu
On the other hand, FPGA-based robotic accelerators are becoming increasingly competitive alternatives, especially in latency-critical and power-limited scenarios.
no code implementations • NeurIPS 2020 • Jiezhong Qiu, Chi Wang, Ben Liao, Richard Peng, Jie Tang
Our result gives the first bound on the convergence rate of the co-occurrence matrix and the first sample complexity analysis in graph representation learning.
no code implementations • ACL 2020 • Jifan Yu, Gan Luo, Tong Xiao, Qingyang Zhong, Yuquan Wang, Wenzheng Feng, Junyi Luo, Chenyu Wang, Lei Hou, Juanzi Li, Zhiyuan Liu, Jie Tang
The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc.
2 code implementations • 23 Jun 2020 • Shen Wang, Jibing Gong, Jinlong Wang, Wenzheng Feng, Hao Peng, Jie Tang, Philip S. Yu
To address this issue, we leverage both content information and context information to learn the representation of entities via graph convolution network.
no code implementations • 19 Jun 2020 • Haitham Al-Obiedollah, Kanapathippillai Cumanan, Jeyarajan Thiyagalingam, Jie Tang, Alister G. Burr, Zhiguo Ding, Octavia A. Dobre
In particular, we formulate a joint SE-EE based design as a multi-objective optimization (MOO) problem to achieve a good tradeoff between the two performance metrics.
4 code implementations • 17 Jun 2020 • Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, Jie Tang
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
no code implementations • 15 Jun 2020 • Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, Jie Tang
As an alternative, self-supervised learning attracts many researchers for its soaring performance on representation learning in the last several years.
no code implementations • 27 May 2020 • Sha Yuan, Zhou Shao, Yu Zhang, Xingxing Wei, Tong Xiao, Yifan Wang, Jie Tang
In the progress of science, the previously discovered knowledge principally inspires new scientific ideas, and citation is a reasonably good reflection of this cumulative nature of scientific research.
6 code implementations • 22 May 2020 • Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, Jie Tang
We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored.
4 code implementations • 20 May 2020 • Zhen Yang, Ming Ding, Chang Zhou, Hongxia Yang, Jingren Zhou, Jie Tang
To the best of our knowledge, we are the first to derive the theory and quantify that the negative sampling distribution should be positively but sub-linearly correlated with the positive sampling distribution.
2 code implementations • 19 May 2020 • Yukuo Cen, Jianwei Zhang, Xu Zou, Chang Zhou, Hongxia Yang, Jie Tang
Recent works usually give an overall embedding from a user's behavior sequence.
no code implementations • 23 Mar 2020 • Yang Liu, Liang Chen, Xiangnan He, Jiaying Peng, Zibin Zheng, Jie Tang
The prevalence of online social networks makes it essential to study how social relations affect user choice.
no code implementations • 19 Dec 2019 • Dong Zhang, Shu Zhao, Zhen Duan, Jie Chen, Yangping Zhang, Jie Tang
Paper-reviewer recommendation task is of significant academic importance for conference chairs and journal editors.
1 code implementation • 13 Dec 2019 • Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang
On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game.
1 code implementation • 23 Nov 2019 • Zhe Zhang, Jie Tang, Gangshan Wu
Specifically, our LPN-50 can achieve 68.7 AP on the COCO test-dev set, with only 2.7M parameters and 1.0 GFLOPs, while the inference speed is 17 FPS on an Intel i7-8700K CPU machine.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Jiezhong Qiu, Hao Ma, Omer Levy, Scott Wen-tau Yih, Sinong Wang, Jie Tang
We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies.
no code implementations • 25 Sep 2019 • Jie Zhang, Yuxiao Dong, Jie Tang
In this paper, we revisit the mathematical foundation of GCNs and study how to extend their representation capacity.
no code implementations • 25 Sep 2019 • Xu Zou, Qiuye Jia, Jianwei Zhang, Chang Zhou, Zijun Yao, Hongxia Yang, Jie Tang
In this paper, we propose a method named Dimensional reweighting Graph Convolutional Networks (DrGCNs), to tackle the problem of variance between dimensional information in the node representations of GCNs.
no code implementations • ACL 2019 • Jifan Yu, Chenyu Wang, Gan Luo, Lei Hou, Juanzi Li, Jie Tang, Zhiyuan Liu
As Massive Open Online Courses (MOOCs) become increasingly popular, it is promising to automatically provide extracurricular knowledge for MOOC users.
1 code implementation • IJCNLP 2019 • Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, Jie Tang
In this paper, we propose a novel end-to-end framework called KBRD, which stands for Knowledge-Based Recommender Dialog System.
Ranked #5 on Text Generation on ReDial
1 code implementation • 3 Aug 2019 • Jie Tang, Fei-Peng Tian, Wei Feng, Jian Li, Ping Tan
It is thus necessary to complete the sparse LiDAR data, where a synchronized guidance RGB image is often used to facilitate this completion.
1 code implementation • 8 Jul 2019 • Xichen Ding, Jie Tang, Tracy Liu, Cheng Xu, Yaping Zhang, Feng Shi, Qixia Jiang, Dan Shen
Understanding users' context is essential for successful recommendations, especially for Online-to-Offline (O2O) recommendation, such as Yelp, Groupon, and Koubei.
2 code implementations • 4 Jul 2019 • Xu Zou, Qiuye Jia, Jianwei Zhang, Chang Zhou, Hongxia Yang, Jie Tang
Graph Convolution Networks (GCNs) are becoming more and more popular for learning node representations on graphs.
1 code implementation • 26 Jun 2019 • Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, Jie Tang
Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such a matrix generates more powerful embeddings than existing methods.
no code implementations • 24 Jun 2019 • Yuan Yuan, Tracy Liu, Chenhao Tan, Qian Chen, Alex Pentland, Jie Tang
Using data on 36 million online red packet gifts on China's social site WeChat, we leverage a natural experimental design to identify the social contagion of gift giving in online groups.
1 code implementation • 22 Jun 2019 • Guangyong Chen, Pengfei Chen, Chang-Yu Hsieh, Chee-Kong Lee, Benben Liao, Renjie Liao, Weiwen Liu, Jiezhong Qiu, Qiming Sun, Jie Tang, Richard Zemel, Shengyu Zhang
We introduce a new molecular dataset, named Alchemy, for developing machine learning models useful in chemistry and material science.
1 code implementation • 13 Jun 2019 • Zhengxiao Du, Chang Zhou, Ming Ding, Hongxia Yang, Jie Tang
Inferring new facts from existing knowledge graphs (KG) with explainable reasoning processes is a significant problem and has received much attention recently.
1 code implementation • 2 Jun 2019 • Zhengxiao Du, Xiaowei Wang, Hongxia Yang, Jingren Zhou, Jie Tang
Our approach is based on the insight that having a good generalization from a few examples relies on both a generic model initialization and an effective strategy for adapting this model to newly arising tasks.
3 code implementations • ACL 2019 • Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, Jie Tang
We propose a new CogQA framework for multi-hop question answering in web-scale documents.
Ranked #54 on Question Answering on HotpotQA
Multi-hop Question Answering • Multi-Hop Reading Comprehension
4 code implementations • 5 May 2019 • Yukuo Cen, Xu Zou, Jianwei Zhang, Hongxia Yang, Jingren Zhou, Jie Tang
Network embedding (or graph embedding) has been widely used in many real-world applications.
Ranked #1 on Link Prediction on Amazon
4 code implementations • 29 Mar 2019 • Qibin Chen, Junyang Lin, Yichang Zhang, Hongxia Yang, Jingren Zhou, Jie Tang
In order to make the description both informative and personalized, KOBE considers a variety of important factors during text generation, including product aspects, user categories, and knowledge bases.
1 code implementation • 20 Feb 2019 • Fuli Feng, Xiangnan He, Jie Tang, Tat-Seng Chua
Adversarial Training (AT), a dynamic regularization technique, can resist the worst-case perturbations on input features and is a promising choice to improve model robustness and generalization.
Ranked #3 on Node Classification on NELL
1 code implementation • NeurIPS 2018 • Yi Qi, Qingyun Wu, Hongning Wang, Jie Tang, Maosong Sun
Implicit feedback, such as user clicks, although abundant in online information service systems, does not provide substantial evidence of users' evaluation of the system's output.
no code implementations • 6 Nov 2018 • Sha Yuan, Jie Tang, Yu Zhang, Yifan Wang, Tong Xiao
The rapid evolution of scientific research has been creating a huge volume of publications every year.
Digital Libraries • Physics and Society
no code implementations • 6 Nov 2018 • Sha Yuan, Yu Zhang, Jie Tang, Hua-Wei Shen, Xingxing Wei
Here we propose a deep learning attention mechanism to model the process through which individual items gain their popularity.
2 code implementations • 16 Oct 2018 • Xu Feng, Yuyang Xie, Mingye Song, Wenjian Yu, Jie Tang
The algorithm has similar accuracy to the basic randomized SVD (rPCA) algorithm (Halko et al., 2011), but is largely optimized for sparse data.
1 code implementation • 1 Sep 2018 • Ming Ding, Jie Tang, Jie Zhang
We first provide insights on working principles of adversarial learning over graphs and then present GraphSGAN, a novel approach to semi-supervised learning on graphs.
1 code implementation • 15 Jul 2018 • Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, Jie Tang
Inspired by the recent success of deep neural networks in a wide range of computing applications, we design an end-to-end framework, DeepInf, to learn users' latent feature representation for predicting social influence.
1 code implementation • 7 Jun 2018 • Jie Zhang, Yan Wang, Jie Tang, Ming Ding
In this paper, we propose a $10\times \sim 100\times$ faster network embedding method, called Progle, by elegantly utilizing the sparsity property of online networks and spectral analysis.
no code implementations • 21 Apr 2018 • Sha Yuan, Yu Zhang, Jie Tang, Juan Bautista Cabotà
Moreover, we use innovative diagrams to clarify several important concepts of ensemble learning, and find that ensemble models combining several specific single models can further boost performance.
no code implementations • 22 Feb 2018 • Jie Tang, Shaoshan Liu, Songwen Pei, Stephane Zuckerman, Chen Liu, Weisong Shi, Jean-Luc Gaudiot
Then, once students have understood these modules, the experimental integration platforms we have developed allow them to fully understand how the modules interact with each other.
no code implementations • ICLR 2018 • Jiezhong Qiu, Hao Ma, Yuxiao Dong, Kuansan Wang, Jie Tang
We study the problem of knowledge base (KB) embedding, which is usually addressed through two frameworks: neural KB embedding and tensor decomposition.
no code implementations • IJCNLP 2017 • Liangming Pan, Xiaochen Wang, Chengjiang Li, Juanzi Li, Jie Tang
Massive Open Online Courses (MOOCs), offering a new way to study online, are revolutionizing education.
no code implementations • 13 Oct 2017 • Fang Zhang, Xiaochen Wang, Jingfei Han, Jie Tang, Shiyin Wang, Marie-Francine Moens
We leverage a large-scale knowledge base (Wikipedia) to generate topic embeddings using neural networks, and use these representations to help capture the representativeness of topics for given areas.
4 code implementations • 9 Oct 2017 • Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, Jie Tang
This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
no code implementations • ACL 2017 • Liangming Pan, Chengjiang Li, Juanzi Li, Jie Tang
What prerequisite knowledge should students master before moving on to subsequent coursewares?
no code implementations • 16 Apr 2017 • Shaoshan Liu, Bolin Ding, Jie Tang, Dawei Sun, Zhe Zhang, Grace Tsai, Jean-Luc Gaudiot
The rise of robotic applications has led to the generation of a huge volume of unstructured data, whereas the current cloud infrastructure was designed to process limited amounts of structured data.
no code implementations • 23 Feb 2017 • Yujie Qian, Jie Tang, Zhilin Yang, Binxuan Huang, Wei Wei, Kathleen M. Carley
In this paper, we formalize the problem of inferring location from social media into a semi-supervised factor graph model (SSFGM).
no code implementations • 14 Nov 2016 • Yujie Qian, Jie Tang, Kan Wu
The challenge is how to trade off the matching degree between users' expertise and the question topic, and the likelihood of positive response from the invited users.
44 code implementations • 5 Jun 2016 • Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba
OpenAI Gym is a toolkit for reinforcement learning research.
no code implementations • 12 Feb 2016 • Tai Wang, Xiangen Hu, Keith Shubeck, Zhiqiang Cai, Jie Tang
The relationship between reading and writing (RRW) is one of the major themes in learning science.
no code implementations • 15 Nov 2015 • Yikang Shen, Wenge Rong, Nan Jiang, Baolin Peng, Jie Tang, Zhang Xiong
With the development of community based question answering (Q&A) services, large-scale Q&A archives have been accumulated and have become an important information and knowledge resource on the web.
no code implementations • 4 Aug 2015 • Zhilin Yang, Jie Tang, William Cohen
GenVector leverages large-scale unlabeled data with embeddings and represents data of two modalities, i.e., social network users and knowledge concepts, in a shared latent topic space.
2 code implementations • 10 Apr 2015 • Jing Zhang, Jie Tang, Cong Ma, Hanghang Tong, Yu Jing, Juanzi Li
The algorithm is based on a novel idea of random path, and an extended method is also presented, to enhance the structural similarity when two vertices are completely disconnected.
Social and Information Networks
no code implementations • 14 Apr 2014 • Yuxiao Dong, Jie Tang, Nitesh Chawla, Tiancheng Lou, Yang Yang, Bai Wang
Our model can predict social status of individuals with 93% accuracy.