no code implementations • EMNLP 2021 • Zeru Zhang, Zijie Zhang, Yang Zhou, Lingfei Wu, Sixing Wu, Xiaoying Han, Dejing Dou, Tianshi Che, Da Yan
Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks.
1 code implementation • EMNLP 2021 • Manling Li, Tengfei Ma, Mo Yu, Lingfei Wu, Tian Gao, Heng Ji, Kathleen McKeown
Timeline Summarization identifies major events from a news collection and describes them following temporal order, with key dates tagged.
no code implementations • 22 Feb 2023 • Zhizhi Yu, Di Jin, Cuiying Huo, Zhiqiang Wang, Xiulong Liu, Heng Qi, Jia Wu, Lingfei Wu
Graph neural networks for trust evaluation typically adopt straightforward encodings such as one-hot vectors or node2vec to represent node characteristics, which ignores the valuable semantic knowledge attached to nodes.
no code implementations • 1 Jan 2023 • Hongru Yang, Yingbin Liang, Xiaojie Guo, Lingfei Wu, Zhangyang Wang
It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance.
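The result above concerns training a network after a fraction of its weights has been pruned. As a loose illustration of what "pruning fraction" means here (a generic magnitude-pruning sketch in NumPy, not the paper's construction), one can build a 0/1 mask and train only the surviving weights:

```python
import numpy as np

def magnitude_prune(weight, fraction):
    """Zero out the `fraction` of entries with smallest magnitude; return a 0/1 mask.

    Ties at the threshold may prune slightly more than `fraction` of the weights.
    """
    k = int(weight.size * fraction)
    if k == 0:
        return np.ones_like(weight)
    threshold = np.sort(np.abs(weight).ravel())[k - 1]
    return (np.abs(weight) > threshold).astype(weight.dtype)

w = np.random.randn(64, 32)
mask = magnitude_prune(w, fraction=0.5)
w_pruned = w * mask  # gradient descent would then update only the surviving weights
```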
no code implementations • 24 Dec 2022 • Cuiying Huo, Di Jin, Yawen Li, Dongxiao He, Yu-Bin Yang, Lingfei Wu
A key premise for the remarkable performance of GNNs relies on complete and trustworthy initial graph descriptions (i.e., node features and graph structure). This premise is often not satisfied, since real-world graphs are frequently incomplete due to various unavoidable factors.
no code implementations • 1 Dec 2022 • Dongjie Wang, Lingfei Wu, Denghui Zhang, Jingbo Zhou, Leilei Sun, Yanjie Fu
The third stage is to leverage multi-attentions to model the zone-zone peer dependencies of the functionality projections to generate grid-level land-use configurations.
1 code implementation • 13 Nov 2022 • Jiajia Li, Feng Tan, Cheng He, Zikai Wang, Haitao Song, Lingfei Wu, Pengwei Hu
Modern IT system operation demands the integration of system software and hardware metrics.
no code implementations • 5 Nov 2022 • Hongmin Cai, Wenxiong Liao, Zhengliang Liu, Yiyang Zhang, Xiaoke Huang, Siqi Ding, Hui Ren, Zihao Wu, Haixing Dai, Sheng Li, Lingfei Wu, Ninghao Liu, Quanzheng Li, Tianming Liu, Xiang Li
In this framework, we apply distant-supervision on cross-domain knowledge graph adaptation.
no code implementations • 6 Oct 2022 • Peng Lin, Yanyan Zou, Lingfei Wu, Mian Ma, Zhuoye Ding, Bo Long
To conduct scene marketing for e-commerce platforms, this work presents a novel product form, scene-based topic channel which typically consists of a list of diverse products belonging to the same usage scenario and a topic title that describes the scenario with marketing words.
no code implementations • 4 Oct 2022 • Ziyang Liu, Chaokun Wang, Hao Feng, Lingfei Wu, Liqun Yang
In this paper, we design an efficient knowledge distillation framework for e-commerce relevance matching to integrate the respective advantages of Transformer-style models and classical relevance matching models.
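Knowledge distillation frameworks of this kind typically train the small relevance model against the Transformer teacher's softened output distribution. A generic sketch of that objective (the temperature `T` and the KL form are standard distillation choices, not details taken from this paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-T distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

In practice this term is usually mixed with the ordinary cross-entropy on ground-truth relevance labels.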
1 code implementation • 26 Jun 2022 • Xiaochuan Fan, Chi Zhang, Yong Yang, Yue Shang, Xueying Zhang, Zhen He, Yun Xiao, Bo Long, Lingfei Wu
For a platform with billions of products, it is extremely time-consuming and labor-intensive to manually pick and organize qualified images.
no code implementations • 22 Jun 2022 • Jiayin Jin, Zeru Zhang, Yang Zhou, Lingfei Wu
We conduct a theoretical analysis showing that the Nemytskii operator is smooth and induces a Fréchet-differentiable smooth manifold.
1 code implementation • 21 Jun 2022 • Xiaojie Guo, Qingkai Zeng, Meng Jiang, Yun Xiao, Bo Long, Lingfei Wu
Automatic product description generation for e-commerce has witnessed significant advancement in the past decade.
no code implementations • 15 Jun 2022 • Zhizhi Yu, Di Jin, Jianguo Wei, Ziyang Liu, Yue Shang, Yun Xiao, Jiawei Han, Lingfei Wu
Graph Neural Networks (GNNs) have gained great popularity in tackling various analytical tasks on graph-structured data (i.e., networks).
1 code implementation • 4 Jun 2022 • Dong Chen, Lingfei Wu, Siliang Tang, Xiao Yun, Bo Long, Yueting Zhuang
Moreover, when handling the data with noisy labels, the meta-learner could be extremely sensitive to label noise on a corrupted dataset.
no code implementations • 25 May 2022 • Cuiying Huo, Di Jin, Chundong Liang, Dongxiao He, Tie Qiu, Lingfei Wu
In this work, we propose a new GNN-based trust evaluation method named TrustGNN, which smartly integrates the propagative and composable nature of trust graphs into a GNN framework for better trust evaluation.
1 code implementation • 24 May 2022 • Zhendong Chu, Hongning Wang, Yun Xiao, Bo Long, Lingfei Wu
We propose to learn a meta policy and adapt it to new users with only a few trials of conversational recommendations.
no code implementations • 21 May 2022 • Xueying Zhang, Kai Shen, Chi Zhang, Xiaochuan Fan, Yun Xiao, Zhen He, Bo Long, Lingfei Wu
In this paper, we propose an automatic Scenario-based Multi-product Advertising Copywriting Generation system (SMPACG) for E-Commerce, which has been deployed on a leading Chinese e-commerce platform.
no code implementations • 21 May 2022 • Yangkai Du, Tengfei Ma, Lingfei Wu, Yiming Wu, Xuhong Zhang, Bo Long, Shouling Ji
Toward real-world information extraction scenarios, research on relation extraction is advancing to document-level relation extraction (DocRE).
Ranked #24 on Relation Extraction on DocRED
no code implementations • 30 Apr 2022 • Di Jin, Cuiying Huo, Jianwu Dang, Peican Zhu, Weixiong Zhang, Witold Pedrycz, Lingfei Wu
However, existing contrastive learning methods are inadequate for heterogeneous graphs because they construct contrastive views based only on data perturbation or pre-defined structural properties (e.g., meta-paths) in graph data, while ignoring the noise that may exist in both node attributes and graph topologies.
no code implementations • 29 Apr 2022 • Xiaoqiang Wang, Bang Liu, Siliang Tang, Lingfei Wu
Existing metrics for assessing question generation not only require costly human references but also fail to take into account the input context of generation, reflecting a lack of deep understanding of the relevance between generated questions and their input contexts.
no code implementations • 24 Apr 2022 • Yutong Qu, Wei Emma Zhang, Jian Yang, Lingfei Wu, Jia Wu
Knowledge-aware methods have boosted a range of natural language processing applications over the last decades.
no code implementations • ACL 2022 • Xiaoqiang Wang, Bang Liu, Fangli Xu, Bo Long, Siliang Tang, Lingfei Wu
In this paper, we argue that a deep understanding of model capabilities and data properties can help us feed a model with appropriate training data based on its learning status.
no code implementations • 1 Feb 2022 • Dadong Miao, Yanan Wang, Guoyu Tang, Lin Liu, Sulong Xu, Bo Long, Yun Xiao, Lingfei Wu, Yunjiang Jiang
Recent years have seen a significant amount of interest in Sequential Recommendation (SR), which aims to understand and model sequential user behaviors and the interactions between users and items over time.
1 code implementation • 14 Jan 2022 • Nian Liu, Xiao Wang, Lingfei Wu, Yu Chen, Xiaojie Guo, Chuan Shi
Furthermore, we maintain the performance of estimated views and the final view and reduce the mutual information of every two views.
1 code implementation • 23 Dec 2021 • Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, Xiang Chen, Yaguan Qian, Chunming Wu, Shouling Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu
Then, we conduct a comprehensive and systematic review to categorize the state-of-the-art adversarial attacks against PE malware detection, as well as corresponding defenses to increase the robustness of Windows PE malware detection.
1 code implementation • 22 Dec 2021 • Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Bo Long, Jian Pei
As a result, we first propose a more realistic CRS learning setting, namely Multi-Interest Multi-round Conversational Recommendation, where users may have multiple interests in attribute instance combinations and accept multiple items with partially overlapped combinations of attribute instances.
no code implementations • 16 Dec 2021 • Xiaojie Guo, Shugen Wang, Hanqing Zhao, Shiliang Diao, Jiajia Chen, Zhuoye Ding, Zhen He, Yun Xiao, Bo Long, Han Yu, Lingfei Wu
In addition, this kind of product description should be eye-catching to the readers.
no code implementations • 15 Dec 2021 • Xueying Zhang, Yanyan Zou, Hainan Zhang, Jing Zhou, Shiliang Diao, Jiajia Chen, Zhuoye Ding, Zhen He, Xueqi He, Yun Xiao, Bo Long, Han Yu, Lingfei Wu
It consists of two main components: 1) natural language generation, which is built from a transformer-pointer network and a pre-trained sequence-to-sequence model based on millions of training data from our in-house platform; and 2) copywriting quality control, which is based on both automatic evaluation and human screening.
no code implementations • NeurIPS 2021 • Zeru Zhang, Jiayin Jin, Zijie Zhang, Yang Zhou, Xin Zhao, Jiaxiang Ren, Ji Liu, Lingfei Wu, Ruoming Jin, Dejing Dou
Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually-crafted heuristics to generate pruned sparse networks.
1 code implementation • NeurIPS 2021 • Shen Kai, Lingfei Wu, Siliang Tang, Yueting Zhuang, Zhen He, Zhuoye Ding, Yun Xiao, Bo Long
The task of visual question generation (VQG) aims to generate human-like questions from an image and potentially other side information (e.g., answer type or the answer itself).
no code implementations • 20 Nov 2021 • Hanning Gao, Lingfei Wu, Po Hu, Zhihua Wei, Fangli Xu, Bo Long
Finally, we apply an answer selection model on the full KSG and the top-ranked sub-KSGs respectively to validate the effectiveness of our proposed graph-augmented learning to rank method.
no code implementations • 20 Nov 2021 • Hanning Gao, Lingfei Wu, Hongyun Zhang, Zhihua Wei, Po Hu, Fangli Xu, Bo Long
Most previous methods solve this task using a sequence-to-sequence model or using a graph-based model to encode RDF triples and to generate a text sequence.
no code implementations • 24 Sep 2021 • Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long
In this work, we propose an end-to-end heterogeneous global graph learning framework, namely Graph Learning Augmented Heterogeneous Graph Neural Network (GL-HGNN) for social recommendation.
no code implementations • 24 Sep 2021 • Qi Shen, Lingfei Wu, Yitong Pang, Yiming Zhang, Zhihua Wei, Fangli Xu, Bo Long
Based on the global graph, MGCNet attaches the global interest representation to final item representation based on local contextual intention to address the limitation (iii).
no code implementations • 21 Jul 2021 • Zhiqian Chen, Fanglan Chen, Lei Zhang, Taoran Ji, Kaiqun Fu, Liang Zhao, Feng Chen, Lingfei Wu, Charu Aggarwal, Chang-Tien Lu
Deep learning's performance has been widely recognized in recent years.
1 code implementation • 8 Jul 2021 • Yitong Pang, Lingfei Wu, Qi Shen, Yiming Zhang, Zhihua Wei, Fangli Xu, Ethan Chang, Bo Long, Jian Pei
Additionally, existing personalized session-based recommenders capture user preference only based on the sessions of the current user, but ignore the useful item-transition patterns from other users' historical sessions.
1 code implementation • 10 Jun 2021 • Lingfei Wu, Yu Chen, Kai Shen, Xiaojie Guo, Hanning Gao, Shucheng Li, Jian Pei, Bo Long
Deep learning has become the dominant approach in coping with various tasks in Natural Language Processing (NLP).
no code implementations • NAACL 2021 • Lingfei Wu, Yu Chen, Heng Ji, Yunyao Li
Due to their great power in modeling non-Euclidean data like graphs or manifolds, deep learning techniques on graphs (i.e., Graph Neural Networks (GNNs)) have opened a new door to solving challenging graph-related NLP problems.
1 code implementation • Findings (EMNLP) 2021 • Yangkai Du, Tengfei Ma, Lingfei Wu, Fangli Xu, Xuhong Zhang, Bo Long, Shouling Ji
Unlike vision tasks, the data augmentation method for contrastive learning has not been investigated sufficiently in language tasks.
2 code implementations • Findings (EMNLP) 2021 • Xuye Liu, Dakuo Wang, April Wang, Yufang Hou, Lingfei Wu
Jupyter notebooks allow data scientists to write machine learning code together with its documentation in cells.
no code implementations • 14 Feb 2021 • Xiao Qin, Nasrullah Sheikh, Berthold Reinwald, Lingfei Wu
Furthermore, the expressivity of the learned representation depends on the quality of negative samples used during training.
1 code implementation • 27 Jan 2021 • Di Tong, Lingfei Wu, James Allen Evans
Substantial scholarship has estimated the susceptibility of jobs to automation, but little has examined how job contents evolve in the information age as new technologies substitute for tasks, shifting required skills rather than eliminating entire jobs.
no code implementations • 11 Jan 2021 • Aakash Bansal, Zachary Eberhart, Lingfei Wu, Collin McMillan
In this paper, we take initial steps to bringing state-of-the-art neural QA technologies to Software Engineering applications by designing a context-based QA system for basic questions about subroutines.
no code implementations • 1 Jan 2021 • Dong Chen, Lingfei Wu, Siliang Tang, Fangli Xu, Juncheng Li, Chang Zong, Chilie Tan, Yueting Zhuang
In particular, we first cast the meta-overfitting problem (overfitting on sampling and label noise) as a gradient noise problem, since the few available samples cause the meta-learner to overfit on existing examples (clean or corrupted) of an individual task at every gradient step.
no code implementations • 1 Jan 2021 • Chengyue Huang, Lingfei Wu, Yadong Ding, Siliang Tang, Fangli Xu, Chang Zong, Chilie Tan, Yueting Zhuang
To this end, we learn a differentiable graph neural network as a surrogate model to rank candidate architectures, which enables us to obtain gradients w.r.t. the input architectures.
no code implementations • 1 Jan 2021 • Shen Kai, Lingfei Wu, Siliang Tang, Fangli Xu, Zhu Zhang, Yu Qiang, Yueting Zhuang
The task of visual question generation (VQG) aims to generate human-like questions from an image and potentially other side information (e.g., answer type or the answer itself).
no code implementations • 1 Jan 2021 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji
The proposed MGMN model consists of a node-graph matching network for effectively learning cross-level interactions between nodes of a graph and the other whole graph, and a siamese graph neural network to learn global-level interactions between two graphs.
no code implementations • 25 Oct 2020 • Hanlu Wu, Tengfei Ma, Lingfei Wu, Shouling Ji
In addition, we exploit the unknown latent interactions between nodes of the same type (workers or tasks) by adding a homogeneous attention layer to the graph neural networks.
no code implementations • 24 Oct 2020 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Gaoning Pan, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji
To this end, we first represent both natural language query texts and programming language code snippets with the unified graph-structured data, and then use the proposed graph matching and searching model to retrieve the best matching code snippet.
no code implementations • 22 Oct 2020 • Devendra Singh Sachan, Lingfei Wu, Mrinmaya Sachan, William Hamilton
In this work, we introduce a series of strong transformer models for multi-hop question generation, including a graph-augmented transformer that leverages relations between entities in the text.
1 code implementation • NAACL 2021 • Wenhao Yu, Lingfei Wu, Yu Deng, Qingkai Zeng, Ruchi Mahindru, Sinem Guven, Meng Jiang
In this paper, we propose a novel framework of deep transfer learning to effectively address technical QA across tasks and domains.
1 code implementation • EMNLP 2020 • Hanlu Wu, Tengfei Ma, Lingfei Wu, Tariro Manyumwa, Shouling Ji
Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries.
1 code implementation • EMNLP 2020 • Wenhao Yu, Lingfei Wu, Yu Deng, Ruchi Mahindru, Qingkai Zeng, Sinem Guven, Meng Jiang
In recent years, the need for community technical question-answering sites has increased significantly.
1 code implementation • 8 Jul 2020 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Alex X. Liu, Chunming Wu, Shouling Ji
In particular, the proposed MGMN consists of a node-graph matching network for effectively learning cross-level interactions between each node of one graph and the other whole graph, and a siamese graph neural network to learn global-level interactions between two input graphs.
no code implementations • ACL 2020 • Ying Lin, Heng Ji, Fei Huang, Lingfei Wu
OneIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder.
1 code implementation • NeurIPS 2020 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding.
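Frameworks of this kind alternate between inferring a graph from the current node embeddings and updating those embeddings on the inferred graph. A rough, hypothetical NumPy sketch of that loop (cosine-similarity graph learner, top-k sparsification, and an unweighted mean-aggregation step are illustrative choices, not the paper's exact design):

```python
import numpy as np

def learn_graph(X, n_iters=3, keep=2):
    """Iteratively refine a similarity graph and node embeddings."""
    H = X.copy()
    for _ in range(n_iters):
        # cosine-similarity adjacency from the current embeddings
        norm = np.linalg.norm(H, axis=1, keepdims=True) + 1e-8
        S = (H / norm) @ (H / norm).T
        np.fill_diagonal(S, 0.0)
        # sparsify: keep only the `keep` strongest neighbours per node
        weakest = np.argsort(S, axis=1)[:, :-keep]
        A = S.copy()
        np.put_along_axis(A, weakest, 0.0, axis=1)
        # one degree-normalised message-passing step on the learned graph
        D = A.sum(axis=1, keepdims=True) + 1e-8
        H = (A / D) @ H
    return A, H
```

A trainable version would parameterize the similarity function and the aggregation, and backpropagate a task loss through both.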
1 code implementation • 9 Jun 2020 • Xiaojie Guo, Liang Zhao, Zhao Qin, Lingfei Wu, Amarda Shehu, Yanfang Ye
Disentangled representation learning has recently attracted a significant amount of attention, particularly in the field of image representation learning.
no code implementations • ACL 2020 • Rajarshi Haldar, Lingfei Wu, JinJun Xiong, Julia Hockenmaier
The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories.
no code implementations • ACL 2020 • Wenhao Yu, Lingfei Wu, Qingkai Zeng, Shu Tao, Yu Deng, Meng Jiang
Existing methods learned semantic representations with dual encoders or dual variational auto-encoders.
1 code implementation • ACL 2020 • Luyang Huang, Lingfei Wu, Lu Wang
Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.
1 code implementation • 13 Apr 2020 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
Ranked #3 on KG-to-Text Generation on WebQuestions
1 code implementation • 10 Apr 2020 • Sakib Haque, Alexander LeClair, Lingfei Wu, Collin McMillan
In this paper, we present an approach that models the file context of subroutines (i.e., other subroutines in the same file) and uses an attention mechanism to find words and concepts to use in summaries.
Software Engineering
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu, Sheng Zhong
In particular, we investigated our model on two problems: neural semantic parsing and math word problem solving.
2 code implementations • 6 Apr 2020 • Alexander LeClair, Sakib Haque, Lingfei Wu, Collin McMillan
The first approaches to use structural information flattened the AST into a sequence.
no code implementations • 27 Feb 2020 • Zhiqian Chen, Fanglan Chen, Lei Zhang, Taoran Ji, Kaiqun Fu, Liang Zhao, Feng Chen, Lingfei Wu, Charu Aggarwal, Chang-Tien Lu
Deep learning's success has been widely recognized in a variety of machine learning tasks, including image classification, audio recognition, and natural language processing.
1 code implementation • 17 Dec 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
In this paper, we propose an end-to-end graph learning framework, namely Deep Iterative and Adaptive Learning for Graph Neural Networks (DIAL-GNN), for jointly learning the graph structure and graph embeddings.
no code implementations • 25 Nov 2019 • Lingfei Wu, Ian En-Hsu Yen, Zhen Zhang, Kun Xu, Liang Zhao, Xi Peng, Yinglong Xia, Charu Aggarwal
In particular, RGE is shown to achieve (quasi-)linear scalability with respect to the number and the size of the graphs.
1 code implementation • NeurIPS 2019 • Zhen Zhang, Yijian Xiang, Lingfei Wu, Bing Xue, Arye Nehorai
Graph matching plays a central role in such fields as computer vision, pattern recognition, and bioinformatics.
no code implementations • 25 Nov 2019 • Lingfei Wu, Ian En-Hsu Yen, Siyu Huo, Liang Zhao, Kun Xu, Liang Ma, Shouling Ji, Charu Aggarwal
In this paper, we present a new class of global string kernels that aims to (i) discover global properties hidden in the strings through global alignments, (ii) maintain positive-definiteness of the kernel, without introducing a diagonal dominant kernel matrix, and (iii) have a training cost linear with respect to not only the length of the string but also the number of training string samples.
1 code implementation • arXiv 2020 • Maxwell Crouse, Ibrahim Abdelaziz, Cristina Cornelio, Veronika Thost, Lingfei Wu, Kenneth Forbus, Achille Fokoue
Recent advances in the integration of deep learning with automated theorem proving have centered around the representation of logical formulae as inputs to deep learning systems.
Ranked #1 on Automated Theorem Proving on HolStep (Conditional)
no code implementations • WS 2019 • Siyu Huo, Tengfei Ma, Jie Chen, Maria Chang, Lingfei Wu, Michael Witbrock
Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations.
1 code implementation • 19 Oct 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
Natural question generation (QG) aims to generate questions from a passage and an answer.
no code implementations • 5 Oct 2019 • Fangli Xu, Lingfei Wu, KP Thai, Carol Hsu, Wei Wang, Richard Tong
Automatic analysis of teacher and student interactions could be very important to improve the quality of teaching and student engagement.
no code implementations • 25 Sep 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly learning graph structure and graph embedding.
no code implementations • 25 Sep 2019 • Xiang Ling, Lingfei Wu, Saizhuo Wang, Tengfei Ma, Fangli Xu, Chunming Wu, Shouling Ji
The proposed HGMN model consists of a multi-perspective node-graph matching network for effectively learning cross-level interactions between parts of a graph and a whole graph, and a siamese graph neural network for learning global-level interactions between two graphs.
1 code implementation • 26 Aug 2019 • Xiaojie Guo, Amir Alipour-Fanid, Lingfei Wu, Hemant Purohit, Xiang Chen, Kai Zeng, Liang Zhao
At present, object recognition studies are mostly conducted in a closed-lab setting, with the classes in the test phase typically the same as those in the training phase.
no code implementations • 22 Aug 2019 • Yuyang Gao, Lingfei Wu, Houman Homayoun, Liang Zhao
In this paper, we first formulate the transition of user activities as a dynamic graph with multi-attributed nodes, then formalize the health stage inference task as a dynamic graph-to-sequence learning problem, and hence propose a novel dynamic graph-to-sequence neural networks architecture (DynGraph2Seq) to address all the challenges.
1 code implementation • ICLR 2020 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
Natural question generation (QG) aims to generate questions from a passage and an answer.
1 code implementation • 31 Jul 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
The proposed GraphFlow model can effectively capture conversational flow in a dialog, and shows competitive performance compared to existing state-of-the-art methods on CoQA, QuAC and DoQA benchmarks.
no code implementations • 10 Jun 2019 • Yao Ma, Suhang Wang, Tyler Derr, Lingfei Wu, Jiliang Tang
Graph Neural Networks (GNNs) have boosted the performance of many graph related tasks such as node classification and graph classification.
no code implementations • NAACL 2019 • Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, Wen-mei Hwu
Our generator employs an attention-based encoder-decoder to transfer a sentence from the source style to the target style.
2 code implementations • NAACL 2019 • Yu Chen, Lingfei Wu, Mohammed J. Zaki
When answering natural language questions over knowledge bases (KBs), different question components and KB aspects play different roles.
no code implementations • 9 Jan 2019 • Maxwell Crouse, Achille Fokoue, Maria Chang, Pavan Kapanipathi, Ryan Musa, Constantine Nakos, Lingfei Wu, Kenneth Forbus, Michael Witbrock
Machine learning systems regularly deal with structured data in real-world applications.
no code implementations • NeurIPS 2018 • Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Liang Zhao, Yinglong Xia, Michael Witbrock
Graph kernels are one of the most important methods for graph data analysis and have been successfully applied in diverse applications.
1 code implementation • 1 Dec 2018 • Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock
In this paper we formulate the attacks with discrete input on a set function as an optimization task.
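Optimization of a set function over discrete inputs is often approached greedily: repeatedly add the element with the largest marginal gain. A generic sketch of that idea (the `score` oracle below is a hypothetical placeholder for the attack objective):

```python
def greedy_maximize(ground_set, score, budget):
    """Greedily pick up to `budget` elements that most increase a set function."""
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for e in ground_set - chosen:
            gain = score(chosen | {e}) - score(chosen)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element gives a positive gain
            break
        chosen.add(best)
    return chosen
```

For submodular objectives, this greedy procedure carries the classic (1 - 1/e) approximation guarantee.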
1 code implementation • 12 Nov 2018 • Huimin Xu, Zhang Zhang, Lingfei Wu, Cheng-Jun Wang
Our analysis of thousands of movies and books reveals how these cultural products weave stereotypical gender roles into morality tales and perpetuate gender inequality through storytelling.
1 code implementation • EMNLP 2018 • Lingfei Wu, Ian E. H. Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, Michael J. Witbrock
While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending it to generate unsupervised sentence or document embeddings.
1 code implementation • 14 Sep 2018 • Lingfei Wu, Ian En-Hsu Yen, Jin-Feng Yi, Fangli Xu, Qi Lei, Michael Witbrock
The proposed kernel does not suffer from the issue of diagonal dominance while naturally enjoying a Random Features (RF) approximation, which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in terms of both the number and the length of time-series.
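The RF idea here is to replace exact kernel evaluation with an explicit feature map built from random anchor series, so that an inner product of feature vectors approximates the kernel. A loose sketch (the anchor distribution, `gamma`, and normalization below are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def random_warping_features(x, anchors, gamma=1.0):
    """phi(x)_i = exp(-gamma * DTW(x, w_i)) against R random anchor series."""
    return np.array([np.exp(-gamma * dtw(x, w)) for w in anchors]) / np.sqrt(len(anchors))
```

The kernel value k(x, y) is then estimated by the dot product `random_warping_features(x, anchors) @ random_warping_features(y, anchors)`, which costs linear time in the number of series once the features are computed.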
2 code implementations • 14 Sep 2018 • Lingfei Wu, Ian E. H. Yen, Jie Chen, Rui Yan
We thus propose the first analysis of RB from the perspective of optimization, which by interpreting RB as a Randomized Block Coordinate Descent in the infinite-dimensional space, gives a faster convergence rate compared to that of other random features.
1 code implementation • EMNLP 2018 • Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Vadim Sheinin
Previous work approaches the SQL-to-text generation task using vanilla Seq2Seq models, which may not fully capture the inherent graph-structured information in SQL query.
1 code implementation • EMNLP 2018 • Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Li-Wei Chen, Vadim Sheinin
Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word order features while neglecting other valuable syntactic information such as dependency graphs or constituent trees.
1 code implementation • ECCV 2018 • Zhiqiang Tang, Xi Peng, Shijie Geng, Lingfei Wu, Shaoting Zhang, Dimitris Metaxas
Finally, to reduce the memory consumption and high precision operations both in training and testing, we further quantize weights, inputs, and gradients of our localization network to low bit-width numbers.
Ranked #17 on Pose Estimation on MPII Human Pose
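Quantizing weights, inputs, and gradients to low bit-width numbers, as the entry above describes, is commonly done with a uniform symmetric quantizer. A generic sketch (standard round-and-clip quantization, not necessarily the exact scheme of this paper):

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform symmetric quantization of an array to `bits`-bit signed integers.

    Returns the dequantized array (for straight-through use) and the integer codes.
    """
    levels = 2 ** (bits - 1) - 1          # e.g. 7 representable magnitudes for 4 bits
    scale = np.max(np.abs(x)) / levels
    if scale == 0.0:                       # all-zero input: avoid division by zero
        scale = 1.0
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale, q.astype(np.int8)
```

In training, such a quantizer is typically paired with a straight-through estimator so gradients flow through the non-differentiable rounding step.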
1 code implementation • 30 May 2018 • Pin-Yu Chen, Lingfei Wu, Sijia Liu, Indika Rajapakse
The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence.
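The VNGE is the Shannon entropy of the eigenvalues of the trace-normalized graph Laplacian, viewed as a density matrix. A minimal NumPy computation of that definition:

```python
import numpy as np

def vnge(A):
    """von Neumann graph entropy: -sum mu_i * log(mu_i), where mu_i are the
    eigenvalues of L / trace(L) and L is the combinatorial Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    mu = np.linalg.eigvalsh(L) / np.trace(L)
    mu = mu[mu > 1e-12]                  # convention: 0 * log 0 = 0
    return float(-(mu * np.log(mu)).sum())
```

For the complete graph on three nodes, the Laplacian eigenvalues are 0, 3, 3, so the normalized spectrum is (0, 1/2, 1/2) and the entropy equals ln 2.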
2 code implementations • 25 May 2018 • Xiaojie Guo, Lingfei Wu, Liang Zhao
To achieve this, we propose a novel Graph-Translation-Generative Adversarial Networks (GT-GAN) which will generate a graph translator from input to target graphs.
1 code implementation • 25 May 2018 • Lingfei Wu, Pin-Yu Chen, Ian En-Hsu Yen, Fangli Xu, Yinglong Xia, Charu Aggarwal
Moreover, our method exhibits linear scalability in both the number of data samples and the number of RB features.
Ranked #5 on Image/Document Clustering on pendigits
5 code implementations • ICLR 2019 • Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, Vadim Sheinin
Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.
Ranked #1 on SQL-to-Text on WikiSQL
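Incorporating edge direction into node aggregation, as the entry above describes, amounts to keeping separate message channels for incoming and outgoing edges. A bare-bones, hypothetical sketch of one such step (the sum aggregation, tanh nonlinearity, and weight shapes are illustrative, not the paper's exact strategy):

```python
import numpy as np

def bi_aggregate(H, A, W_fwd, W_bwd):
    """One message-passing step over a directed graph.

    H: node embeddings (n, d); A: directed adjacency (n, n), A[i, j] = edge i -> j.
    Forward and backward neighbourhoods get separate transforms W_fwd, W_bwd.
    """
    fwd = A @ H     # aggregate along edge direction
    bwd = A.T @ H   # aggregate against edge direction
    return np.tanh(fwd @ W_fwd + bwd @ W_bwd)
```

Stacking several such steps and pooling the node states yields a graph embedding that a sequence decoder can attend over.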
no code implementations • 14 Feb 2018 • Lingfei Wu, Ian En-Hsu Yen, Fangli Xu, Pradeep Ravikumar, Michael Witbrock
For many machine learning problem settings, particularly with structured inputs such as sequences or sets of objects, a distance measure between inputs can be specified more naturally than a feature representation.
no code implementations • 14 Sep 2017 • Pin-Yu Chen, Lingfei Wu
The presented method, SGC-GEN, not only considers the detection error caused by the corresponding model mismatch to a given graph, but also yields a theoretical guarantee on community detectability by analyzing Spectral Graph Clustering (SGC) under GENerative community models (GCMs).
1 code implementation • 7 Sep 2017 • Lingfei Wu, Dashun Wang, James A. Evans
Teams dominate the production of high-impact science and technology.
Physics and Society • Digital Libraries • Social and Information Networks
no code implementations • 12 Feb 2017 • Qi Lei, Jin-Feng Yi, Roman Vaculin, Lingfei Wu, Inderjit S. Dhillon
A considerable amount of clustering algorithms take instance-feature matrices as their inputs.