no code implementations • ACL 2022 • Chenguang Zhu, Yichong Xu, Xiang Ren, Bill Lin, Meng Jiang, Wenhao Yu
Knowledge in natural language processing (NLP) has been a rising trend, especially after the advent of large-scale pre-trained models.
1 code implementation • EMNLP (ACL) 2021 • Wenhao Yu, Meng Jiang, Zhiting Hu, Qingyun Wang, Heng Ji, Nazneen Rajani
Knowledge-enriched text generation poses unique challenges in modeling and learning, driving active research in several core directions, ranging from integrated modeling of neural representations and symbolic information in the sequential/hierarchical/graphical structures, learning without direct supervisions due to the cost of structured annotation, efficient optimization and inference with massive and global constraints, to language grounding on multiple modalities, and generative reasoning with implicit commonsense knowledge and background knowledge.
no code implementations • 30 Oct 2024 • Tianyu Yang, Lisen Dai, Zheyuan Liu, Xiangqi Wang, Meng Jiang, Yapeng Tian, Xiangliang Zhang
Machine unlearning (MU) has gained significant attention as a means to remove specific data from trained models without requiring a full retraining process.
1 code implementation • 29 Oct 2024 • Zheyuan Liu, Guangyao Dou, Mengzhao Jia, Zhaoxuan Tan, Qingkai Zeng, Yongle Yuan, Meng Jiang
Generative models such as Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) trained on massive web corpora can memorize and disclose individuals' confidential and private data, raising legal and ethical concerns.
1 code implementation • 28 Oct 2024 • Yilun Jin, Zheng Li, Chenwei Zhang, Tianyu Cao, Yifan Gao, Pratik Jayarao, Mao Li, Xin Liu, Ritesh Sarkhel, Xianfeng Tang, Haodong Wang, Zhengyang Wang, Wenju Xu, Jingfeng Yang, Qingyu Yin, Xian Li, Priyanka Nigam, Yi Xu, Kai Chen, Qiang Yang, Meng Jiang, Bing Yin
Shopping MMLU consists of 57 tasks covering 4 major shopping skills: concept understanding, knowledge reasoning, user behavior alignment, and multi-linguality, and can thus comprehensively evaluate the abilities of LLMs as general shop assistants.
no code implementations • 18 Oct 2024 • Zifeng Zhu, Mengzhao Jia, Zhihan Zhang, Lang Li, Meng Jiang
Multimodal Large Language Models (MLLMs) have demonstrated impressive abilities across various tasks, including visual question answering and chart comprehension, yet existing benchmarks for chart-related tasks fall short in capturing the complexity of real-world multi-chart scenarios.
no code implementations • 16 Oct 2024 • Zhenyu Wu, Qingkai Zeng, Zhihan Zhang, Zhaoxuan Tan, Chao Shen, Meng Jiang
Best-of-N decoding methods instruct large language models (LLMs) to generate multiple solutions, score each using a scoring function, and select the highest scored as the final answer to mathematical reasoning problems.
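The Best-of-N procedure described here can be sketched in a few lines. In the sketch below, `generate_solution` and `score` are hypothetical stand-ins for an LLM sampler and a scoring function, not this paper's implementation:

```python
def best_of_n(question, generate_solution, score, n=8):
    """Sample n candidate solutions and return the highest-scored one."""
    candidates = [generate_solution(question) for _ in range(n)]
    return max(candidates, key=score)

# Toy usage: candidates come from a fixed list; the toy scorer prefers
# longer answers, so the most detailed candidate wins.
samples = iter(["x = 4", "x = 4 because 2 + 2 = 4", "x = 5"])
best = best_of_n("2 + 2 = ?", lambda q: next(samples), score=len, n=3)
# best == "x = 4 because 2 + 2 = 4"
```

In practice the scoring function is the interesting part (e.g., a learned verifier or reward model); the selection loop itself stays this simple.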
no code implementations • 14 Oct 2024 • Wei Zhai, Nan Bai, Qing Zhao, Jianqiang Li, Fan Wang, Hongzhi Qi, Meng Jiang, Xiaoqin Wang, Bing Xiang Yang, Guanghui Fu
The proposed models were evaluated on three downstream tasks and achieved better or comparable performance compared to deep learning models, generalized LLMs, and task fine-tuned LLMs.
no code implementations • 8 Oct 2024 • Noah Ziems, Zhihan Zhang, Meng Jiang
Evaluating the ability of large language models (LLMs) to follow complex human-written instructions is essential for their deployment in real-world applications.
1 code implementation • 5 Oct 2024 • Gang Liu, Michael Sun, Wojciech Matusik, Meng Jiang, Jie Chen
While large language models (LLMs) have integrated images, adapting them to graphs remains challenging, limiting their applications in materials and drug design.
1 code implementation • 2 Oct 2024 • Mengzhao Jia, Wenhao Yu, Kaixin Ma, Tianqing Fang, Zhihan Zhang, Siru Ouyang, Hongming Zhang, Meng Jiang, Dong Yu
Tasks involving multiple text-rich images are especially challenging, as they require not only understanding the content of individual images but also reasoning about inter-relationships and logical flows across multiple visual inputs.
1 code implementation • 17 Aug 2024 • Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Zhenyu Wu, Shangbin Feng, Meng Jiang
Taxonomies play a crucial role in various applications by providing a structural representation of knowledge.
1 code implementation • 30 Jul 2024 • Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, Meng Jiang
We offer a comprehensive survey of MU in Generative AI, covering a new problem formulation, evaluation methods, and a structured discussion of the advantages and limitations of different kinds of MU techniques.
no code implementations • 28 Jul 2024 • Meng Jiang, Qing Zhao, Jianqiang Li, Fan Wang, Tianyu He, Xinyan Cheng, Bing Xiang Yang, Grace W. K. Ho, Guanghui Fu
Cognitive Behavioral Therapy (CBT) is a well-established intervention for mitigating psychological issues by modifying maladaptive cognitive and behavioral patterns.
1 code implementation • 25 Jun 2024 • Dong Liu, Meng Jiang
These methods address the limitations of static node representations and fixed aggregation schemes in traditional GNNs, offering a more nuanced approach to modeling complex and dynamic graph topologies.
1 code implementation • 25 Jun 2024 • Dong Liu, Roger Waleffe, Meng Jiang, Shivaram Venkataraman
In our recent research, we developed a framework called GraphSnapShot, which has proven to be a useful tool for accelerating graph learning.
1 code implementation • 17 Jun 2024 • Zhihan Zhang, Tao Ge, Zhenwen Liang, Wenhao Yu, Dian Yu, Mengzhao Jia, Dong Yu, Meng Jiang
Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks.
1 code implementation • 17 Jun 2024 • Gang Liu, Srijit Seal, John Arevalo, Zhenwen Liang, Anne E. Carpenter, Meng Jiang, Shantanu Singh
A sufficiency objective decodes the representation to align with different feature spaces from the molecule's neighborhood in the context graph.
1 code implementation • 15 Jun 2024 • Zhaoxuan Tan, Zheyuan Liu, Meng Jiang
Personalized large language models (LLMs) aim to tailor interactions, content, and recommendations to individual user preferences.
no code implementations • 6 Jun 2024 • Ruiyang Qin, Dancheng Liu, Chenhui Xu, Zheyu Yan, Zhaoxuan Tan, Zhenge Jia, Amir Nassereldine, Jiajie Li, Meng Jiang, Ahmed Abbasi, JinJun Xiong, Yiyu Shi
For example, the optimal choice between parameter learning and RAG may vary with the difficulty of the downstream task, a longer fine-tuning time does not necessarily help the model, and a compressed LLM may be a better choice than an uncompressed one for learning from limited personalized data.
no code implementations • 23 May 2024 • Zhenyu Wu, Qingkai Zeng, Zhihan Zhang, Zhaoxuan Tan, Chao Shen, Meng Jiang
The condition can be an entity in an open-domain question or a numeric value in a math question, which requires minimal effort (via prompting) to identify.
no code implementations • 22 Apr 2024 • Mengzhao Jia, Zhihan Zhang, Wenhao Yu, Fangkai Jiao, Meng Jiang
Open-source multimodal large language models (MLLMs) excel in various tasks involving textual and visual inputs but still struggle with complex multimodal mathematical reasoning, lagging behind proprietary models like GPT-4V(ision) and Gemini-Pro.
1 code implementation • 17 Apr 2024 • Meng Jiang, Yi Jing Yu, Qing Zhao, Jianqiang Li, Changwei Song, Hongzhi Qi, Wei Zhai, Dan Luo, Xiaoqin Wang, Guanghui Fu, Bing Xiang Yang
Cognitive Behavioral Therapy (CBT) is an effective technique for addressing the irrational thoughts stemming from mental illnesses, but it necessitates precise identification of cognitive pathways to be successfully implemented in patient care.
1 code implementation • 19 Mar 2024 • Zhenyu Wu, Chao Shen, Meng Jiang
Lastly, it instructs the LLMs with the verification of relevant and irrelevant conditions to avoid confusion and improve reasoning paths.
no code implementations • 18 Mar 2024 • Bang Nguyen, Mengxia Yu, Yun Huang, Meng Jiang
These criteria are not constrained to the syntax or semantics of a single reference question, and the metric does not require a diverse set of references.
1 code implementation • 23 Feb 2024 • Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli Feng, Xiangnan He
Towards this goal, we develop a concise and effective framework called IFairLRS to enhance the item-side fairness of an LRS.
1 code implementation • 16 Feb 2024 • Yuxuan Kuang, Hai Lin, Meng Jiang
By leveraging the reasoning and generalizing abilities of foundation models, our method can understand free-form human instructions and perform effective open-set zero-shot navigation in diverse environments.
1 code implementation • 15 Feb 2024 • Zheyuan Liu, Guangyao Dou, Zhaoxuan Tan, Yijun Tian, Meng Jiang
To address this gap, we introduce Selective Knowledge negation Unlearning (SKU), a novel unlearning framework for LLMs, designed to eliminate harmful knowledge while preserving utility on normal prompts.
1 code implementation • 12 Feb 2024 • Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Zhenwen Liang, Zhihan Zhang, Meng Jiang
Automatic taxonomy induction is crucial for web search, recommendation systems, and question answering.
2 code implementations • 6 Feb 2024 • Zhaoxuan Tan, Qingkai Zeng, Yijun Tian, Zheyuan Liu, Bing Yin, Meng Jiang
OPPU integrates parametric user knowledge in the personal PEFT parameters with non-parametric knowledge from retrieval and profiles, adapting LLMs to user behavior shifts.
1 code implementation • 24 Jan 2024 • Gang Liu, Jiaxin Xu, Tengfei Luo, Meng Jiang
Inverse molecular design with diffusion models holds great potential for advancements in material and drug discovery.
1 code implementation • 10 Jan 2024 • Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions.
1 code implementation • 11 Dec 2023 • Zhenyu Wu, Meng Jiang, Chao Shen
Given an initial answer from CoT, PRP iterates a verify-then-rectify process to progressively identify incorrect answers and rectify the reasoning paths.
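The verify-then-rectify loop can be sketched as below. Here `verify` and `rectify` are hypothetical stand-ins for the LLM-prompted checking and correction steps, not PRP's actual prompts:

```python
def verify_then_rectify(answer, verify, rectify, max_iters=5):
    """Iteratively check the answer; rectify it whenever the check fails."""
    for _ in range(max_iters):
        if verify(answer):
            break
        answer = rectify(answer)
    return answer

# Toy usage: the target answer is 42, and each rectification step
# moves the current answer one unit closer to it.
final = verify_then_rectify(40, verify=lambda a: a == 42,
                            rectify=lambda a: a + 1)
# final == 42
```

The `max_iters` cap bounds the cost when the verifier never accepts, which matters when each step is an LLM call.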
1 code implementation • 11 Dec 2023 • Zhaoxuan Tan, Meng Jiang
Two common types of user data are text and graph, as the data usually contain a large amount of user-generated content (UGC) and online interactions.
1 code implementation • 5 Dec 2023 • Bowen Jin, Gang Liu, Chi Han, Meng Jiang, Heng Ji, Jiawei Han
Besides, although LLMs have shown their pure text-based reasoning ability, it is underexplored whether such ability can be generalized to graphs (i.e., graph-based reasoning).
no code implementations • 21 Nov 2023 • Ruiyang Qin, Jun Xia, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Peipei Zhou, Jingtong Hu, Yiyu Shi
While it is possible to obtain annotations locally by directly asking users to provide preferred responses, such annotations must remain sparse so as not to affect the user experience.
1 code implementation • 15 Nov 2023 • Zhihan Zhang, Dong-Ho Lee, Yuwei Fang, Wenhao Yu, Mengzhao Jia, Meng Jiang, Francesco Barbieri
Instruction tuning has remarkably advanced large language models (LLMs) in understanding and responding to diverse human instructions.
no code implementations • 30 Oct 2023 • Noah Ziems, Gang Liu, John Flanagan, Meng Jiang
Finally, we show that LLM-generated decision tree explanations correlate highly with human ratings of readability, quality, and use of background knowledge, while simultaneously providing a better understanding of decision boundaries.
no code implementations • 19 Oct 2023 • Zhihan Zhang, Shuohang Wang, Wenhao Yu, Yichong Xu, Dan Iter, Qingkai Zeng, Yang Liu, Chenguang Zhu, Meng Jiang
Large language models (LLMs) can perform a wide range of tasks by following natural language instructions, without the necessity of task-specific fine-tuning.
no code implementations • 15 Sep 2023 • Marinka Zitnik, Michelle M. Li, Aydin Wells, Kimberly Glass, Deisy Morselli Gysi, Arjun Krishnan, T. M. Murali, Predrag Radivojac, Sushmita Roy, Anaïs Baudot, Serdar Bozdag, Danny Z. Chen, Lenore Cowen, Kapil Devkota, Anthony Gitter, Sara Gosline, Pengfei Gu, Pietro H. Guzzi, Heng Huang, Meng Jiang, Ziynet Nesibe Kesimoglu, Mehmet Koyuturk, Jian Ma, Alexander R. Pico, Nataša Pržulj, Teresa M. Przytycka, Benjamin J. Raphael, Anna Ritz, Roded Sharan, Yang shen, Mona Singh, Donna K. Slonim, Hanghang Tong, Xinan Holly Yang, Byung-Jun Yoon, Haiyuan Yu, Tijana Milenković
Network biology is an interdisciplinary field bridging computational and biological sciences that has proved pivotal in advancing the understanding of cellular functions and diseases across biological systems and scales.
1 code implementation • 8 Sep 2023 • Eric Inae, Gang Liu, Meng Jiang
Attribute reconstruction is used to predict node or edge features in the pre-training of graph neural networks.
no code implementations • 8 Jul 2023 • Hy Dang, Bang Nguyen, Noah Ziems, Meng Jiang
Our paper investigates the use of discourse embedding techniques to develop a community recommendation system that focuses on mental health support groups on social media.
no code implementations • 27 Jun 2023 • Albert Lu, Meng Jiang
Review score prediction requires review text understanding, a critical real-world application of natural language processing.
no code implementations • 28 May 2023 • Meng Jiang, Hy Dang, Lingbo Tong
Language models (LMs) are being scaled and becoming powerful.
no code implementations • 23 May 2023 • Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, Ashish Sabharwal
ReFeed first generates initial outputs, then utilizes a retrieval model to acquire relevant information from large document collections, and finally incorporates the retrieved information into the in-context demonstration for output refinement, thereby addressing the limitations of LLMs in a more efficient and cost-effective manner.
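The three-stage pipeline reads naturally as function composition. The sketch below uses hypothetical `generate`, `retrieve`, and `refine` callables in place of the actual LLM and retrieval model:

```python
def refeed(question, generate, retrieve, refine, k=3):
    """Generate an initial output, retrieve supporting passages,
    then refine the output with the retrieved evidence in context."""
    initial = generate(question)
    evidence = retrieve(question, initial, k)
    return refine(question, initial, evidence)

# Toy usage: the first draft is wrong, and the retrieved passage fixes it.
out = refeed(
    "capital of France?",
    generate=lambda q: "Lyon",
    retrieve=lambda q, a, k: ["Paris is the capital of France."],
    refine=lambda q, a, ev: "Paris" if any("Paris" in e for e in ev) else a,
)
# out == "Paris"
```

Retrieving with both the question and the initial answer (rather than the question alone) is what lets the feedback step target the draft's specific mistakes.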
no code implementations • 23 May 2023 • Wenhao Yu, Meng Jiang, Peter Clark, Ashish Sabharwal
Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question-answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability.
no code implementations • 23 May 2023 • Mengxia Yu, Zhihan Zhang, Wenhao Yu, Meng Jiang
Comparative reasoning is a process of comparing objects, concepts, or entities to draw conclusions, which constitutes a fundamental cognitive ability.
1 code implementation • 23 May 2023 • Zhihan Zhang, Wenhao Yu, Zheng Ning, Mingxuan Ju, Meng Jiang
Contrast consistency, the ability of a model to make consistently correct predictions in the presence of perturbations, is an essential aspect in NLP.
1 code implementation • 20 May 2023 • Gang Liu, Tong Zhao, Eric Inae, Tengfei Luo, Meng Jiang
The training data balance is achieved by (1) pseudo-labeling more graphs for under-represented labels with a novel regression confidence measurement and (2) augmenting graph examples in latent space for remaining rare labels after data balancing with pseudo-labels.
1 code implementation • 16 May 2023 • Noah Ziems, Wenhao Yu, Zhihan Zhang, Meng Jiang
To overcome this limitation, recent autoregressive search engines replace the dual-encoder architecture by directly generating identifiers for relevant documents in the candidate pool.
1 code implementation • 17 Mar 2023 • Gang Liu, Eric Inae, Tong Zhao, Jiaxin Xu, Tengfei Luo, Meng Jiang
A conventional approach is training a model with the unlabeled graphs on self-supervised tasks and then fine-tuning the model on the prediction tasks.
1 code implementation • 19 Dec 2022 • Meng Jiang
Text data mining is the process of deriving essential information from language text.
1 code implementation • 23 Oct 2022 • Wenhao Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang
However, applying such methods to commonsense reasoning tasks faces two unique challenges, i.e., the lack of a general large-scale corpus for retrieval and a corresponding effective commonsense retriever.
1 code implementation • 7 Oct 2022 • Zhihan Zhang, Wenhao Yu, Chenguang Zhu, Meng Jiang
The entity knowledge is stored in the memory as latent representations, and the memory is pre-trained on Wikipedia along with encoder-decoder parameters.
2 code implementations • 21 Sep 2022 • Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng Jiang
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
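The two-stage generate-then-read flow can be sketched as follows. `generate_doc` and `read` below are hypothetical stand-ins for the prompted LLM calls, not GenRead's actual prompts:

```python
def generate_then_read(question, generate_doc, read, n_docs=2):
    """First generate contextual documents, then read them for the answer."""
    docs = [generate_doc(question, i) for i in range(n_docs)]
    return read(question, docs)

# Toy usage: each generated "document" happens to contain the answer.
ans = generate_then_read(
    "who wrote Hamlet?",
    generate_doc=lambda q, i: f"doc {i}: Shakespeare wrote Hamlet.",
    read=lambda q, docs: "Shakespeare"
        if any("Shakespeare" in d for d in docs) else "unknown",
)
# ans == "Shakespeare"
```

The key design choice is that no external corpus is consulted: the "retrieved" context is itself sampled from the model's parametric knowledge.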
no code implementations • 11 Aug 2022 • Zijian Hu, Meng Jiang
We originally planned to employ existing models but realized that they processed a math word problem as a sequence or a homogeneous graph of tokens.
no code implementations • 9 Jul 2022 • Gang Liu, Zhihan Zhang, Zheng Ning, Meng Jiang
To enable explainability, recent techniques such as ACCENT and FIA look for counterfactual explanations: specific historical actions of a user whose removal leads to a change in the recommendation result.
1 code implementation • 21 Jun 2022 • Xiaojie Guo, Qingkai Zeng, Meng Jiang, Yun Xiao, Bo Long, Lingfei Wu
Automatic product description generation for e-commerce has witnessed significant advancement in the past decade.
1 code implementation • 6 Jun 2022 • Gang Liu, Tong Zhao, Jiaxin Xu, Tengfei Luo, Meng Jiang
Rationale is defined as a subset of input features that best explains or supports the prediction by machine learning models.
Ranked #1 on Graph Regression on GlassTemp
no code implementations • 29 Apr 2022 • Toby Jia-Jun Li, Yuwen Lu, Jaylexia Clark, Meng Chen, Victor Cox, Meng Jiang, Yang Yang, Tamara Kay, Danielle Wood, Jay Brockman
The AI inequality is caused by (1) the technology divide in who has access to AI technologies in gig work; and (2) the data divide in who owns the data in gig work, which leads to unfair working conditions, a growing pay gap, neglect of workers' diverse preferences, and workers' lack of trust in the platforms.
no code implementations • 7 Apr 2022 • Zhihan Zhang, Wenhao Yu, Mengxia Yu, Zhichun Guo, Meng Jiang
Multi-task learning (MTL) has become increasingly popular in natural language processing (NLP) because it improves the performance of related tasks by exploiting their commonalities and differences.
1 code implementation • NAACL (DLG4NLP) 2022 • Wenhao Yu, Chenguang Zhu, Lianhui Qin, Zhihan Zhang, Tong Zhao, Meng Jiang
A set of knowledge experts seek diverse reasoning on KG to encourage various generation outputs.
1 code implementation • 17 Feb 2022 • Tong Zhao, Wei Jin, Yozen Liu, Yingheng Wang, Gang Liu, Stephan Günnemann, Neil Shah, Meng Jiang
Overall, our work aims to clarify the landscape of existing literature in graph data augmentation and motivates additional work in this area, providing a helpful resource for researchers and practitioners in the broader graph machine learning domain.
1 code implementation • Findings (ACL) 2022 • Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, Meng Jiang
In addition to training with the masked language modeling objective, we propose two novel self-supervised pre-training tasks on word- and sentence-level alignment between the input text sequence and rare word definitions to enhance the language modeling representation with the dictionary.
no code implementations • 4 Aug 2021 • Joseph Kuebler, Lingbo Tong, Meng Jiang
Information extraction (IE) in scientific literature has facilitated many downstream tasks.
2 code implementations • 5 Jun 2021 • Qingkai Zeng, Jinfeng Lin, Wenhao Yu, Jane Cleland-Huang, Meng Jiang
Automatic construction of a taxonomy supports many applications in e-commerce, web search, and question answering.
1 code implementation • NeurIPS 2021 • Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, Meng Jiang
However, the causal relationship between the two variables was largely ignored for learning to predict links on a graph.
Ranked #1 on Link Property Prediction on ogbl-ddi
no code implementations • 3 Jun 2021 • Meng Jiang
Graph neural networks have been widely used for learning representations of nodes for many downstream tasks on graph data.
no code implementations • 25 May 2021 • Shawn Gu, Meng Jiang, Pietro Hiram Guzzi, Tijana Milenkovic
Prediction of node and graph labels are prominent network science tasks.
1 code implementation • EMNLP 2021 • Wenhao Yu, Chenguang Zhu, Tong Zhao, Zhichun Guo, Meng Jiang
Generating paragraphs of diverse contents is important in many applications.
1 code implementation • 17 Feb 2021 • Daheng Wang, Prashant Shiralkar, Colin Lockard, Binxuan Huang, Xin Luna Dong, Meng Jiang
Existing work linearizes table cells and relies heavily on modifying deep language models such as BERT, which only capture information from related cells in the same table.
1 code implementation • 16 Feb 2021 • Zhichun Guo, Chuxu Zhang, Wenhao Yu, John Herr, Olaf Wiest, Meng Jiang, Nitesh V. Chawla
The recent success of graph neural networks has significantly boosted molecular property prediction, advancing activities such as drug discovery.
Ranked #1 on Molecular Property Prediction (1-shot) on Tox21
1 code implementation • 8 Feb 2021 • Jinfeng Lin, Yalin Liu, Qingkai Zeng, Meng Jiang, Jane Cleland-Huang
In this study, we propose a novel framework called Trace BERT (T-BERT) to generate trace links between source code and natural language artifacts.
Transfer Learning • Software Engineering
no code implementations • EMNLP (Eval4NLP) 2021 • Qingkai Zeng, Mengxia Yu, Wenhao Yu, Tianwen Jiang, Meng Jiang
It can be used to validate label consistency (or catch inconsistencies) across multiple sets of NER data annotations.
no code implementations • 1 Jan 2021 • Qing Lu, Weiwen Jiang, Meng Jiang, Jingtong Hu, Sakyasingha Dasgupta, Yiyu Shi
The success of graph neural networks (GNNs) in the past years has aroused growing interest and effort in designing the best models to handle graph-structured data.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Qingkai Zeng, Wenhao Yu, Mengxia Yu, Tianwen Jiang, Tim Weninger, Meng Jiang
The training process of scientific NER models is commonly performed in two steps: i) pre-training a language model with self-supervised tasks on huge amounts of data, and ii) fine-tuning it with a small amount of labeled data.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Chuxu Zhang, Lu Yu, Mandana Saebi, Meng Jiang, Nitesh Chawla
Multi-hop relation reasoning over knowledge base is to generate effective and interpretable relation prediction through reasoning paths.
1 code implementation • 20 Oct 2020 • Tong Zhao, Bo Ni, Wenhao Yu, Zhichun Guo, Neil Shah, Meng Jiang
With Eland, anomaly detection performance at an earlier stage surpasses that of non-augmented methods that require significantly more observed data, by up to 15% on the area under the ROC curve.
1 code implementation • NAACL 2021 • Wenhao Yu, Lingfei Wu, Yu Deng, Qingkai Zeng, Ruchi Mahindru, Sinem Guven, Meng Jiang
In this paper, we propose a novel framework of deep transfer learning to effectively address technical QA across tasks and domains.
3 code implementations • 9 Oct 2020 • Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, Meng Jiang
To address this issue, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
1 code implementation • EMNLP 2020 • Wenhao Yu, Lingfei Wu, Yu Deng, Ruchi Mahindru, Qingkai Zeng, Sinem Guven, Meng Jiang
In recent years, the need for community technical question-answering sites has increased significantly.
2 code implementations • EMNLP 2021 • Xiangyu Dong, Wenhao Yu, Chenguang Zhu, Meng Jiang
Our model has a multi-step decoder that injects the entity types into the process of entity mention generation.
no code implementations • 15 Sep 2020 • Meng Jiang, Taeho Jung, Ryan Karl, Tong Zhao
Given video data from multiple personal devices or street cameras, can we exploit the structural and dynamic information to learn dynamic representation of objects for applications such as distributed surveillance, without storing data at a central server that leads to a violation of user privacy?
1 code implementation • 25 Jul 2020 • Daheng Wang, Zhihan Zhang, Yihong Ma, Tong Zhao, Tianwen Jiang, Nitesh V. Chawla, Meng Jiang
In this work, we present a novel framework called CoEvoGNN for modeling dynamic attributed graph sequence.
no code implementations • 16 Jul 2020 • Zhiyu Liu, Meng Jiang, Hai Lin
For knowledge representation, we use a graph-based spatial temporal logic (GSTL) to capture spatial and temporal information of related skills demonstrated by demo videos.
no code implementations • 17 Jun 2020 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts.
2 code implementations • 11 Jun 2020 • Tong Zhao, Yozen Liu, Leonardo Neves, Oliver Woodford, Meng Jiang, Neil Shah
Our work shows that neural edge predictors can effectively encode class-homophilic structure to promote intra-class edges and demote inter-class edges in given graph structure, and our main contribution introduces the GAug graph data augmentation framework, which leverages these insights to improve performance in GNN-based node classification via edge prediction.
Ranked #1 on Node Classification on Flickr
1 code implementation • 11 Jun 2020 • Daheng Wang, Meng Jiang, Munira Syed, Oliver Conway, Vishal Juneja, Sriram Subramanian, Nitesh V. Chawla
The user embeddings preserve spatial patterns and temporal patterns of a variety of periodicities (e.g., hourly, weekly, and weekday patterns).
no code implementations • WS 2020 • Yang Zhou, Tong Zhao, Meng Jiang
Textual patterns (e.g., Country's president Person) are specified and/or generated for extracting factual information from unstructured data.
no code implementations • ACL 2020 • Wenhao Yu, Lingfei Wu, Qingkai Zeng, Shu Tao, Yu Deng, Meng Jiang
Existing methods learned semantic representations with dual encoders or dual variational auto-encoders.
no code implementations • NAACL 2021 • Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, Meng Jiang
Automatic abstractive summaries are found to often distort or fabricate facts in the article.
no code implementations • 12 Mar 2020 • Mandana Saebi, Steven Krieg, Chuxu Zhang, Meng Jiang, Nitesh Chawla
Path-based relational reasoning over knowledge graphs has become increasingly popular due to a variety of downstream applications such as question answering in dialogue systems, fact prediction, and recommender systems.
1 code implementation • 28 Jan 2020 • Bo Ni, Zhichun Guo, Jianing Li, Meng Jiang
Recently, due to the booming influence of online social networks, detecting fake news has drawn significant attention from both academic communities and the general public.
1 code implementation • 26 Nov 2019 • Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, Nitesh V. Chawla
Knowledge graphs (KGs) serve as useful resources for various natural language processing applications.
no code implementations • WS 2019 • Qingkai Zeng, Mengxia Yu, Wenhao Yu, JinJun Xiong, Yiyu Shi, Meng Jiang
On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts.
no code implementations • IJCNLP 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, Meng Jiang
In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences.
1 code implementation • 7 Oct 2019 • Huaxiu Yao, Chuxu Zhang, Ying Wei, Meng Jiang, Suhang Wang, Junzhou Huang, Nitesh V. Chawla, Zhenhui Li
There have been extensive studies on the challenging problem of semi-supervised node classification.
no code implementations • 15 Sep 2019 • Tianchen Wang, JinJun Xiong, Xiaowei Xu, Meng Jiang, Yiyu Shi, Haiyun Yuan, Meiping Huang, Jian Zhuang
Cardiac magnetic resonance imaging (MRI) is an essential tool for MRI-guided surgery and real-time intervention.
no code implementations • 26 Jun 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, Meng Jiang
Conditions are essential in the statements of biological literature.
2 code implementations • 22 Dec 2018 • Chao Zhang, Fangbo Tao, Xiusi Chen, Jiaming Shen, Meng Jiang, Brian Sadler, Michelle Vanni, Jiawei Han
Our method, TaxoGen, uses term embeddings and hierarchical clustering to construct a topic taxonomy in a recursive fashion.
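Recursive embedding-based clustering of this kind can be sketched with a toy splitter. The 1-D "embeddings" and the median split below are illustrative simplifications, not TaxoGen's actual term embeddings or spherical clustering:

```python
def build_taxonomy(terms, embed, max_leaf=2, depth=0, max_depth=3):
    """Recursively bisect terms by embedding value into a nested taxonomy."""
    if len(terms) <= max_leaf or depth >= max_depth:
        return terms
    # Split at the median embedding value.
    pivot = sorted(embed(t) for t in terms)[len(terms) // 2]
    left = [t for t in terms if embed(t) < pivot]
    right = [t for t in terms if embed(t) >= pivot]
    if not left or not right:  # degenerate split; stop recursing
        return terms
    return [build_taxonomy(left, embed, max_leaf, depth + 1, max_depth),
            build_taxonomy(right, embed, max_leaf, depth + 1, max_depth)]

# Toy usage: classifier terms cluster apart from clustering terms.
emb = {"cnn": 0.1, "rnn": 0.2, "svm": 0.8, "kmeans": 0.9}
tree = build_taxonomy(list(emb), emb.get)
# tree == [["cnn", "rnn"], ["svm", "kmeans"]]
```

The nested-list output mirrors how each recursion level becomes one level of the topic taxonomy.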
Databases
no code implementations • 26 Feb 2018 • Jinglan Liu, Jiaxin Zhang, Yukun Ding, Xiaowei Xu, Meng Jiang, Yiyu Shi
This work explores the binarization of the deconvolution-based generator in a GAN for memory saving and speedup of image construction.
no code implementations • 13 Mar 2017 • Meng Jiang, Jingbo Shang, Taylor Cassidy, Xiang Ren, Lance M. Kaplan, Timothy P. Hanratty, Jiawei Han
We propose an efficient framework, called MetaPAD, which discovers meta patterns from massive corpora with three techniques: (1) it develops a context-aware segmentation method to carefully determine the boundaries of patterns with a learnt pattern quality assessment function, which avoids costly dependency parsing and generates high-quality patterns; (2) it identifies and groups synonymous meta patterns from multiple facets: their types, contexts, and extractions; and (3) it examines type distributions of entities in the instances extracted by each group of patterns, and looks for appropriate type levels to make discovered patterns precise.
4 code implementations • 15 Feb 2017 • Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R. Voss, Jiawei Han
As one of the fundamental tasks in text analysis, phrase mining aims at extracting quality phrases from a text corpus.
no code implementations • 31 Oct 2016 • Jingbo Shang, Meng Jiang, Wenzhu Tong, Jinfeng Xiao, Jian Peng, Jiawei Han
In the literature, two series of models have been proposed to address prediction problems including classification and regression.