1 code implementation • 18 Nov 2023 • Haonan Yuan, Qingyun Sun, Xingcheng Fu, Ziwei Zhang, Cheng Ji, Hao Peng, JianXin Li
To the best of our knowledge, we are the first to study OOD generalization on dynamic graphs from the environment learning perspective.
no code implementations • 16 Nov 2023 • Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith
Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the hypothesis in NLI) and the label; consequently, models trained on that part alone outperform chance.
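A standard probe for such artifacts is a partial-input baseline: train a classifier on the suspect part alone and check whether it beats chance. A minimal sketch, assuming a toy NLI-style setup (the data and model here are illustrative, not the paper's):

```python
# Hypothesis-only (partial-input) baseline for spotting dataset artifacts.
# Toy data for illustration; a real probe would use an NLI dataset like SNLI.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

hypotheses = ["A man is sleeping.", "Nobody is outside.", "A dog plays fetch.",
              "The woman is eating.", "No one is swimming.", "Kids are smiling."]
labels = ["contradiction", "contradiction", "entailment",
          "entailment", "contradiction", "entailment"]

X_train, X_test, y_train, y_test = train_test_split(
    hypotheses, labels, test_size=0.5, random_state=0, stratify=labels)

# Featurize the hypothesis only; the premise is deliberately withheld.
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(X_train), y_train)

# Accuracy well above chance would signal label-correlated artifacts.
print("hypothesis-only accuracy:", clf.score(vec.transform(X_test), y_test))
```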
no code implementations • 16 Nov 2023 • Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, Hao Peng
When presented with such unanswerable questions, an LLM should appropriately convey uncertainty, and be able to challenge the premise and refuse to generate a response.
no code implementations • 15 Nov 2023 • Xiaozhi Wang, Hao Peng, Yong Guan, Kaisheng Zeng, Jianhui Chen, Lei Hou, Xu Han, Yankai Lin, Zhiyuan Liu, Ruobing Xie, Jie Zhou, Juanzi Li
Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships.
no code implementations • 15 Nov 2023 • Hao Peng, Xiaozhi Wang, Jianhui Chen, Weikai Li, Yunjia Qi, Zimu Wang, Zhili Wu, Kaisheng Zeng, Bin Xu, Lei Hou, Juanzi Li
In this paper, we find that ICL falls short of handling specification-heavy tasks, which are tasks with complicated and extensive task specifications, requiring several hours for ordinary humans to master, such as traditional information extraction tasks.
1 code implementation • 8 Nov 2023 • Xusheng Zhao, Hao Peng, Qiong Dai, Xu Bai, Huailiang Peng, Yanbing Liu, Qinglang Guo, Philip S. Yu
Aspect-based sentiment analysis (ABSA) is dedicated to forecasting the sentiment polarity of aspect terms within sentences.
1 code implementation • 7 Nov 2023 • Zhongfen Deng, Hao Peng, Tao Zhang, Shuaiqi Liu, Wenting Zhao, Yibo Wang, Philip S. Yu
Furthermore, the copy mechanism in the value generator and the value attention module in the value classifier help our model address the data discrepancy issue by focusing only on the relevant part of the input text and ignoring the information that causes the discrepancy, such as sentence structure.
1 code implementation • 6 Nov 2023 • Dongcheng Zou, Senzhang Wang, Xuefeng Li, Hao Peng, Yuandong Wang, Chunyang Liu, Kehua Sheng, Bo Zhang
Based on this, we propose a relative structural entropy-based position encoding and a multi-head attention masking scheme based on multi-layer encoding trees.
1 code implementation • 30 Oct 2023 • Jiaqian Ren, Hao Peng, Lei Jiang, Zhiwei Liu, Jia Wu, Zhengtao Yu, Philip S. Yu
In our observation, however, the calibrated uncertainty estimated from well-trained evidential deep learning networks reflects model performance better than class rarity does.
1 code implementation • 23 Oct 2023 • Jian Guan, Jesse Dodge, David Wadden, Minlie Huang, Hao Peng
Recent progress in natural language processing (NLP) owes much to remarkable advances in large language models (LLMs).
1 code implementation • 20 Oct 2023 • Xiaolong Liu, Liangwei Yang, Zhiwei Liu, Mingdai Yang, Chen Wang, Hao Peng, Philip S. Yu
Collectively, our contributions mark a substantial step towards improving recommendation diversity in KG-informed recommender systems (RecSys).
1 code implementation • 20 Oct 2023 • Mingdai Yang, Zhiwei Liu, Liangwei Yang, Xiaolong Liu, Chen Wang, Hao Peng, Philip S. Yu
On the other hand, pretraining and finetuning on the same dataset leads to a high risk of overfitting.
no code implementations • 17 Oct 2023 • Xusheng Zhao, Hao Liu, Qiong Dai, Hao Peng, Xu Bai, Huailiang Peng
We showcase the effectiveness of MSGT-SL on real-world SL tasks, demonstrating the empirical benefits gained from the graph transformer and multi-omics data.
1 code implementation • 15 Oct 2023 • Tianxiao Shen, Hao Peng, Ruoqi Shen, Yao Fu, Zaid Harchaoui, Yejin Choi
Language models have become the backbone of today's AI systems.
1 code implementation • 8 Oct 2023 • Zhiqin Yang, Yonggang Zhang, Yu Zheng, Xinmei Tian, Hao Peng, Tongliang Liu, Bo Han
Comprehensive experiments demonstrate the efficacy of FedFed in promoting model performance.
1 code implementation • 5 Oct 2023 • Tom Sherborne, Naomi Saphra, Pradeep Dasigi, Hao Peng
We find that TRAM outperforms both sharpness-aware and trust region-based optimization methods on cross-domain language modeling and cross-lingual transfer, where robustness to domain transfer and representation generality are critical for success.
1 code implementation • 29 Sep 2023 • Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi R. Fung, Hao Peng, Heng Ji
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
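Retrieval of this kind is typically a nearest-neighbour lookup over tool descriptions. A hedged sketch using TF-IDF similarity as a stand-in (tool names and the retriever are illustrative; the paper's actual components differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toolset; real toolsets are curated per task.
tools = {
    "sort_table": "sort the rows of a table by a given column",
    "plot_series": "plot a numeric time series as a line chart",
    "extract_dates": "extract all dates mentioned in a text",
}

def retrieve_tools(task: str, k: int = 2):
    """Return the k tool names whose descriptions best match the task."""
    names, descs = zip(*tools.items())
    vec = TfidfVectorizer().fit(list(descs) + [task])
    sims = cosine_similarity(vec.transform([task]), vec.transform(descs))[0]
    return [names[i] for i in sims.argsort()[::-1][:k]]

print(retrieve_tools("order the spreadsheet rows by the revenue column"))
```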
1 code implementation • 25 Sep 2023 • Hao Peng, Xiaozhi Wang, Feng Yao, Zimu Wang, Chuzhao Zhu, Kaisheng Zeng, Lei Hou, Juanzi Li
Event understanding aims at understanding the content and relationship of events within texts, which covers multiple complicated information extraction tasks: event detection, event argument extraction, and event relation extraction.
no code implementations • 19 Sep 2023 • Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji
However, current evaluation protocols often emphasize benchmark performance with single-turn exchanges, neglecting the nuanced interactions among the user, LLMs, and external tools, while also underestimating the importance of natural language feedback from users.
1 code implementation • 5 Sep 2023 • Guangjie Zeng, Hao Peng, Angsheng Li, Zhiwei Liu, Chunyang Liu, Philip S. Yu, Lifang He
In this work, we propose a novel unsupervised Skin Lesion sEgmentation framework based on structural entropy and isolation forest outlier Detection, namely SLED.
no code implementations • 16 Aug 2023 • Mingdai Yang, Zhiwei Liu, Liangwei Yang, Xiaolong Liu, Chen Wang, Hao Peng, Philip S. Yu
With the proliferation of social media, a growing number of users search for and join group activities in their daily lives.
no code implementations • 19 Jul 2023 • Hao Peng, Qingqing Cao, Jesse Dodge, Matthew E. Peters, Jared Fernandez, Tom Sherborne, Kyle Lo, Sam Skjonsberg, Emma Strubell, Darrell Plessas, Iz Beltagy, Evan Pete Walsh, Noah A. Smith, Hannaneh Hajishirzi
In response, we introduce Pentathlon, a benchmark for holistic and realistic evaluation of model efficiency.
1 code implementation • 26 Jun 2023 • Yuwei Cao, Liangwei Yang, Chen Wang, Zhiwei Liu, Hao Peng, Chenyu You, Philip S. Yu
We explore the role of the fine-grained item attributes in bridging the gaps between the existing and the SCS items and pre-train a knowledgeable item-attribute graph for SCS item recommendation.
no code implementations • 21 Jun 2023 • Ziwei Fan, Zhiwei Liu, Hao Peng, Philip S. Yu
We also establish a correlation between the ranks of sequence and item embeddings and the rank of the user-item preference prediction matrix, which can affect recommendation diversity.
1 code implementation • 15 Jun 2023 • Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-li, Xin Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, Chunyang Li, Zheyuan Zhang, Yushi Bai, Yantao Liu, Amy Xin, Nianyi Lin, Kaifeng Yun, Linlu Gong, Jianhui Chen, Zhili Wu, Yunjia Qi, Weikai Li, Yong Guan, Kaisheng Zeng, Ji Qi, Hailong Jin, Jinxin Liu, Yu Gu, Yuan Yao, Ning Ding, Lei Hou, Zhiyuan Liu, Bin Xu, Jie Tang, Juanzi Li
The unprecedented performance of large language models (LLMs) necessitates improvements in evaluations.
1 code implementation • 12 Jun 2023 • Hao Peng, Xiaozhi Wang, Feng Yao, Kaisheng Zeng, Lei Hou, Juanzi Li, Zhiyuan Liu, Weixing Shen
In this paper, we check the reliability of EE evaluations and identify three major pitfalls: (1) The data preprocessing discrepancy makes evaluation results on the same dataset not directly comparable, yet data preprocessing details are rarely noted or specified in papers.
1 code implementation • 26 May 2023 • Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, Tushar Khot
As large language models (LLMs) are continuously being developed, their evaluation becomes increasingly important yet challenging.
1 code implementation • 17 May 2023 • Yao Fu, Hao Peng, Tushar Khot, Mirella Lapata
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing.
1 code implementation • 17 May 2023 • Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji
LeTI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback, which is only provided when the generated program fails to solve the task.
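In spirit, each fine-tuning instance is a plain concatenation; a minimal sketch of how such sequences might be assembled (the field markers are illustrative, not LeTI's actual template):

```python
def build_leti_example(instruction: str, program: str,
                       solved: bool, feedback: str) -> str:
    """Concatenate instruction, LM-generated program, and (on failure) textual
    feedback into one sequence for standard LM fine-tuning.
    Illustrative format; the paper's exact templates and loss masking differ."""
    parts = [f"Instruction:\n{instruction}", f"Program:\n{program}"]
    if not solved:  # feedback is only provided when the program fails
        parts.append(f"Feedback:\n{feedback}")
    return "\n\n".join(parts)

example = build_leti_example(
    instruction="Write a function that returns the factorial of n.",
    program="def fact(n):\n    return n * fact(n - 1)",
    solved=False,
    feedback="RecursionError: missing base case for n == 0.",
)
print(example)
```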
no code implementations • 5 May 2023 • Li Sun, Feiyang Wang, Junda Ye, Hao Peng, Philip S. Yu
On the other hand, contrastive learning boosts the deep graph clustering but usually struggles in either graph augmentation or hard sample mining.
1 code implementation • 24 Apr 2023 • Xianghua Zeng, Hao Peng, Angsheng Li, Chunyang Liu, Lifang He, Philip S. Yu
State abstraction optimizes decision-making by ignoring irrelevant environmental information in reinforcement learning with rich observations.
1 code implementation • 11 Apr 2023 • Xingcheng Fu, Yuecen Wei, Qingyun Sun, Haonan Yuan, Jia Wu, Hao Peng, JianXin Li
We find that training on labeled nodes with different hierarchical properties has a significant impact on node classification tasks, and we confirm this in our experiments.
1 code implementation • 6 Apr 2023 • Ziwei Fan, Ke Xu, Zhang Dong, Hao Peng, Jiawei Zhang, Philip S. Yu
Moreover, we show that the inclusion of user-user and item-item correlations can improve recommendations for users with both abundant and insufficient interactions.
1 code implementation • 3 Apr 2023 • Xianghua Zeng, Hao Peng, Angsheng Li
Role-based learning is a promising approach to improving the performance of Multi-Agent Reinforcement Learning (MARL).
2 code implementations • 17 Mar 2023 • Dongcheng Zou, Hao Peng, Xiang Huang, Renyu Yang, JianXin Li, Jia Wu, Chunyang Liu, Philip S. Yu
Graph Neural Networks (GNNs) are de facto solutions to structural data learning.
1 code implementation • 10 Mar 2023 • Yingguang Yang, Renyu Yang, Hao Peng, Yangyang Li, Tong Li, Yong Liao, Pengyuan Zhou
In particular, a global generator is used to extract the knowledge of global data distribution and distill it into each client's local model.
1 code implementation • 2 Mar 2023 • Yuhu Shang, Xuexiong Luo, Lihong Wang, Hao Peng, Xiankun Zhang, Yimeng Ren, Kun Liang
To reduce instructors' repetitive and complex work, the exam paper generation (EPG) technique, which aims to automatically generate high-quality exam papers according to instructor-specified assessment criteria, has become a salient topic in the intelligent education field.
no code implementations • 18 Feb 2023 • Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, JianXin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, Lichao Sun
This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities.
no code implementations • 10 Feb 2023 • Lingfeng Zhong, Jia Wu, Qian Li, Hao Peng, Xindong Wu
A knowledge graph is built in three steps: knowledge acquisition, knowledge refinement, and knowledge evolution.
2 code implementations • 30 Jan 2023 • Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot
By paying the price of decreased generic ability, we can clearly lift the scaling curve of models smaller than 10B towards specialized multi-step math reasoning ability.
1 code implementation • 28 Jan 2023 • Cheng Ji, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Qingyun Sun, Philip S. Yu
Contrastive Learning (CL) has proven to be a powerful self-supervised approach for a wide range of domains, including computer vision and graph representation learning.
1 code implementation • 28 Jan 2023 • Ziwei Fan, Zhiwei Liu, Hao Peng, Philip S Yu
Wasserstein Discrepancy Measurement builds upon the 2-Wasserstein distance, which is more robust, more efficient in small batch sizes, and able to model the uncertainty of stochastic augmentation processes.
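For diagonal Gaussian embeddings the 2-Wasserstein distance has a closed form, which is what makes it cheap and stable in small batches. A minimal sketch of that closed form (an illustration of the distance itself, not the paper's full measurement):

```python
import numpy as np

def w2_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """Closed-form 2-Wasserstein distance between N(mu1, diag(sigma1^2))
    and N(mu2, diag(sigma2^2)):
        W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2
    """
    mu1, sigma1, mu2, sigma2 = map(np.asarray, (mu1, sigma1, mu2, sigma2))
    return np.sqrt(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))

# Two stochastically augmented item representations as Gaussian embeddings.
print(w2_diag_gaussians([0.0, 1.0], [0.5, 0.5], [1.0, 1.0], [0.4, 0.6]))
```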
no code implementations • 18 Jan 2023 • Guojie Tang, Wenchao Xue, Hao Peng, Yanlong Zhao, Zhijun Yang
In particular, the algorithm for calculating the tracking error caused by a single ESO's estimation error is constructed.
no code implementations • 14 Jan 2023 • Zhenyu Yang, Ge Zhang, Jia Wu, Jian Yang, Quan Z. Sheng, Shan Xue, Chuan Zhou, Charu Aggarwal, Hao Peng, Wenbin Hu, Edwin Hancock, Pietro Liò
Traditional approaches to learning a set of graphs heavily rely on hand-crafted features, such as substructures.
no code implementations • 30 Dec 2022 • Qingyun Sun, JianXin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu
Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships.
no code implementations • 30 Nov 2022 • Li Sun, Junda Ye, Hao Peng, Feiyang Wang, Philip S. Yu
On the one hand, existing methods work with the zero-curvature Euclidean space, and largely ignore the fact that curvature varies over the coming graph sequence.
1 code implementation • 14 Nov 2022 • Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou
It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.
1 code implementation • 8 Nov 2022 • Hao Peng, Xiaozhi Wang, Shengding Hu, Hailong Jin, Lei Hou, Juanzi Li, Zhiyuan Liu, Qun Liu
We believe this is a critical bottleneck for realizing human-like cognition in PLMs.
1 code implementation • 7 Nov 2022 • Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz
Our results motivate research on simpler alternatives to input-dependent attention, as well as on methods for better utilization of this mechanism in the Transformer architecture.
1 code implementation • 2 Nov 2022 • Mingdai Yang, Zhiwei Liu, Liangwei Yang, Xiaolong Liu, Chen Wang, Hao Peng, Philip S. Yu
PA layers efficiently learn the relatedness of non-neighbor nodes to improve the information propagation to users.
1 code implementation • 24 Oct 2022 • Ziwei Fan, Zhiwei Liu, Chen Wang, Peijie Huang, Hao Peng, Philip S. Yu
However, it remains a significant challenge to model auxiliary item relationships in SR. To simultaneously model high-order item-item transitions in sequences and auxiliary item relationships, we propose a Multi-relational Transformer capable of modeling auxiliary item relationships for SR (MT4SR).
1 code implementation • 18 Oct 2022 • Fanzhen Liu, Xiaoxiao Ma, Jia Wu, Jian Yang, Shan Xue, Amin Beheshti, Chuan Zhou, Hao Peng, Quan Z. Sheng, Charu C. Aggarwal
To bridge the gaps, this paper devises a novel Data Augmentation-based Graph Anomaly Detection (DAGAD) framework for attributed graphs, equipped with three specially designed modules: 1) an information fusion module employing graph neural network encoders to learn representations, 2) a graph data augmentation module that fertilizes the training set with generated samples, and 3) an imbalance-tailored learning module to discriminate the distributions of the minority (anomalous) and majority (normal) classes.
1 code implementation • 16 Oct 2022 • Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith
Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations.
1 code implementation • 14 Oct 2022 • Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith
Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text.
no code implementations • 3 Oct 2022 • Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot
In this work, we propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning.
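The selection heuristic is simple: prefer in-context exemplars whose reasoning chains have the most steps. A rough sketch, treating newline-separated lines as steps (a simplification of the paper's criterion):

```python
def select_complex_exemplars(exemplars, k=8):
    """Pick the k exemplars with the most reasoning steps.
    'Complexity' is approximated here by the number of newline-separated
    steps in the chain of thought -- a simplification."""
    return sorted(exemplars,
                  key=lambda ex: len(ex["chain_of_thought"].splitlines()),
                  reverse=True)[:k]

exemplars = [
    {"question": "q1", "chain_of_thought": "step 1\nstep 2\nstep 3\nstep 4"},
    {"question": "q2", "chain_of_thought": "step 1\nstep 2"},
    {"question": "q3", "chain_of_thought": "step 1\nstep 2\nstep 3"},
]
print([ex["question"] for ex in select_complex_exemplars(exemplars, k=2)])
# -> ['q1', 'q3']
```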
1 code implementation • 2 Oct 2022 • Yuecen Wei, Xingcheng Fu, Qingyun Sun, Hao Peng, Jia Wu, Jinyan Wang, Xianxian Li
To address this issue, we propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP, which provides a double guarantee on graph features and topology.
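HeteDP's full mechanism is beyond a snippet, but the underlying differential privacy primitive is noise calibrated to sensitivity and budget. A minimal sketch of the classic Laplace mechanism (the primitive only, not HeteDP itself):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with epsilon-differential privacy by adding
    Laplace noise of scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g., privatize an aggregate graph statistic with sensitivity 1, epsilon 0.5.
print(laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5))
```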
1 code implementation • 27 Sep 2022 • Hong Liu, Hao Peng, Zhijian Ou, Juanzi Li, Yi Huang, Junlan Feng
Recently, a class of task-oriented dialogue (TOD) datasets collected through Wizard-of-Oz simulated games has emerged.
1 code implementation • 4 Sep 2022 • Jiaqian Ren, Lei Jiang, Hao Peng, Lingjuan Lyu, Zhiwei Liu, Chaochao Chen, Jia Wu, Xu Bai, Philip S. Yu
Integrating multiple online social networks (OSNs) has important implications for many downstream social mining tasks, such as user preference modelling, recommendation, and link prediction.
no code implementations • 30 Aug 2022 • Li Sun, Junda Ye, Hao Peng, Philip S. Yu
To bridge this gap, we make the first attempt to study the problem of self-supervised temporal graph representation learning in the general Riemannian space, supporting the time-varying curvature to shift among hyperspherical, Euclidean and hyperbolic spaces.
1 code implementation • 17 Aug 2022 • Qingyun Sun, JianXin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng Ji, Qian Li, Philip S. Yu
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes, which significantly damages the performance of GNNs.
2 code implementations • 9 Aug 2022 • Ruitong Zhang, Hao Peng, Yingtong Dou, Jia Wu, Qingyun Sun, Jingyi Zhang, Philip S. Yu
DBSCAN is widely used in many scientific and engineering fields because of its simplicity and practicality.
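For reference, the classic DBSCAN interface that scalable variants typically preserve, shown here via scikit-learn (illustrative usage, unrelated to the paper's own implementation):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one far-away outlier.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
              [8.0, 8.1], [8.2, 7.9], [7.9, 8.0],
              [50.0, 50.0]])

# eps: neighborhood radius; min_samples: points required to form a core point.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
print(labels)  # outliers are labeled -1
```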
1 code implementation • 6 Jul 2022 • Zhijian Ou, Junlan Feng, Juanzi Li, Yakun Li, Hong Liu, Hao Peng, Yi Huang, Jiangjiang Zhao
A challenge on Semi-Supervised and Reinforced Task-Oriented Dialog Systems, co-located with the SereTOD Workshop at EMNLP 2022.
2 code implementations • 21 Jun 2022 • Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu
To bridge this gap, we present, to the best of our knowledge, the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs, called BOND, with the following highlights.
no code implementations • 31 May 2022 • Ge Zhang, Jia Wu, Jian Yang, Shan Xue, Wenbin Hu, Chuan Zhou, Hao Peng, Quan Z. Sheng, Charu Aggarwal
To frame this survey, we propose a systematic taxonomy covering GLNNs upon deep neural networks, graph neural networks, and graph pooling.
no code implementations • 24 May 2022 • Jiaqian Ren, Lei Jiang, Hao Peng, Zhiwei Liu, Jia Wu, Philip S. Yu
To incorporate temporal information into the message passing scheme, we introduce a novel temporal-aware aggregator which assigns weights to neighbours according to an adaptive time exponential decay formula.
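The decay idea itself is compact: a neighbour observed Δt ago contributes with weight proportional to exp(−λΔt), normalized across neighbours. A hedged sketch with a fixed λ (in the paper the decay is adaptive):

```python
import numpy as np

def temporal_decay_weights(delta_t, lam=0.1):
    """Aggregation weights that decay exponentially with time gaps:
    w_i ∝ exp(-lam * delta_t_i), normalized to sum to 1.
    In the paper lam adapts during training; here it is a fixed constant."""
    w = np.exp(-lam * np.asarray(delta_t, dtype=float))
    return w / w.sum()

# Neighbour messages seen 1, 5, and 20 time units ago.
print(temporal_decay_weights([1.0, 5.0, 20.0]))  # recent neighbours dominate
```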
1 code implementation • 19 May 2022 • Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, Noah A. Smith
Our extensive evaluations on machine translation and scientific paper summarization demonstrate that Twist decoding substantially outperforms each model decoded in isolation over various scenarios, including cases where domain-specific and general-purpose models are both available.
1 code implementation • Findings (NAACL) 2022 • Yuwei Cao, William Groves, Tanay Kumar Saha, Joel R. Tetreault, Alex Jaimes, Hao Peng, Philip S. Yu
To date, work in this area has mostly focused on English as there is a scarcity of labeled data for other languages.
1 code implementation • 26 Apr 2022 • Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, George H. Chen, Zhihao Jia, Philip S. Yu
PyGOD is an open-source Python library for detecting outliers on graph data.
no code implementations • 18 Mar 2022 • Xusheng Zhao, Jia Wu, Hao Peng, Amin Beheshti, Jessica J. M. Monaghan, David Mcalpine, Heivet Hernandez-Perez, Mark Dras, Qiong Dai, Yangyang Li, Philip S. Yu, Lifang He
Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), enable us to model the human brain as a brain network or connectome.
1 code implementation • 3 Mar 2022 • JianXin Li, Xingcheng Fu, Qingyun Sun, Cheng Ji, Jiajun Tan, Jia Wu, Hao Peng
In this paper, we propose a novel Curvature Graph Generative Adversarial Network, the first GAN-based graph representation method in the Riemannian geometric manifold.
1 code implementation • 17 Jan 2022 • Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan
To solve the unsupervised GSL problem, we propose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME) with the aid of self-supervised contrastive learning.
1 code implementation • 16 Jan 2022 • Ziwei Fan, Zhiwei Liu, Alice Wang, Zahra Nazari, Lei Zheng, Hao Peng, Philip S. Yu
We further argue that BPR loss has no constraint on positive and sampled negative items, which misleads the optimization.
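For context, the standard BPR objective only pushes positive scores above sampled negative ones; neither score is constrained in absolute terms, which is the weakness being pointed at. A minimal sketch:

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """Standard BPR loss: -mean(log(sigmoid(pos - neg))).
    Note it only constrains the *difference*: both scores can drift
    arbitrarily as long as pos stays above neg."""
    diff = np.asarray(pos_scores) - np.asarray(neg_scores)
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-diff)))))

print(bpr_loss([2.0, 0.5], [1.0, 1.5]))
```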
1 code implementation • 21 Dec 2021 • Hao Peng, Hang Li, Lei Hou, Juanzi Li, Chao Qiao
We also develop a dataset for the problem using an existing MKB.
1 code implementation • 16 Dec 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications.
no code implementations • 10 Dec 2021 • Li Sun, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, Philip S. Yu
Instead of working on one single constant-curvature space, we construct a mixed-curvature space via the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms for learning and fusing the representations across these component spaces.
1 code implementation • TIST 2021 • Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li
Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.
no code implementations • 28 Nov 2021 • Xiaohan Li, Zhiwei Liu, Stephen Guo, Zheng Liu, Hao Peng, Philip S. Yu, Kannan Achan
In this paper, we propose a novel Reinforced Attentive Multi-relational Graph Neural Network (RAM-GNN) to the pre-train user and item embeddings on the user and item graph prior to the recommendation step.
no code implementations • 21 Nov 2021 • Zhiwei Liu, Liangwei Yang, Ziwei Fan, Hao Peng, Philip S. Yu
However, they all require centralized storage of the social links and item interactions of users, which leads to privacy concerns.
no code implementations • 21 Nov 2021 • Jun Yu, Zhaoming Kong, Aditya Kendre, Hao Peng, Carl Yang, Lichao Sun, Alex Leow, Lifang He
This paper presents a novel graph-based kernel learning approach for connectome analysis.
no code implementations • 20 Nov 2021 • Yizhen Zheng, Ming Jin, Shirui Pan, Yuan-Fang Li, Hao Peng, Ming Li, Zhao Li
To overcome the aforementioned problems, we introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming, namely G-Zoom, to learn node representations by leveraging the proposed adjusted zooming scheme.
1 code implementation • 15 Oct 2021 • Xingcheng Fu, JianXin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu
Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and are thus more effective at capturing the hierarchical structures of graphs in node representation learning.
no code implementations • 10 Oct 2021 • Hao Peng, Guofeng Tong, Zheng Li, Yaqi Wang, Yuyuan Shao
The SGNet proposed in this paper achieves state-of-the-art results for 3D object detection on the KITTI dataset, especially in detecting small objects such as cyclists.
no code implementations • ACL 2022 • Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith
One way to improve the efficiency is to bound the memory size.
no code implementations • 2 Sep 2021 • Hongyin Zhu, Hao Peng, Zhiheng Lyu, Lei Hou, Juanzi Li, Jinghui Xiao
To obtain the aforementioned multi-format text, we construct a corpus in the tourism domain and conduct experiments on 5 tourism NLP datasets.
no code implementations • 23 Aug 2021 • Qian Li, Shu Guo, Jia Wu, JianXin Li, Jiawei Sheng, Lihong Wang, Xiaohan Dong, Hao Peng
It ignores meaningful associations among event types and argument roles, leading to relatively poor performance for less frequent types/roles.
1 code implementation • 6 Aug 2021 • Jiaqian Ren, Hao Peng, Lei Jiang, Jia Wu, Yongxin Tong, Lihong Wang, Xu Bai, Bo Wang, Qiang Yang
Experiments on both synthetic and real-world datasets show the framework to be highly effective at detection in both multilingual data and in languages where training samples are scarce.
1 code implementation • 31 Jul 2021 • Zhaoming Kong, Lichao Sun, Hao Peng, Liang Zhan, Yong Chen, Lifang He
In this paper, we propose MGNet, a simple and effective multiplex graph convolutional network (GCN) model for multimodal brain network analysis.
1 code implementation • ACL 2022 • Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner
We craft a set of operations to modify the control codes, which in turn steer generation towards targeted attributes.
no code implementations • 5 Jul 2021 • Qian Li, JianXin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, Philip S. Yu
Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for a comprehensive and updated survey.
1 code implementation • 3 Jul 2021 • Hao Peng, Pei Chen, Rui Liu, Luonan Chen
Making robust predictions based only on observed data from a nonlinear system is a difficult task.
1 code implementation • 23 Jun 2021 • Qian Li, Hao Peng, JianXin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, Zheng Wang
Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually.
no code implementations • 28 May 2021 • Junnan Liu, Qianren Mao, Bang Liu, Hao Peng, Hongdong Zhu, JianXin Li
In this paper, we argue that this limitation can be overcome by a semi-supervised approach: consistency training, which leverages large amounts of unlabeled data to improve the performance of supervised learning over a small corpus.
1 code implementation • 22 May 2021 • JianXin Li, Xingcheng Fu, Hao Peng, Senzhang Wang, Shijie Zhu, Qingyun Sun, Philip S. Yu, Lifang He
With the prevalence of graph data in real-world applications, many methods have been proposed in recent years to learn high-quality graph embedding vectors for various types of graphs.
1 code implementation • 17 May 2021 • Hao Peng, Haoran Li, Yangqiu Song, Vincent Zheng, JianXin Li
However, for multiple cross-domain knowledge graphs, state-of-the-art embedding models cannot make full use of the data from different knowledge domains while preserving the privacy of exchanged data.
1 code implementation • 7 May 2021 • Gongxu Luo, JianXin Li, Jianlin Su, Hao Peng, Carl Yang, Lichao Sun, Philip S. Yu, Lifang He
Based on them, we design MinGE to directly calculate the ideal node embedding dimension for any graph.
no code implementations • 4 May 2021 • Sicong Che, Hao Peng, Lichao Sun, Yong Chen, Lifang He
This paper aims to provide a generic Federated Multi-View Learning (FedMV) framework for multi-view data leakage prevention, which is based on different types of local data availability and can accommodate two types of problems: Vertical Federated Multi-View Learning (V-FedMV) and Horizontal Federated Multi-View Learning (H-FedMV).
1 code implementation • 16 Apr 2021 • JianXin Li, Hao Peng, Yuwei Cao, Yingtong Dou, Hekai Zhang, Philip S. Yu, Lifang He
Furthermore, they cannot fully capture the content-based correlations between nodes, as they either do not use the self-attention mechanism or only use it to consider the immediate neighbors of each node, ignoring the higher-order neighbors.
1 code implementation • 16 Apr 2021 • Hao Peng, Ruitong Zhang, Yingtong Dou, Renyu Yang, Jingyi Zhang, Philip S. Yu
To avoid the embedding over-assimilation among different types of nodes, we employ a label-aware neural similarity measure to ascertain the most similar neighbors based on node attributes.
Ranked #3 on Node Classification on Amazon-Fraud
1 code implementation • NAACL 2021 • Zhongfen Deng, Hao Peng, Dongxiao He, JianXin Li, Philip S. Yu
The second one encourages the structure encoder to learn better representations with desired characteristics for all labels which can better handle label imbalance in hierarchical text classification.
no code implementations • 6 Apr 2021 • Li Sun, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, Philip S. Yu
To model the uncertainty, we devise a hyperbolic graph variational autoencoder built upon the proposed TGNN to generate stochastic node representations of hyperbolic normal distributions.
1 code implementation • 2 Apr 2021 • Hao Peng, JianXin Li, Yangqiu Song, Renyu Yang, Rajiv Ranjan, Philip S. Yu, Lifang He
Third, we propose a streaming social event detection and evolution discovery framework for HINs based on meta-path similarity search, historical information about meta-paths, and heterogeneous DBSCAN clustering method.
1 code implementation • EMNLP 2021 • Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith
Specifically, we propose a swap-then-finetune procedure: in an off-the-shelf pretrained transformer, we replace the softmax attention with its linear-complexity recurrent alternative and then finetune.
Ranked #2 on Machine Translation on WMT2017 Chinese-English
no code implementations • 16 Mar 2021 • Yiying Yang, Xi Yin, Haiqin Yang, Xingjian Fei, Hao Peng, Kaijie Zhou, Kunfeng Lai, Jianping Shen
Entity synonym discovery is crucial for entity-leveraging applications.
no code implementations • ICLR 2021 • Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong
RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism.
Ranked #27 on Machine Translation on IWSLT2014 German-English
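At its core, RFA replaces exp(q·k) with a dot product of random Fourier features, so attention reduces to sums that can be maintained recurrently. A simplified numpy sketch with ℓ2-normalized queries and keys (it omits the paper's gating and exact scaling):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, W):
    """Random Fourier features for the Gaussian kernel:
    phi(q) . phi(k) ~= exp(-||q - k||^2 / 2) for rows of W drawn from N(0, I)."""
    proj = x @ W.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(W.shape[0])

d, D, n = 16, 256, 10
W = rng.standard_normal((D, d))
q = rng.standard_normal(d)
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, 4))

# With unit-norm q and k, exp(q . k) equals the Gaussian kernel up to a
# constant factor, which cancels in the attention normalization below.
q /= np.linalg.norm(q)
K /= np.linalg.norm(K, axis=1, keepdims=True)

scores = phi(q, W) @ phi(K, W).T           # approximate unnormalized weights
out = (scores @ V) / scores.sum()          # linear-complexity attention output

exact = np.exp(q @ K.T)                    # exact softmax-attention weights
print(np.abs(out - (exact @ V) / exact.sum()).max())  # small approximation error
```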
1 code implementation • 6 Feb 2021 • Xiaohang Xu, Hao Peng, Lichao Sun, Md Zakirul Alam Bhuiyan, Lianzhong Liu, Lifang He
Depression is one of the most common mental illnesses, and the symptoms patients show are inconsistent, making it difficult to diagnose in clinical practice and pathological research.
2 code implementations • 21 Jan 2021 • Yuwei Cao, Hao Peng, Jia Wu, Yingtong Dou, JianXin Li, Philip S. Yu
The complexity and streaming nature of social messages make it appealing to address social event detection in an incremental learning setting, where acquiring, preserving, and extending knowledge are major concerns.
1 code implementation • 20 Jan 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Yuanxing Ning, Philip S. Yu, Lifang He
Graph representation learning has attracted increasing research attention.
no code implementations • 17 Jan 2021 • Zheng Liu, Xiaohan Li, Hao Peng, Lifang He, Philip S. Yu
EHRs contain multiple entities and relations and can be viewed as a heterogeneous graph.
1 code implementation • 10 Dec 2020 • Zhaofeng Wu, Hao Peng, Noah A. Smith
For natural language processing systems, two kinds of evidence support the use of text representations from neural language models "pretrained" on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia).
1 code implementation • COLING 2020 • Zhongfen Deng, Hao Peng, Congying Xia, JianXin Li, Lifang He, Philip S. Yu
Review rating prediction of text reviews is a rapidly growing technology with a wide range of applications in natural language processing.
1 code implementation • EMNLP 2020 • Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, Jie Zhou
We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks.
Ranked #23 on Relation Extraction on TACRED
no code implementations • NeurIPS 2020 • Hu Liu, Jing Lu, Xiwei Zhao, Sulong Xu, Hao Peng, Yutong Liu, Zehua Zhang, Jian Li, Junsheng Jin, Yongjun Bao, Weipeng Yan
First, conventional attention mechanisms mostly limit the attention field to a single user's behaviors, which is not suitable in e-commerce, where users often hunt for new demands that are irrelevant to any historical behaviors.
1 code implementation • 26 Sep 2020 • Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu
To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge graph augmented pre-trained language generation model KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output.
1 code implementation • NAACL 2021 • Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan
Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.
no code implementations • 30 Aug 2020 • Qingyun Sun, Hao Peng, Jian-Xin Li, Senzhang Wang, Xiangyu Dong, Liangxuan Zhao, Philip S. Yu, Lifang He
Although these attributes may change, an author's co-authors and research topics do not change frequently with time, which means that papers within a period have similar text and relation information in the academic network.
6 code implementations • 19 Aug 2020 • Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, Philip S. Yu
Finally, the selected neighbors across different relations are aggregated together.
Ranked #5 on Fraud Detection on Amazon-Fraud
1 code implementation • 12 Aug 2020 • Hao Peng, Jian-Xin Li, Zheng Wang, Renyu Yang, Mingzhe Liu, Mingming Zhang, Philip S. Yu, Lifang He
As a departure from prior work, Luce organizes the house data in a heterogeneous information network (HIN) where graph nodes are house entities and attributes that are important for house price valuation.
1 code implementation • 9 Aug 2020 • Shijie Zhu, JianXin Li, Hao Peng, Senzhang Wang, Lifang He
To capture the directed edges between nodes, existing methods mostly learn two embedding vectors for each node, source vector and target vector.
2 code implementations • 2 Aug 2020 • Qian Li, Hao Peng, Jian-Xin Li, Congying Xia, Renyu Yang, Lichao Sun, Philip S. Yu, Lifang He
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
no code implementations • ACL 2020 • Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith
Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.
2 code implementations • 23 Jun 2020 • Shen Wang, Jibing Gong, Jinlong Wang, Wenzheng Feng, Hao Peng, Jie Tang, Philip S. Yu
To address this issue, we leverage both content information and context information to learn the representation of entities via graph convolution network.
2 code implementations • ICLR 2021 • Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith
We show that the speed disadvantage for autoregressive baselines compared to non-autoregressive methods has been overestimated in three aspects: suboptimal layer allocation, insufficient speed measurement, and lack of knowledge distillation.
no code implementations • 18 Jun 2020 • Hu Liu, Jing Lu, Hao Yang, Xiwei Zhao, Sulong Xu, Hao Peng, Zehua Zhang, Wenjie Niu, Xiaokun Zhu, Yongjun Bao, Weipeng Yan
Existing algorithms usually extract visual features using off-the-shelf Convolutional Neural Networks (CNNs) and late fuse the visual and non-visual features for the finally predicted CTR.
1 code implementation • 10 Jun 2020 • Chen Li, Xutan Peng, Hao Peng, Jian-Xin Li, Lihong Wang, Philip S. Yu, Lifang He
Recently, graph-based algorithms have drawn much attention because of their impressive success in semi-supervised setups.
1 code implementation • 16 May 2020 • Hao Peng, Pei Chen, Rui Liu
Making accurate multi-step-ahead prediction for a complex system is a challenge for many practical applications, especially when only short-term time-series data are available.
no code implementations • 13 May 2020 • Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith
Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.
1 code implementation • 1 May 2020 • Zhiwei Liu, Yingtong Dou, Philip S. Yu, Yutong Deng, Hao Peng
In this paper, we introduce these inconsistencies and design a new GNN framework, $\mathsf{GraphConsis}$, to tackle the inconsistency problem: (1) for the context inconsistency, we propose to combine the context embeddings with node features, (2) for the feature inconsistency, we design a consistency score to filter the inconsistent neighbors and generate corresponding sampling probability, and (3) for the relation inconsistency, we learn a relation attention weights associated with the sampled nodes.
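The consistency-score idea is easy to state: score each neighbour by embedding closeness, drop inconsistent neighbours, and normalize the rest into sampling probabilities. A hedged sketch (the paper's exact scoring function and threshold differ):

```python
import numpy as np

def neighbor_sampling_probs(h_center, h_neighbors, threshold=0.1):
    """Consistency-based neighbor filtering, in the spirit of GraphConsis:
    closer embeddings get higher scores, low scores are filtered out, and
    surviving scores are normalized into sampling probabilities."""
    d2 = np.sum((np.asarray(h_neighbors) - np.asarray(h_center)) ** 2, axis=1)
    scores = np.exp(-d2)                # closer neighbors -> higher score
    scores[scores < threshold] = 0.0    # filter inconsistent neighbors
    total = scores.sum()
    return scores / total if total > 0 else scores

h_u = np.array([0.0, 0.0])
neighbors = np.array([[0.1, 0.0], [0.0, 0.2], [3.0, 3.0]])  # last is inconsistent
print(neighbor_sampling_probs(h_u, neighbors))
```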
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Xu Han, Tianyu Gao, Yankai Lin, Hao Peng, Yaoliang Yang, Chaojun Xiao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Relational facts are an important component of human knowledge, which are hidden in vast amounts of text.
1 code implementation • 8 Dec 2019 • Xudong Liu, Ruizhe Wang, Chih-Fan Chen, Minglei Yin, Hao Peng, Shukhan Ng, Xin Li
Inspired by the latest advances in style-based synthesis and face beauty prediction, we propose a novel framework of face beautification.
no code implementations • 7 Dec 2019 • Ruizhe Wang, Chih-Fan Chen, Hao Peng, Xudong Liu, Oliver Liu, Xin Li
We present an approach to generate high fidelity 3D face avatar with a high-resolution UV texture map from a single image.
1 code implementation • 18 Nov 2019 • JianXin Li, Cheng Ji, Hao Peng, Yu He, Yangqiu Song, Xinmiao Zhang, Fanzhang Peng
However, despite the success of current random-walk-based methods, most of them are usually not expressive enough to preserve the personalized higher-order proximity and lack a straightforward objective to theoretically articulate what and how network proximity is preserved.
no code implementations • 7 Sep 2019 • Yu He, Yangqiu Song, Jian-Xin Li, Cheng Ji, Jian Peng, Hao Peng
Heterogeneous information network (HIN) embedding has gained increasing interests recently.
1 code implementation • IJCNLP 2019 • Jesse Dodge, Roy Schwartz, Hao Peng, Noah A. Smith
Our method also highlights the interpretable properties of rational RNNs.
1 code implementation • IJCNLP 2019 • Hao Peng, Roy Schwartz, Noah A. Smith
We present PaLM, a hybrid parser and neural language model.
1 code implementation • 9 Jun 2019 • Hao Peng, Jian-Xin Li, Qiran Gong, Senzhang Wang, Lifang He, Bo Li, Lihong Wang, Philip S. Yu
In this paper, we propose a novel hierarchical taxonomy-aware and attentional graph capsule recurrent CNNs framework for large-scale multi-label text classification.
1 code implementation • 9 Jun 2019 • Hao Peng, Jian-Xin Li, Hao Yan, Qiran Gong, Senzhang Wang, Lin Liu, Lihong Wang, Xiang Ren
Most existing methods focus on learning the structural representations of vertices in a static network, but cannot guarantee an accurate and efficient embedding in a dynamic network scenario.
1 code implementation • 9 Jun 2019 • Hao Peng, Jian-Xin Li, Qiran Gong, Yangqiu Song, Yuanxing Ning, Kunfeng Lai, Philip S. Yu
In this paper, we design an event meta-schema to characterize the semantic relatedness of social events and build an event-based heterogeneous information network (HIN) integrating information from external knowledge base, and propose a novel Pair-wise Popularity Graph Convolutional Network (PP-GCN) based fine-grained social event categorization model.
no code implementations • NAACL 2019 • Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, Dipanjan Das
We propose a novel conditioned text generation model.
no code implementations • 30 Jan 2019 • Xudong Liu, Tao Li, Hao Peng, Iris Chuoying Ouyang, Taehwan Kim, Ruizhe Wang
The concept of beauty has been debated by philosophers and psychologists for centuries, but most definitions are subjective and metaphysical, and deficient in accuracy, generality, and scalability.
no code implementations • 11 Nov 2018 • Hao Peng, Jian-Xin Li, Qiran Gong, Senzhang Wang, Yuanxing Ning, Philip S. Yu
Different from previous convolutional neural networks on graphs, we first design a motif-matching guided subgraph normalization method to capture neighborhood information.
1 code implementation • 14 Oct 2018 • Chen Li, Xutan Peng, Shanghang Zhang, Hao Peng, Philip S. Yu, Min He, Linfeng Du, Lihong Wang
By treating relations and multi-hop paths as two different input sources, we use a feature extractor, which is shared by two downstream components (i.e., relation classifier and source discriminator), to capture shared/similar information between them.
1 code implementation • EMNLP 2018 • Hao Peng, Roy Schwartz, Sam Thomson, Noah A. Smith
We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs.
no code implementations • 16 Jun 2018 • Chao Yang, Taehwan Kim, Ruizhe Wang, Hao Peng, C. -C. Jay Kuo
It has been applied to numerous domains, such as data augmentation, domain adaptation, and unsupervised training.
1 code implementation • ACL 2018 • Hao Peng, Sam Thomson, Noah A. Smith
We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers.
2 code implementations • NAACL 2018 • Hao Peng, Sam Thomson, Swabha Swayamdipta, Noah A. Smith
We present a new approach to learning semantic parsers from multiple datasets, even when the target semantic formalisms are drastically different, and the underlying corpora do not overlap.
no code implementations • 23 Feb 2018 • Chenhao Tan, Hao Peng, Noah A. Smith
We first examine the effect of wording and propose a binary classification framework that controls for both the speaker and the debate situation.
no code implementations • 15 Jan 2018 • Hao Peng, Xiaoli Bai
Due to the lack of information such as space environment conditions and resident space objects' (RSOs') body characteristics, current orbit predictions that are grounded solely on physics-based models may fail to achieve the accuracy required for collision avoidance, and have already led to satellite collisions.
1 code implementation • EMNLP 2017 • Xiao Zhang, Yong Jiang, Hao Peng, Kewei Tu, Dan Goldwasser
In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems.
1 code implementation • ACL 2017 • Hao Peng, Sam Thomson, Noah A. Smith
We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms.
no code implementations • ICML 2017 • Hao Peng, Shandian Zhe, Yuan Qi
Gaussian processes (GPs) are powerful non-parametric function estimators.
5 code implementations • 9 Feb 2016 • Miltiadis Allamanis, Hao Peng, Charles Sutton
Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension.