Search Results for author: Hao Peng

Found 159 papers, 101 papers with code

Environment-Aware Dynamic Graph Learning for Out-of-Distribution Generalization

1 code implementation • 18 Nov 2023 Haonan Yuan, Qingyun Sun, Xingcheng Fu, Ziwei Zhang, Cheng Ji, Hao Peng, JianXin Li

To the best of our knowledge, we are the first to study OOD generalization on dynamic graphs from the environment learning perspective.

Graph Learning Out-of-Distribution Generalization

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

no code implementations • 16 Nov 2023 Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith

Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the hypothesis in NLI) and the label; consequently, models trained only on that part outperform chance.

Natural Language Inference Reading Comprehension

Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown

no code implementations • 16 Nov 2023 Genglin Liu, Xingyao Wang, Lifan Yuan, Yangyi Chen, Hao Peng

When presented with such unanswerable questions, an LLM should appropriately convey uncertainty, and be able to challenge the premise and refuse to generate a response.

Question Answering

MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation

no code implementations • 15 Nov 2023 Xiaozhi Wang, Hao Peng, Yong Guan, Kaisheng Zeng, Jianhui Chen, Lei Hou, Xu Han, Yankai Lin, Zhiyuan Liu, Ruobing Xie, Jie Zhou, Juanzi Li

Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships.

Event Argument Extraction Event Detection +3

When does In-context Learning Fall Short and Why? A Study on Specification-Heavy Tasks

no code implementations • 15 Nov 2023 Hao Peng, Xiaozhi Wang, Jianhui Chen, Weikai Li, Yunjia Qi, Zimu Wang, Zhili Wu, Kaisheng Zeng, Bin Xu, Lei Hou, Juanzi Li

In this paper, we find that ICL falls short of handling specification-heavy tasks, which are tasks with complicated and extensive task specifications, requiring several hours for ordinary humans to master, such as traditional information extraction tasks.

JPAVE: A Generation and Classification-based Model for Joint Product Attribute Prediction and Value Extraction

1 code implementation • 7 Nov 2023 Zhongfen Deng, Hao Peng, Tao Zhang, Shuaiqi Liu, Wenting Zhao, Yibo Wang, Philip S. Yu

Furthermore, the copy mechanism in the value generator and the value attention module in the value classifier help our model address the data discrepancy issue by focusing only on the relevant part of the input text and ignoring other information, such as sentence structure, that causes the discrepancy.

Attribute Value Extraction Multi-Task Learning +1

MultiSPANS: A Multi-range Spatial-Temporal Transformer Network for Traffic Forecast via Structural Entropy Optimization

1 code implementation • 6 Nov 2023 Dongcheng Zou, Senzhang Wang, Xuefeng Li, Hao Peng, Yuandong Wang, Chunyang Liu, Kehua Sheng, Bo Zhang

Based on this, we propose a relative structural entropy-based position encoding and a multi-head attention masking scheme based on multi-layer encoding trees.

Management Time Series +1

Uncertainty-guided Boundary Learning for Imbalanced Social Event Detection

1 code implementation • 30 Oct 2023 Jiaqian Ren, Hao Peng, Lei Jiang, Zhiwei Liu, Jia Wu, Zhengtao Yu, Philip S. Yu

In our observation, the calibrated uncertainty estimated from well-trained evidential deep learning networks reflects model performance better than class rarity does.

Contrastive Learning Event Detection

Language Models Hallucinate, but May Excel at Fact Verification

1 code implementation • 23 Oct 2023 Jian Guan, Jesse Dodge, David Wadden, Minlie Huang, Hao Peng

Recent progress in natural language processing (NLP) owes much to remarkable advances in large language models (LLMs).

Fact Verification

Knowledge Graph Context-Enhanced Diversified Recommendation

1 code implementation • 20 Oct 2023 Xiaolong Liu, Liangwei Yang, Zhiwei Liu, Mingdai Yang, Chen Wang, Hao Peng, Philip S. Yu

Collectively, our contributions constitute a substantial step towards improving recommendation diversity in KG-informed recommender systems.

Knowledge Graphs Recommendation Systems

Multi-omics Sampling-based Graph Transformer for Synthetic Lethality Prediction

no code implementations • 17 Oct 2023 Xusheng Zhao, Hao Liu, Qiong Dai, Hao Peng, Xu Bai, Huailiang Peng

We showcase the effectiveness of MSGT-SL on real-world SL tasks, demonstrating the empirical benefits gained from the graph transformer and multi-omics data.

Edge Classification

TRAM: Bridging Trust Regions and Sharpness Aware Minimization

1 code implementation • 5 Oct 2023 Tom Sherborne, Naomi Saphra, Pradeep Dasigi, Hao Peng

We find that TRAM outperforms both sharpness-aware and trust region-based optimization methods on cross-domain language modeling and cross-lingual transfer, where robustness to domain transfer and representation generality are critical for success.

Cross-Lingual Transfer Domain Generalization +1

CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets

1 code implementation • 29 Sep 2023 Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi R. Fung, Hao Peng, Heng Ji

It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.

Language Modelling Mathematical Reasoning

OmniEvent: A Comprehensive, Fair, and Easy-to-Use Toolkit for Event Understanding

1 code implementation • 25 Sep 2023 Hao Peng, Xiaozhi Wang, Feng Yao, Zimu Wang, Chuzhao Zhu, Kaisheng Zeng, Lei Hou, Juanzi Li

Event understanding aims at understanding the content and relationship of events within texts, which covers multiple complicated information extraction tasks: event detection, event argument extraction, and event relation extraction.

Event Argument Extraction Event Detection +2

MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback

no code implementations • 19 Sep 2023 Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, Heng Ji

However, current evaluation protocols often emphasize benchmark performance with single-turn exchanges, neglecting the nuanced interactions among the user, LLMs, and external tools, while also underestimating the importance of natural language feedback from users.

Decision Making

Unsupervised Skin Lesion Segmentation via Structural Entropy Minimization on Multi-Scale Superpixel Graphs

1 code implementation • 5 Sep 2023 Guangjie Zeng, Hao Peng, Angsheng Li, Zhiwei Liu, Chunyang Liu, Philip S. Yu, Lifang He

In this work, we propose a novel unsupervised Skin Lesion sEgmentation framework based on structural entropy and isolation forest outlier Detection, namely SLED.

Lesion Segmentation Outlier Detection +2

Multi-task Item-attribute Graph Pre-training for Strict Cold-start Item Recommendation

1 code implementation • 26 Jun 2023 Yuwei Cao, Liangwei Yang, Chen Wang, Zhiwei Liu, Hao Peng, Chenyu You, Philip S. Yu

We explore the role of the fine-grained item attributes in bridging the gaps between the existing and the SCS items and pre-train a knowledgeable item-attribute graph for SCS item recommendation.

Multi-Task Learning Recommendation Systems

Addressing the Rank Degeneration in Sequential Recommendation via Singular Spectrum Smoothing

no code implementations • 21 Jun 2023 Ziwei Fan, Zhiwei Liu, Hao Peng, Philip S. Yu

We also establish a correlation between the ranks of sequence and item embeddings and the rank of the user-item preference prediction matrix, which can affect recommendation diversity.

Sequential Recommendation

The Devil is in the Details: On the Pitfalls of Event Extraction Evaluation

1 code implementation • 12 Jun 2023 Hao Peng, Xiaozhi Wang, Feng Yao, Kaisheng Zeng, Lei Hou, Juanzi Li, Zhiyuan Liu, Weixing Shen

In this paper, we check the reliability of EE evaluations and identify three major pitfalls: (1) The data preprocessing discrepancy makes the evaluation results on the same dataset not directly comparable, but the data preprocessing details are not widely noted and specified in papers.

Event Argument Extraction Event Detection +1

Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance

1 code implementation • 26 May 2023 Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, Tushar Khot

As large language models (LLMs) are continuously being developed, their evaluation becomes increasingly important yet challenging.

Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback

1 code implementation • 17 May 2023 Yao Fu, Hao Peng, Tushar Khot, Mirella Lapata

We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing.

Language Modelling Rolling Shutter Correction

LeTI: Learning to Generate from Textual Interactions

1 code implementation • 17 May 2023 Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji

LeTI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback, which is only provided when the generated program fails to solve the task.

Code Generation Event Argument Extraction

Contrastive Graph Clustering in Curvature Spaces

no code implementations • 5 May 2023 Li Sun, Feiyang Wang, Junda Ye, Hao Peng, Philip S. Yu

On the other hand, contrastive learning boosts the deep graph clustering but usually struggles in either graph augmentation or hard sample mining.

Clustering Contrastive Learning +1

Hierarchical State Abstraction Based on Structural Information Principles

1 code implementation • 24 Apr 2023 Xianghua Zeng, Hao Peng, Angsheng Li, Chunyang Liu, Lifang He, Philip S. Yu

State abstraction optimizes decision-making by ignoring irrelevant environmental information in reinforcement learning with rich observations.

Continuous Control Decision Making +1

Hyperbolic Geometric Graph Representation Learning for Hierarchy-imbalance Node Classification

1 code implementation • 11 Apr 2023 Xingcheng Fu, Yuecen Wei, Qingyun Sun, Haonan Yuan, Jia Wu, Hao Peng, JianXin Li

We find that training labeled nodes with different hierarchical properties has a significant impact on node classification tasks, and we confirm this in our experiments.

Graph Representation Learning Node Classification

Graph Collaborative Signals Denoising and Augmentation for Recommendation

1 code implementation • 6 Apr 2023 Ziwei Fan, Ke Xu, Zhang Dong, Hao Peng, Jiawei Zhang, Philip S. Yu

Moreover, we show that the inclusion of user-user and item-item correlations can improve recommendations for users with both abundant and insufficient interactions.

Collaborative Filtering Denoising +1

Effective and Stable Role-Based Multi-Agent Collaboration by Structural Information Principles

1 code implementation • 3 Apr 2023 Xianghua Zeng, Hao Peng, Angsheng Li

Role-based learning is a promising approach to improving the performance of Multi-Agent Reinforcement Learning (MARL).

Multi-agent Reinforcement Learning Starcraft +1

Reinforcement Learning Guided Multi-Objective Exam Paper Generation

1 code implementation • 2 Mar 2023 Yuhu Shang, Xuexiong Luo, Lihong Wang, Hao Peng, Xiankun Zhang, Yimeng Ren, Kun Liang

To reduce instructors' repetitive and complex work, the exam paper generation (EPG) technique has become a salient topic in the intelligent education field; it aims to automatically generate high-quality exam papers according to instructor-specified assessment criteria.

Knowledge Tracing Paper generation +2

A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT

no code implementations • 18 Feb 2023 Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, Hao Peng, JianXin Li, Jia Wu, Ziwei Liu, Pengtao Xie, Caiming Xiong, Jian Pei, Philip S. Yu, Lichao Sun

This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, as well as other data modalities.

Graph Learning Language Modelling +1

A Comprehensive Survey on Automatic Knowledge Graph Construction

no code implementations • 10 Feb 2023 Lingfeng Zhong, Jia Wu, Qian Li, Hao Peng, Xindong Wu

A knowledge graph is built in three steps: knowledge acquisition, knowledge refinement, and knowledge evolution.

graph construction

Specializing Smaller Language Models towards Multi-Step Reasoning

2 code implementations • 30 Jan 2023 Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot

By paying the price of decreased generic ability, we can clearly lift the scaling curve of models smaller than 10B towards a specialized multi-step math reasoning ability.

Model Selection

Unbiased and Efficient Self-Supervised Incremental Contrastive Learning

1 code implementation • 28 Jan 2023 Cheng Ji, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Qingyun Sun, Philip S. Yu

Contrastive Learning (CL) has proven to be a powerful self-supervised approach for a wide range of domains, including computer vision and graph representation learning.

Contrastive Learning Graph Representation Learning +1

Mutual Wasserstein Discrepancy Minimization for Sequential Recommendation

1 code implementation • 28 Jan 2023 Ziwei Fan, Zhiwei Liu, Hao Peng, Philip S. Yu

Wasserstein Discrepancy Measurement builds upon the 2-Wasserstein distance, which is more robust, more efficient in small batch sizes, and able to model the uncertainty of stochastic augmentation processes.

Contrastive Learning Mutual Information Estimation +2
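The 2-Wasserstein distance mentioned above has a closed form for one-dimensional Gaussians, which helps illustrate why it is cheap and stable to compute even in small batches. This is a generic mathematical identity, sketched here for intuition only, not the paper's full discrepancy measurement over stochastic embeddings.

```python
import math

def w2_gaussian_1d(mu1, sigma1, mu2, sigma2):
    """Closed-form 2-Wasserstein distance between two 1-D Gaussians:
    W2^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2.
    """
    return math.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)
```

For example, two Gaussians with means 0 and 3 and standard deviations 1 and 5 are at W2 distance sqrt(9 + 16) = 5; identical Gaussians are at distance 0.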

Parallel Multi-Extended State Observers based ADRC with Application to High-Speed Precision Motion Stage

no code implementations • 18 Jan 2023 Guojie Tang, Wenchao Xue, Hao Peng, Yanlong Zhao, Zhijun Yang

In particular, an algorithm is constructed for calculating the tracking error caused by a single ESO's estimation error.

State of the Art and Potentialities of Graph-level Learning

no code implementations • 14 Jan 2023 Zhenyu Yang, Ge Zhang, Jia Wu, Jian Yang, Quan Z. Sheng, Shan Xue, Chuan Zhou, Charu Aggarwal, Hao Peng, Wenbin Hu, Edwin Hancock, Pietro Liò

Traditional approaches to learning a set of graphs heavily rely on hand-crafted features, such as substructures.

Graph Learning

Self-organization Preserved Graph Structure Learning with Principle of Relevant Information

no code implementations • 30 Dec 2022 Qingyun Sun, JianXin Li, Beining Yang, Xingcheng Fu, Hao Peng, Philip S. Yu

Most Graph Neural Networks follow the message-passing paradigm, assuming the observed structure depicts the ground-truth node relationships.

Graph structure learning

Self-Supervised Continual Graph Learning in Adaptive Riemannian Spaces

no code implementations • 30 Nov 2022 Li Sun, Junda Ye, Hao Peng, Feiyang Wang, Philip S. Yu

On the one hand, existing methods work with the zero-curvature Euclidean space, and largely ignore the fact that curvature varies over the coming graph sequence.

Graph Learning

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

1 code implementation • 14 Nov 2022 Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.

Event Relation Extraction Relation Extraction

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

1 code implementation • 7 Nov 2022 Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz

Our results motivate research on simpler alternatives to input-dependent attention, as well as on methods for better utilization of this mechanism in the Transformer architecture.

Ranking-based Group Identification via Factorized Attention on Social Tripartite Graph

1 code implementation • 2 Nov 2022 Mingdai Yang, Zhiwei Liu, Liangwei Yang, Xiaolong Liu, Chen Wang, Hao Peng, Philip S. Yu

PA layers efficiently learn the relatedness of non-neighbor nodes to improve the information propagation to users.

Sequential Recommendation with Auxiliary Item Relationships via Multi-Relational Transformer

1 code implementation • 24 Oct 2022 Ziwei Fan, Zhiwei Liu, Chen Wang, Peijie Huang, Hao Peng, Philip S. Yu

However, it remains a significant challenge to model auxiliary item relationships in SR. To simultaneously model high-order item-item transitions in sequences and auxiliary item relationships, we propose a Multi-relational Transformer capable of modeling auxiliary item relationships for SR (MT4SR).

Sequential Recommendation

DAGAD: Data Augmentation for Graph Anomaly Detection

1 code implementation • 18 Oct 2022 Fanzhen Liu, Xiaoxiao Ma, Jia Wu, Jian Yang, Shan Xue, Amin Beheshti, Chuan Zhou, Hao Peng, Quan Z. Sheng, Charu C. Aggarwal

To bridge the gaps, this paper devises a novel Data Augmentation-based Graph Anomaly Detection (DAGAD) framework for attributed graphs, equipped with three specially designed modules: 1) an information fusion module employing graph neural network encoders to learn representations, 2) a graph data augmentation module that fertilizes the training set with generated samples, and 3) an imbalance-tailored learning module to discriminate the distributions of the minority (anomalous) and majority (normal) classes.

Data Augmentation Graph Anomaly Detection

Modeling Context With Linear Attention for Scalable Document-Level Translation

1 code implementation • 16 Oct 2022 Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith

Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations.

Document Level Machine Translation Document Translation +3

Transparency Helps Reveal When Language Models Learn Meaning

1 code implementation • 14 Oct 2022 Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text.

Complexity-Based Prompting for Multi-Step Reasoning

no code implementations • 3 Oct 2022 Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot

In this work, we propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning.

Date Understanding GSM8K +1
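Complexity-based prompting selects in-context exemplars with more reasoning steps. A minimal sketch of that selection, under the assumption that complexity is approximated by the number of non-empty lines in each exemplar's rationale (the `select_complex_exemplars` helper and its data layout are illustrative, not the authors' code):

```python
def select_complex_exemplars(examples, k=3):
    """Pick the k exemplars whose chains of thought have the most steps.

    Complexity is approximated by counting non-empty, newline-separated
    reasoning lines in each exemplar's rationale.
    """
    def n_steps(ex):
        return len([ln for ln in ex["rationale"].splitlines() if ln.strip()])
    return sorted(examples, key=n_steps, reverse=True)[:k]
```

The selected exemplars would then be concatenated into the prompt ahead of the test question.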

Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation

1 code implementation • 2 Oct 2022 Yuecen Wei, Xingcheng Fu, Qingyun Sun, Hao Peng, Jia Wu, Jinyan Wang, Xianxian Li

To address this issue, we propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP, which provides a double guarantee on graph features and topology.

Privacy Preserving

Information Extraction and Human-Robot Dialogue towards Real-life Tasks: A Baseline Study with the MobileCS Dataset

1 code implementation • 27 Sep 2022 Hong Liu, Hao Peng, Zhijian Ou, Juanzi Li, Yi Huang, Junlan Feng

Recently, there has emerged a class of task-oriented dialogue (TOD) datasets collected through Wizard-of-Oz simulated games.

Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees

1 code implementation • 4 Sep 2022 Jiaqian Ren, Lei Jiang, Hao Peng, Lingjuan Lyu, Zhiwei Liu, Chaochao Chen, Jia Wu, Xu Bai, Philip S. Yu

Integrating multiple online social networks (OSNs) has important implications for many downstream social mining tasks, such as user preference modelling, recommendation, and link prediction.

Link Prediction Network Embedding +1

A Self-supervised Riemannian GNN with Time Varying Curvature for Temporal Graph Learning

no code implementations • 30 Aug 2022 Li Sun, Junda Ye, Hao Peng, Philip S. Yu

To bridge this gap, we make the first attempt to study the problem of self-supervised temporal graph representation learning in the general Riemannian space, supporting the time-varying curvature to shift among hyperspherical, Euclidean and hyperbolic spaces.

Graph Learning Graph Representation Learning +1

Position-aware Structure Learning for Graph Topology-imbalance by Relieving Under-reaching and Over-squashing

1 code implementation • 17 Aug 2022 Qingyun Sun, JianXin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng Ji, Qian Li, Philip S. Yu

Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes, which significantly damages the performance of GNNs.

Graph Learning Graph structure learning +1

Automating DBSCAN via Deep Reinforcement Learning

2 code implementations • 9 Aug 2022 Ruitong Zhang, Hao Peng, Yingtong Dou, Jia Wu, Qingyun Sun, Jingyi Zhang, Philip S. Yu

DBSCAN is widely used in many scientific and engineering fields because of its simplicity and practicality.

Clustering reinforcement-learning +2

A Challenge on Semi-Supervised and Reinforced Task-Oriented Dialog Systems

1 code implementation • 6 Jul 2022 Zhijian Ou, Junlan Feng, Juanzi Li, Yakun Li, Hong Liu, Hao Peng, Yi Huang, Jiangjiang Zhao

A challenge on Semi-Supervised and Reinforced Task-Oriented Dialog Systems, co-located with the SereTOD Workshop at EMNLP 2022.

BOND: Benchmarking Unsupervised Outlier Node Detection on Static Attributed Graphs

2 code implementations • 21 Jun 2022 Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, Lichao Sun, Jundong Li, George H. Chen, Zhihao Jia, Philip S. Yu

To bridge this gap, we present, to the best of our knowledge, the first comprehensive benchmark for unsupervised outlier node detection on static attributed graphs, called BOND, with the following highlights.

Anomaly Detection Benchmarking +2

Graph-level Neural Networks: Current Progress and Future Directions

no code implementations • 31 May 2022 Ge Zhang, Jia Wu, Jian Yang, Shan Xue, Wenbin Hu, Chuan Zhou, Hao Peng, Quan Z. Sheng, Charu Aggarwal

To frame this survey, we propose a systematic taxonomy covering GLNNs upon deep neural networks, graph neural networks, and graph pooling.

Evidential Temporal-aware Graph-based Social Event Detection via Dempster-Shafer Theory

no code implementations • 24 May 2022 Jiaqian Ren, Lei Jiang, Hao Peng, Zhiwei Liu, Jia Wu, Philip S. Yu

To incorporate temporal information into the message passing scheme, we introduce a novel temporal-aware aggregator which assigns weights to neighbours according to an adaptive time exponential decay formula.

Event Detection
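The temporal-aware aggregator described above weights neighbours by an exponential decay in elapsed time. The sketch below illustrates only that core idea; the paper's formula is adaptive, whereas the `temporal_aggregate` helper and its fixed `decay` rate here are hypothetical simplifications.

```python
import math

def temporal_aggregate(neighbor_feats, time_deltas, decay=0.5):
    """Aggregate neighbor feature vectors with exponential time-decay weights.

    Each neighbor i gets weight exp(-decay * time_deltas[i]); the weights are
    normalized to sum to 1, so older interactions contribute less.
    """
    weights = [math.exp(-decay * dt) for dt in time_deltas]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(neighbor_feats[0])
    # Weighted sum over neighbors, per feature dimension.
    return [sum(w * f[i] for w, f in zip(weights, neighbor_feats))
            for i in range(dim)]
```

With equal time deltas the result is a plain mean; as a neighbor's delta grows, its contribution decays towards zero.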

Twist Decoding: Diverse Generators Guide Each Other

1 code implementation • 19 May 2022 Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, Noah A. Smith

Our extensive evaluations on machine translation and scientific paper summarization demonstrate that Twist decoding substantially outperforms each model decoded in isolation over various scenarios, including cases where domain-specific and general-purpose models are both available.

Machine Translation Text Generation +1

Deep reinforcement learning guided graph neural networks for brain network analysis

no code implementations • 18 Mar 2022 Xusheng Zhao, Jia Wu, Hao Peng, Amin Beheshti, Jessica J. M. Monaghan, David Mcalpine, Heivet Hernandez-Perez, Mark Dras, Qiong Dai, Yangyang Li, Philip S. Yu, Lifang He

Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), enable us to model the human brain as a brain network or connectome.

reinforcement-learning Reinforcement Learning (RL) +1

Curvature Graph Generative Adversarial Networks

1 code implementation • 3 Mar 2022 JianXin Li, Xingcheng Fu, Qingyun Sun, Cheng Ji, Jiajun Tan, Jia Wu, Hao Peng

In this paper, we propose a novel Curvature Graph Generative Adversarial Networks method, which is the first GAN-based graph representation method in the Riemannian geometric manifold.

Towards Unsupervised Deep Graph Structure Learning

1 code implementation • 17 Jan 2022 Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan

To solve the unsupervised GSL problem, we propose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME) with the aid of self-supervised contrastive learning.

Contrastive Learning Graph structure learning

Sequential Recommendation via Stochastic Self-Attention

1 code implementation • 16 Jan 2022 Ziwei Fan, Zhiwei Liu, Alice Wang, Zahra Nazari, Lei Zheng, Hao Peng, Philip S. Yu

We further argue that BPR loss has no constraint on positive and sampled negative items, which misleads the optimization.

Sequential Recommendation

Graph Structure Learning with Variational Information Bottleneck

1 code implementation • 16 Dec 2021 Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu

Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications.

Graph structure learning

A Self-supervised Mixed-curvature Graph Neural Network

no code implementations • 10 Dec 2021 Li Sun, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, Philip S. Yu

Instead of working on one single constant-curvature space, we construct a mixed-curvature space via the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms for learning and fusing the representations across these component spaces.

Contrastive Learning Graph Representation Learning

POLLA: Enhancing the Local Structure Awareness in Long Sequence Spatial-temporal Modeling

1 code implementation • TIST 2021 • Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li

Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.

Pre-training Recommender Systems via Reinforced Attentive Multi-relational Graph Neural Network

no code implementations • 28 Nov 2021 Xiaohan Li, Zhiwei Liu, Stephen Guo, Zheng Liu, Hao Peng, Philip S. Yu, Kannan Achan

In this paper, we propose a novel Reinforced Attentive Multi-relational Graph Neural Network (RAM-GNN) to the pre-train user and item embeddings on the user and item graph prior to the recommendation step.

Recommendation Systems

Federated Social Recommendation with Graph Neural Network

no code implementations • 21 Nov 2021 Zhiwei Liu, Liangwei Yang, Ziwei Fan, Hao Peng, Philip S. Yu

However, they all require centralized storage of the social links and item interactions of users, which leads to privacy concerns.

Federated Learning Recommendation Systems

Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming

no code implementations • 20 Nov 2021 Yizhen Zheng, Ming Jin, Shirui Pan, Yuan-Fang Li, Hao Peng, Ming Li, Zhao Li

To overcome the aforementioned problems, we introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming, namely G-Zoom, to learn node representations by leveraging the proposed adjusted zooming scheme.

Contrastive Learning Graph Representation Learning +1

ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network

1 code implementation • 15 Oct 2021 Xingcheng Fu, JianXin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu

Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and thus are more effective at capturing the hierarchical structures of graphs in node representation learning.

Graph Learning Multi-agent Reinforcement Learning +1

3D Object Detection Combining Semantic and Geometric Features from Point Clouds

no code implementations • 10 Oct 2021 Hao Peng, Guofeng Tong, Zheng Li, Yaqi Wang, Yuyuan Shao

The SGNet proposed in this paper achieves state-of-the-art results for 3D object detection on the KITTI dataset, especially in detecting small objects such as cyclists.

3D Object Detection object-detection

Event Extraction by Associating Event Types and Argument Roles

no code implementations • 23 Aug 2021 Qian Li, Shu Guo, Jia Wu, JianXin Li, Jiawei Sheng, Lihong Wang, Xiaohan Dong, Hao Peng

It ignores meaningful associations among event types and argument roles, leading to relatively poor performance for less frequent types/roles.

Event Extraction Graph Attention +1

Transferring Knowledge Distillation for Multilingual Social Event Detection

1 code implementation • 6 Aug 2021 Jiaqian Ren, Hao Peng, Lei Jiang, Jia Wu, Yongxin Tong, Lihong Wang, Xu Bai, Bo Wang, Qiang Yang

Experiments on both synthetic and real-world datasets show the framework to be highly effective at detection in both multilingual data and in languages where training samples are scarce.

Cross-Lingual Word Embeddings Event Detection +2

Multiplex Graph Networks for Multimodal Brain Network Analysis

1 code implementation • 31 Jul 2021 Zhaoming Kong, Lichao Sun, Hao Peng, Liang Zhan, Yong Chen, Lifang He

In this paper, we propose MGNet, a simple and effective multiplex graph convolutional network (GCN) model for multimodal brain network analysis.

A Survey on Deep Learning Event Extraction: Approaches and Applications

no code implementations • 5 Jul 2021 Qian Li, JianXin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, Philip S. Yu

Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for a comprehensive and updated survey.

Event Extraction

Spatiotemporal information conversion machine for time-series prediction

1 code implementation • 3 Jul 2021 Hao Peng, Pei Chen, Rui Liu, Luonan Chen

Making robust predictions based only on the observed data of a nonlinear system is a difficult task.

Causal Inference Time Series +1

Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations

1 code implementation • 23 Jun 2021 Qian Li, Hao Peng, JianXin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, Zheng Wang

Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually.

Event Extraction Incremental Learning +2

Noised Consistency Training for Text Summarization

no code implementations • 28 May 2021 Junnan Liu, Qianren Mao, Bang Liu, Hao Peng, Hongdong Zhu, JianXin Li

In this paper, we argue that this limitation can be overcome by a semi-supervised approach: consistency training which is to leverage large amounts of unlabeled data to improve the performance of supervised learning over a small corpus.

Abstractive Text Summarization

A Robust and Generalized Framework for Adversarial Graph Embedding

1 code implementation • 22 May 2021 JianXin Li, Xingcheng Fu, Hao Peng, Senzhang Wang, Shijie Zhu, Qingyun Sun, Philip S. Yu, Lifang He

With the prevalence of graph data in real-world applications, many methods have been proposed in recent years to learn high-quality graph embedding vectors for various types of graphs.

Graph Embedding Graph Mining +3

Differentially Private Federated Knowledge Graphs Embedding

1 code implementation • 17 May 2021 Hao Peng, Haoran Li, Yangqiu Song, Vincent Zheng, JianXin Li

However, for multiple cross-domain knowledge graphs, state-of-the-art embedding models cannot make full use of the data from different knowledge domains while preserving the privacy of exchanged data.

Knowledge Graph Embedding Knowledge Graphs +3

Federated Multi-View Learning for Private Medical Data Integration and Analysis

no code implementations 4 May 2021 Sicong Che, Hao Peng, Lichao Sun, Yong Chen, Lifang He

This paper aims to provide a generic Federated Multi-View Learning (FedMV) framework for multi-view data leakage prevention, which is based on different types of local data availability and can accommodate two types of problems: Vertical Federated Multi-View Learning (V-FedMV) and Horizontal Federated Multi-View Learning (H-FedMV).

Data Integration Federated Learning +2

Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks

1 code implementation 16 Apr 2021 JianXin Li, Hao Peng, Yuwei Cao, Yingtong Dou, Hekai Zhang, Philip S. Yu, Lifang He

Furthermore, they cannot fully capture the content-based correlations between nodes, as they either do not use the self-attention mechanism or only use it to consider the immediate neighbors of each node, ignoring the higher-order neighbors.

Clustering Node Classification +2

Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks

1 code implementation 16 Apr 2021 Hao Peng, Ruitong Zhang, Yingtong Dou, Renyu Yang, Jingyi Zhang, Philip S. Yu

To avoid the embedding over-assimilation among different types of nodes, we employ a label-aware neural similarity measure to ascertain the most similar neighbors based on node attributes.

Fraud Detection Navigate +2

HTCInfoMax: A Global Model for Hierarchical Text Classification via Information Maximization

1 code implementation NAACL 2021 Zhongfen Deng, Hao Peng, Dongxiao He, JianXin Li, Philip S. Yu

The second one encourages the structure encoder to learn better representations with desired characteristics for all labels which can better handle label imbalance in hierarchical text classification.

General Classification Representation Learning +2

Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs

no code implementations 6 Apr 2021 Li Sun, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, Philip S. Yu

To model the uncertainty, we devise a hyperbolic graph variational autoencoder built upon the proposed TGNN to generate stochastic node representations of hyperbolic normal distributions.

Streaming Social Event Detection and Evolution Discovery in Heterogeneous Information Networks

1 code implementation 2 Apr 2021 Hao Peng, JianXin Li, Yangqiu Song, Renyu Yang, Rajiv Ranjan, Philip S. Yu, Lifang He

Third, we propose a streaming social event detection and evolution discovery framework for HINs based on meta-path similarity search, historical information about meta-paths, and heterogeneous DBSCAN clustering method.

Clustering Event Detection

Finetuning Pretrained Transformers into RNNs

1 code implementation EMNLP 2021 Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith

Specifically, we propose a swap-then-finetune procedure: in an off-the-shelf pretrained transformer, we replace the softmax attention with its linear-complexity recurrent alternative and then finetune.

Language Modelling Machine Translation +1

Random Feature Attention

no code implementations ICLR 2021 Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong

RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism.

Language Modelling Machine Translation +3
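A rough sketch of the idea behind random feature attention (illustrative only; the variable names and the unit-norm assumption on queries and keys are ours, not the paper's exact formulation): trigonometric random features approximate the softmax kernel, so keys and values can be summarized in fixed-size statistics and attention computed in linear time.

```python
import numpy as np

def random_feature_map(x, W):
    # Trigonometric random features: phi(x) . phi(y) is an unbiased estimate
    # of the Gaussian kernel exp(-||x - y||^2 / 2) (Bochner's theorem).
    proj = x @ W.T
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(W.shape[0])

def rfa_attention(Q, K, V, num_features=256, seed=0):
    """Random-feature attention: O(n) in sequence length instead of O(n^2)."""
    # With unit-norm queries and keys, the Gaussian kernel is proportional to
    # the softmax kernel exp(q . k); the constants cancel in the ratio below.
    Q = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    K = K / np.linalg.norm(K, axis=-1, keepdims=True)
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_features, Q.shape[-1]))
    phi_q = random_feature_map(Q, W)   # (n, 2m)
    phi_k = random_feature_map(K, W)   # (n_k, 2m)
    S = phi_k.T @ V                    # sum_j phi(k_j) v_j^T, computed once
    z = phi_k.sum(axis=0)              # sum_j phi(k_j)
    return (phi_q @ S) / (phi_q @ z)[:, None]
```

Because the keys and values collapse into the fixed-size statistics `S` and `z`, the computation can be rewritten as a recurrence over time steps, which is what makes a drop-in, RNN-like replacement for softmax attention possible.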

FedMood: Federated Learning on Mobile Health Data for Mood Detection

1 code implementation 6 Feb 2021 Xiaohang Xu, Hao Peng, Lichao Sun, Md Zakirul Alam Bhuiyan, Lianzhong Liu, Lifang He

Depression is one of the most common mental illnesses, and the symptoms patients show are not consistent, making it difficult to diagnose in clinical practice and pathological research.

BIG-bench Machine Learning Depression Detection +3

Knowledge-Preserving Incremental Social Event Detection via Heterogeneous GNNs

2 code implementations 21 Jan 2021 Yuwei Cao, Hao Peng, Jia Wu, Yingtong Dou, JianXin Li, Philip S. Yu

The complexity and streaming nature of social messages make it appealing to address social event detection in an incremental learning setting, where acquiring, preserving, and extending knowledge are major concerns.

Event Detection Feature Engineering +4

Heterogeneous Similarity Graph Neural Network on Electronic Health Records

no code implementations 17 Jan 2021 Zheng Liu, Xiaohan Li, Hao Peng, Lifang He, Philip S. Yu

EHRs contain multiple entities and relations and can be viewed as a heterogeneous graph.

Infusing Finetuning with Semantic Dependencies

1 code implementation 10 Dec 2020 Zhaofeng Wu, Hao Peng, Noah A. Smith

For natural language processing systems, two kinds of evidence support the use of text representations from neural language models "pretrained" on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia).

Natural Language Understanding

Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation

1 code implementation COLING 2020 Zhongfen Deng, Hao Peng, Congying Xia, JianXin Li, Lifang He, Philip S. Yu

Review rating prediction of text reviews is a rapidly growing technology with a wide range of applications in natural language processing.

Decision Making

Learning from Context or Names? An Empirical Study on Neural Relation Extraction

1 code implementation EMNLP 2020 Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, Jie zhou

We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks.

Memorization Relation Extraction

Kalman Filtering Attention for User Behavior Modeling in CTR Prediction

no code implementations NeurIPS 2020 Hu Liu, Jing Lu, Xiwei Zhao, Sulong Xu, Hao Peng, Yutong Liu, Zehua Zhang, Jian Li, Junsheng Jin, Yongjun Bao, Weipeng Yan

First, conventional attention mechanisms mostly limit the attention field to a single user's behaviors, which is not suitable in e-commerce, where users often hunt for new demands that are irrelevant to any historical behaviors.

Click-Through Rate Prediction

KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning

1 code implementation 26 Sep 2020 Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu

To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge graph augmented pre-trained language generation model KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output.

Graph Attention Text Generation

Contextualized Perturbation for Textual Adversarial Attack

1 code implementation NAACL 2021 Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan

Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.

Adversarial Attack Language Modelling

Pairwise Learning for Name Disambiguation in Large-Scale Heterogeneous Academic Networks

no code implementations 30 Aug 2020 Qingyun Sun, Hao Peng, Jian-Xin Li, Senzhang Wang, Xiangyu Dong, Liangxuan Zhao, Philip S. Yu, Lifang He

Although these attributes may change, an author's co-authors and research topics do not change frequently with time, which means that papers within a period have similar text and relation information in the academic network.

Graph Embedding

Lifelong Property Price Prediction: A Case Study for the Toronto Real Estate Market

1 code implementation 12 Aug 2020 Hao Peng, Jian-Xin Li, Zheng Wang, Renyu Yang, Mingzhe Liu, Mingming Zhang, Philip S. Yu, Lifang He

As a departure from prior work, Luce organizes the house data in a heterogeneous information network (HIN) where graph nodes are house entities and attributes that are important for house price valuation.

Adversarial Directed Graph Embedding

1 code implementation 9 Aug 2020 Shijie Zhu, JianXin Li, Hao Peng, Senzhang Wang, Lifang He

To capture the directed edges between nodes, existing methods mostly learn two embedding vectors for each node: a source vector and a target vector.

Graph Embedding Graph Mining +1

A Mixture of h - 1 Heads is Better than h Heads

no code implementations ACL 2020 Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith

Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.

Language Modelling Machine Translation +1

Attentional Graph Convolutional Networks for Knowledge Concept Recommendation in MOOCs in a Heterogeneous View

2 code implementations 23 Jun 2020 Shen Wang, Jibing Gong, Jinlong Wang, Wenzheng Feng, Hao Peng, Jie Tang, Philip S. Yu

To address this issue, we leverage both content information and context information to learn the representation of entities via graph convolution network.

Representation Learning

Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation

2 code implementations ICLR 2021 Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith

We show that the speed disadvantage for autoregressive baselines compared to non-autoregressive methods has been overestimated in three aspects: suboptimal layer allocation, insufficient speed measurement, and lack of knowledge distillation.

Knowledge Distillation Machine Translation +1

Category-Specific CNN for Visual-aware CTR Prediction at

no code implementations 18 Jun 2020 Hu Liu, Jing Lu, Hao Yang, Xiwei Zhao, Sulong Xu, Hao Peng, Zehua Zhang, Wenjie Niu, Xiaokun Zhu, Yongjun Bao, Weipeng Yan

Existing algorithms usually extract visual features using off-the-shelf Convolutional Neural Networks (CNNs) and late fuse the visual and non-visual features for the finally predicted CTR.

Click-Through Rate Prediction

Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College

1 code implementation 10 Jun 2020 Chen Li, Xutan Peng, Hao Peng, Jian-Xin Li, Lihong Wang, Philip S. Yu, Lifang He

Recently, graph-based algorithms have drawn much attention because of their impressive success in semi-supervised setups.

Graph Attention Graph Generation

Multi-step-ahead Prediction from Short-term Data by Delay-embedding-based Forecast Machine

1 code implementation 16 May 2020 Hao Peng, Pei Chen, Rui Liu

Making accurate multi-step-ahead prediction for a complex system is a challenge for many practical applications, especially when only short-term time-series data are available.

Time Series Time Series Analysis
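The delay-embedding idea behind this line of work can be illustrated with a minimal sketch (a generic Takens-style embedding with a nearest-neighbor one-step predictor; an assumption-laden toy, not the paper's forecast machine):

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Stack lagged copies of a scalar series into delay vectors (Takens embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def forecast(x, dim=3, steps=5):
    """Multi-step-ahead forecast by iterating a one-step nearest-neighbor
    predictor learned in the delay-embedded state space."""
    E = delay_embed(x, dim)
    X, y = E[:-1], x[dim:]               # delay vector -> next observed value
    state = list(x[-dim:])
    preds = []
    for _ in range(steps):
        q = np.array(state[-dim:])
        j = np.argmin(np.linalg.norm(X - q, axis=1))  # nearest past state
        preds.append(y[j])
        state.append(y[j])               # feed the prediction back in
    return np.array(preds)
```

The key point the abstract makes is that the delay embedding reconstructs the system's state from short-term scalar observations, so even a simple predictor in the embedded space can produce multi-step forecasts.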

A Mixture of $h-1$ Heads is Better than $h$ Heads

no code implementations 13 May 2020 Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith

Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.

Language Modelling Machine Translation +1

Alleviating the Inconsistency Problem of Applying Graph Neural Network to Fraud Detection

1 code implementation 1 May 2020 Zhiwei Liu, Yingtong Dou, Philip S. Yu, Yutong Deng, Hao Peng

In this paper, we introduce these inconsistencies and design a new GNN framework, $\mathsf{GraphConsis}$, to tackle the inconsistency problem: (1) for the context inconsistency, we propose to combine the context embeddings with node features, (2) for the feature inconsistency, we design a consistency score to filter the inconsistent neighbors and generate corresponding sampling probability, and (3) for the relation inconsistency, we learn relation attention weights associated with the sampled nodes.

Fraud Detection
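Step (2) above can be sketched minimally as follows (the score function, threshold, and weighting are illustrative assumptions of ours, not GraphConsis's exact design):

```python
import numpy as np

def filter_and_aggregate(h_v, neighbor_h, eps=0.1):
    """Consistency-aware neighbor aggregation: score each neighbor by embedding
    similarity, drop inconsistent ones, and weight the rest by their scores."""
    # Consistency score: close embeddings -> score near 1, distant -> near 0.
    scores = np.exp(-np.sum((neighbor_h - h_v) ** 2, axis=1))
    keep = scores >= eps                       # filter inconsistent neighbors
    if not keep.any():
        return h_v                             # no consistent neighbor: keep self
    probs = scores[keep] / scores[keep].sum()  # sampling / attention weights
    return probs @ neighbor_h[keep]
```

The effect is that camouflaged fraudsters connected to many dissimilar neighbors contribute little or nothing to the aggregated representation.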

Face Beautification: Beyond Makeup Transfer

1 code implementation 8 Dec 2019 Xudong Liu, Ruizhe Wang, Chih-Fan Chen, Minglei Yin, Hao Peng, Shukhan Ng, Xin Li

Inspired by the latest advances in style-based synthesis and face beauty prediction, we propose a novel framework of face beautification.

Digital Twin: Acquiring High-Fidelity 3D Avatar from a Single Image

no code implementations 7 Dec 2019 Ruizhe Wang, Chih-Fan Chen, Hao Peng, Xudong Liu, Oliver Liu, Xin Li

We present an approach to generate high fidelity 3D face avatar with a high-resolution UV texture map from a single image.

Face Model Vocal Bursts Intensity Prediction

RWNE: A Scalable Random-Walk-Based Network Embedding Framework with Personalized Higher-Order Proximity Preserved

1 code implementation 18 Nov 2019 JianXin Li, Cheng Ji, Hao Peng, Yu He, Yangqiu Song, Xinmiao Zhang, Fanzhang Peng

However, despite the success of current random-walk-based methods, most are not expressive enough to preserve personalized higher-order proximity, and they lack a straightforward objective that theoretically articulates what network proximity is preserved and how.

Network Embedding

Hierarchical Taxonomy-Aware and Attentional Graph Capsule RCNNs for Large-Scale Multi-Label Text Classification

1 code implementation 9 Jun 2019 Hao Peng, Jian-Xin Li, Qiran Gong, Senzhang Wang, Lifang He, Bo Li, Lihong Wang, Philip S. Yu

In this paper, we propose a novel hierarchical taxonomy-aware and attentional graph capsule recurrent CNNs framework for large-scale multi-label text classification.

General Classification Multi Label Text Classification +3

Dynamic Network Embedding via Incremental Skip-gram with Negative Sampling

1 code implementation 9 Jun 2019 Hao Peng, Jian-Xin Li, Hao Yan, Qiran Gong, Senzhang Wang, Lin Liu, Lihong Wang, Xiang Ren

Most existing methods focus on learning the structural representations of vertices in a static network, but cannot guarantee an accurate and efficient embedding in a dynamic network scenario.

Link Prediction Multi-Label Classification +1

Fine-grained Event Categorization with Heterogeneous Graph Convolutional Networks

1 code implementation 9 Jun 2019 Hao Peng, Jian-Xin Li, Qiran Gong, Yangqiu Song, Yuanxing Ning, Kunfeng Lai, Philip S. Yu

In this paper, we design an event meta-schema to characterize the semantic relatedness of social events, build an event-based heterogeneous information network (HIN) integrating information from an external knowledge base, and propose a novel Pair-wise Popularity Graph Convolutional Network (PP-GCN) based fine-grained social event categorization model.

Clustering Event Detection

Understanding Beauty via Deep Facial Features

no code implementations 30 Jan 2019 Xudong Liu, Tao Li, Hao Peng, Iris Chuoying Ouyang, Taehwan Kim, Ruizhe Wang

The concept of beauty has been debated by philosophers and psychologists for centuries, but most definitions are subjective and metaphysical, and deficient in accuracy, generality, and scalability.

Graph Convolutional Neural Networks via Motif-based Attention

no code implementations 11 Nov 2018 Hao Peng, Jian-Xin Li, Qiran Gong, Senzhang Wang, Yuanxing Ning, Philip S. Yu

Different from previous convolutional neural networks on graphs, we first design a motif-matching guided subgraph normalization method to capture neighborhood information.

General Classification Graph Classification

Modeling relation paths for knowledge base completion via joint adversarial training

1 code implementation 14 Oct 2018 Chen Li, Xutan Peng, Shanghang Zhang, Hao Peng, Philip S. Yu, Min He, Linfeng Du, Lihong Wang

By treating relations and multi-hop paths as two different input sources, we use a feature extractor, which is shared by two downstream components (i.e., relation classifier and source discriminator), to capture shared/similar information between them.

Knowledge Base Completion

Rational Recurrences

1 code implementation EMNLP 2018 Hao Peng, Roy Schwartz, Sam Thomson, Noah A. Smith

We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs.

Language Modelling text-classification +1
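A minimal sketch of this connection (with illustrative weights, not taken from the paper): the forward computation of a weighted finite-state automaton is itself a linear recurrent state update, and a two-state WFSA already realizes a gated update of the form s_t = lam * s_{t-1} + u.

```python
import numpy as np

def wfsa_forward(transitions, start, final):
    """Forward algorithm of a weighted finite-state automaton (WFSA), written
    as a recurrent hidden-state update: alpha_t = alpha_{t-1} @ T(x_t)."""
    alpha = np.asarray(start, dtype=float)
    for T in transitions:          # one transition matrix per input symbol
        alpha = alpha @ T          # the recurrence: a linear "RNN" cell
    return float(alpha @ np.asarray(final, dtype=float))

# A 2-state WFSA whose path weights implement the scalar recurrence
# s_t = lam * s_{t-1} + u (hypothetical weights for illustration):
lam, u = 0.5, 1.0
T = np.array([[1.0, u],
              [0.0, lam]])
score = wfsa_forward([T, T, T], start=[1.0, 0.0], final=[0.0, 1.0])
# score == u * (1 + lam + lam**2) == 1.75
```

Rational recurrences generalize this observation: any recurrent update expressible as the forward pass of a finite set of WFSAs falls in the class, which is what lets the paper analyze gated RNN variants with automata-theoretic tools.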

Backpropagating through Structured Argmax using a SPIGOT

1 code implementation ACL 2018 Hao Peng, Sam Thomson, Noah A. Smith

We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers.

Dependency Parsing Semantic Dependency Parsing +2

Learning Joint Semantic Parsers from Disjoint Data

2 code implementations NAACL 2018 Hao Peng, Sam Thomson, Swabha Swayamdipta, Noah A. Smith

We present a new approach to learning semantic parsers from multiple datasets, even when the target semantic formalisms are drastically different, and the underlying corpora do not overlap.

Dependency Parsing Semantic Dependency Parsing

"You are no Jack Kennedy": On Media Selection of Highlights from Presidential Debates

no code implementations 23 Feb 2018 Chenhao Tan, Hao Peng, Noah A. Smith

We first examine the effect of wording and propose a binary classification framework that controls for both the speaker and the debate situation.

Binary Classification

Improving Orbit Prediction Accuracy through Supervised Machine Learning

no code implementations 15 Jan 2018 Hao Peng, Xiaoli Bai

Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are grounded solely on physics-based models may fail to achieve the accuracy required for collision avoidance, and have already led to satellite collisions.

BIG-bench Machine Learning Collision Avoidance +1

Semi-supervised Structured Prediction with Neural CRF Autoencoder

1 code implementation EMNLP 2017 Xiao Zhang, Yong Jiang, Hao Peng, Kewei Tu, Dan Goldwasser

In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems.

Part-Of-Speech Tagging POS +1

Deep Multitask Learning for Semantic Dependency Parsing

1 code implementation ACL 2017 Hao Peng, Sam Thomson, Noah A. Smith

We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms.

Dependency Parsing Semantic Dependency Parsing

A Convolutional Attention Network for Extreme Summarization of Source Code

5 code implementations 9 Feb 2016 Miltiadis Allamanis, Hao Peng, Charles Sutton

Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension.

Descriptive Extreme Summarization +1