1 code implementation • 19 May 2022 • Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, Noah A. Smith
Natural language generation technology has recently seen remarkable progress with large-scale training, and many natural language applications are now built upon a wide range of generation models.
1 code implementation • 3 May 2022 • Yuwei Cao, William Groves, Tanay Kumar Saha, Joel R. Tetreault, Alex Jaimes, Hao Peng, Philip S. Yu
To date, work in this area has mostly focused on English as there is a scarcity of labeled data for other languages.
1 code implementation • 26 Apr 2022 • Kay Liu, Yingtong Dou, Yue Zhao, Xueying Ding, Xiyang Hu, Ruitong Zhang, Kaize Ding, Canyu Chen, Hao Peng, Kai Shu, George H. Chen, Zhihao Jia, Philip S. Yu
PyGOD is an open-source Python library for detecting outliers on graph data.
no code implementations • 18 Mar 2022 • Xusheng Zhao, Jia Wu, Hao Peng, Amin Beheshti, Jessica Monaghan, David McAlpine, Heivet Hernandez-Perez, Mark Dras, Qiong Dai, Yangyang Li, Philip S. Yu, Lifang He
Modern neuroimaging techniques, such as diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI), enable us to model the human brain as a brain network or connectome.
1 code implementation • 3 Mar 2022 • JianXin Li, Xingcheng Fu, Qingyun Sun, Cheng Ji, Jiajun Tan, Jia Wu, Hao Peng
In this paper, we propose a novel Curvature Graph Generative Adversarial Network method, the first GAN-based graph representation method in the Riemannian geometric manifold.
1 code implementation • 17 Jan 2022 • Yixin Liu, Yu Zheng, Daokun Zhang, Hongxu Chen, Hao Peng, Shirui Pan
To solve the unsupervised GSL problem, we propose a novel StrUcture Bootstrapping contrastive LearnIng fraMEwork (SUBLIME) with the aid of self-supervised contrastive learning.
1 code implementation • 16 Jan 2022 • Ziwei Fan, Zhiwei Liu, Alice Wang, Zahra Nazari, Lei Zheng, Hao Peng, Philip S. Yu
We further argue that BPR loss has no constraint on positive and sampled negative items, which misleads the optimization.
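The missing constraint can be seen directly from the textbook BPR loss (shown below as a minimal sketch; this is the standard definition, not the remedy the paper proposes): the loss depends only on the score *difference*, so arbitrarily shifted score pairs are indistinguishable to the optimizer.

```python
import math

def bpr_loss(pos_score, neg_score):
    """Standard BPR loss: -log sigmoid(pos - neg).

    It constrains only the difference pos - neg, not the absolute
    magnitude of either score -- the gap the sentence above points out.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(pos_score - neg_score))))

# Two very different score pairs yield the exact same loss,
# because only the difference matters:
a = bpr_loss(2.0, 1.0)
b = bpr_loss(102.0, 101.0)
```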
no code implementations • 21 Dec 2021 • Hao Peng, Hang Li, Lei Hou, Juanzi Li, Chao Qiao
We also develop a dataset for the problem using an existing MKB.
1 code implementation • 16 Dec 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Xingcheng Fu, Cheng Ji, Philip S. Yu
Graph Neural Networks (GNNs) have shown promising results on a broad spectrum of applications.
no code implementations • 10 Dec 2021 • Li Sun, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, Philip S. Yu
Instead of working on one single constant-curvature space, we construct a mixed-curvature space via the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms for learning and fusing the representations across these component spaces.
1 code implementation • TIST 2021 • Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li
Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.
no code implementations • 28 Nov 2021 • Xiaohan Li, Zhiwei Liu, Stephen Guo, Zheng Liu, Hao Peng, Philip S. Yu, Kannan Achan
In this paper, we propose a novel Reinforced Attentive Multi-relational Graph Neural Network (RAM-GNN) to the pre-train user and item embeddings on the user and item graph prior to the recommendation step.
no code implementations • 21 Nov 2021 • Jun Yu, Zhaoming Kong, Aditya Kendre, Hao Peng, Carl Yang, Lichao Sun, Alex Leow, Lifang He
This paper presents a novel graph-based kernel learning approach for connectome analysis.
no code implementations • 21 Nov 2021 • Zhiwei Liu, Liangwei Yang, Ziwei Fan, Hao Peng, Philip S. Yu
However, they all require centralized storage of the social links and item interactions of users, which leads to privacy concerns.
no code implementations • 20 Nov 2021 • Yizhen Zheng, Ming Jin, Shirui Pan, Yuan-Fang Li, Hao Peng, Ming Li, Zhao Li
To overcome the aforementioned problems, building on recent advances in graph contrastive learning, we introduce G-Zoom, a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming, which learns node representations by leveraging the proposed adjusted zooming scheme.
1 code implementation • 15 Oct 2021 • Xingcheng Fu, JianXin Li, Jia Wu, Qingyun Sun, Cheng Ji, Senzhang Wang, Jiajun Tan, Hao Peng, Philip S. Yu
Hyperbolic Graph Neural Networks (HGNNs) extend GNNs to hyperbolic space and are thus more effective at capturing the hierarchical structures of graphs in node representation learning.
no code implementations • 10 Oct 2021 • Hao Peng, Guofeng Tong, Zheng Li, Yaqi Wang, Yuyuan Shao
The SGNet proposed in this paper achieves state-of-the-art results for 3D object detection on the KITTI dataset, especially in the detection of small objects such as cyclists.
no code implementations • ACL 2022 • Hao Peng, Jungo Kasai, Nikolaos Pappas, Dani Yogatama, Zhaofeng Wu, Lingpeng Kong, Roy Schwartz, Noah A. Smith
One way to improve the efficiency is to bound the memory size.
no code implementations • 2 Sep 2021 • Hongyin Zhu, Hao Peng, Zhiheng Lyu, Lei Hou, Juanzi Li, Jinghui Xiao
To obtain the aforementioned multi-format text, we construct a corpus in the tourism domain and conduct experiments on 5 tourism NLP datasets.
no code implementations • 23 Aug 2021 • Qian Li, Shu Guo, Jia Wu, JianXin Li, Jiawei Sheng, Lihong Wang, Xiaohan Dong, Hao Peng
It ignores meaningful associations among event types and argument roles, leading to relatively poor performance for less frequent types/roles.
1 code implementation • 6 Aug 2021 • Jiaqian Ren, Hao Peng, Lei Jiang, Jia Wu, Yongxin Tong, Lihong Wang, Xu Bai, Bo Wang, Qiang Yang
Experiments on both synthetic and real-world datasets show the framework to be highly effective at detection, both on multilingual data and in languages where training samples are scarce.
1 code implementation • 31 Jul 2021 • Zhaoming Kong, Lichao Sun, Hao Peng, Liang Zhan, Yong Chen, Lifang He
In this paper, we propose MGNet, a simple and effective multiplex graph convolutional network (GCN) model for multimodal brain network analysis.
1 code implementation • ACL 2022 • Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner
We craft a set of operations to modify the control codes, which in turn steer generation towards targeted attributes.
no code implementations • 5 Jul 2021 • Qian Li, JianXin Li, Jiawei Sheng, Shiyao Cui, Jia Wu, Yiming Hei, Hao Peng, Shu Guo, Lihong Wang, Amin Beheshti, Philip S. Yu
Numerous methods, datasets, and evaluation metrics have been proposed in the literature, raising the need for a comprehensive and updated survey.
no code implementations • 3 Jul 2021 • Hao Peng, Pei Chen, Rui Liu, Luonan Chen
Making predictions in a robust way is not easy for nonlinear systems.
1 code implementation • 23 Jun 2021 • Qian Li, Hao Peng, JianXin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, Zheng Wang
Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually.
1 code implementation • 6 Jun 2021 • Qianren Mao, Xi Li, Hao Peng, Bang Liu, Shu Guo, JianXin Li, Lihong Wang, Philip S. Yu
The code and datasets are available at https://github.com/OpenSUM/HashtagGen.
no code implementations • 29 May 2021 • Xi Li, Qianren Mao, Hao Peng, Hongdong Zhu, JianXin Li, Zheng Wang
This paper presents a better TLS approach for automatically and dynamically determining the TLS timeline length.
no code implementations • 28 May 2021 • Junnan Liu, Qianren Mao, Bang Liu, Hao Peng, Hongdong Zhu, JianXin Li
In this paper, we argue that this limitation can be overcome by a semi-supervised approach: consistency training, which leverages large amounts of unlabeled data to improve the performance of supervised learning over a small corpus.
1 code implementation • 22 May 2021 • JianXin Li, Xingcheng Fu, Hao Peng, Senzhang Wang, Shijie Zhu, Qingyun Sun, Philip S. Yu, Lifang He
With the prevalence of graph data in real-world applications, many methods have been proposed in recent years to learn high-quality graph embedding vectors for various types of graphs.
1 code implementation • 17 May 2021 • Hao Peng, Haoran Li, Yangqiu Song, Vincent Zheng, JianXin Li
However, for multiple cross-domain knowledge graphs, state-of-the-art embedding models cannot make full use of the data from different knowledge domains while preserving the privacy of exchanged data.
1 code implementation • 7 May 2021 • Gongxu Luo, JianXin Li, Jianlin Su, Hao Peng, Carl Yang, Lichao Sun, Philip S. Yu, Lifang He
Based on them, we design MinGE to directly calculate the ideal node embedding dimension for any graph.
no code implementations • 4 May 2021 • Sicong Che, Hao Peng, Lichao Sun, Yong Chen, Lifang He
This paper aims to provide a generic Federated Multi-View Learning (FedMV) framework for multi-view data leakage prevention, which is based on different types of local data availability and can accommodate two types of problems: Vertical Federated Multi-View Learning (V-FedMV) and Horizontal Federated Multi-View Learning (H-FedMV).
1 code implementation • 16 Apr 2021 • JianXin Li, Hao Peng, Yuwei Cao, Yingtong Dou, Hekai Zhang, Philip S. Yu, Lifang He
Furthermore, they cannot fully capture the content-based correlations between nodes, as they either do not use the self-attention mechanism or only use it to consider the immediate neighbors of each node, ignoring the higher-order neighbors.
1 code implementation • 16 Apr 2021 • Hao Peng, Ruitong Zhang, Yingtong Dou, Renyu Yang, Jingyi Zhang, Philip S. Yu
To avoid the embedding over-assimilation among different types of nodes, we employ a label-aware neural similarity measure to ascertain the most similar neighbors based on node attributes.
Ranked #2 on Node Classification on Amazon-Fraud
1 code implementation • NAACL 2021 • Zhongfen Deng, Hao Peng, Dongxiao He, JianXin Li, Philip S. Yu
The second one encourages the structure encoder to learn better representations with desired characteristics for all labels, which can better handle label imbalance in hierarchical text classification.
no code implementations • 6 Apr 2021 • Li Sun, Zhongbao Zhang, Jiawei Zhang, Feiyang Wang, Hao Peng, Sen Su, Philip S. Yu
To model the uncertainty, we devise a hyperbolic graph variational autoencoder built upon the proposed TGNN to generate stochastic node representations of hyperbolic normal distributions.
1 code implementation • 2 Apr 2021 • Hao Peng, JianXin Li, Yangqiu Song, Renyu Yang, Rajiv Ranjan, Philip S. Yu, Lifang He
Third, we propose a streaming social event detection and evolution discovery framework for HINs based on meta-path similarity search, historical information about meta-paths, and heterogeneous DBSCAN clustering method.
1 code implementation • EMNLP 2021 • Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith
Specifically, we propose a swap-then-finetune procedure: in an off-the-shelf pretrained transformer, we replace the softmax attention with its linear-complexity recurrent alternative and then finetune.
Ranked #1 on Machine Translation on WMT2017 Chinese-English
no code implementations • 16 Mar 2021 • Yiying Yang, Xi Yin, Haiqin Yang, Xingjian Fei, Hao Peng, Kaijie Zhou, Kunfeng Lai, Jianping Shen
Entity synonym discovery is crucial for entity-leveraging applications.
no code implementations • ICLR 2021 • Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A. Smith, Lingpeng Kong
RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism.
Ranked #22 on Machine Translation on IWSLT2014 German-English
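The random-feature idea behind RFA can be illustrated with classic random Fourier features for the Gaussian kernel; this is a generic sketch, not the paper's implementation, and the dimensions `d` and `D` are arbitrary choices here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 4, 4096  # input dim, number of random features (illustrative)

# Random Fourier features approximate the Gaussian kernel:
#   exp(-||x - y||^2 / 2)  ~=  phi(x) @ phi(y)   for w_i ~ N(0, I)
W = rng.standard_normal((D, d))

def phi(x):
    proj = W @ x
    return np.concatenate([np.sin(proj), np.cos(proj)]) / np.sqrt(D)

x = rng.standard_normal(d) * 0.5
y = rng.standard_normal(d) * 0.5
exact = float(np.exp(-np.linalg.norm(x - y) ** 2 / 2))
approx = float(phi(x) @ phi(y))
```

Because the kernel factorizes into a dot product of feature maps, attention over a sequence can be computed with running sums of the featurized keys and values, rather than materializing the full quadratic attention matrix.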
1 code implementation • 6 Feb 2021 • Xiaohang Xu, Hao Peng, Lichao Sun, Md Zakirul Alam Bhuiyan, Lianzhong Liu, Lifang He
Depression is one of the most common mental illnesses, and the symptoms patients show are inconsistent, making it difficult to diagnose in clinical practice and pathological research.
2 code implementations • 21 Jan 2021 • Yuwei Cao, Hao Peng, Jia Wu, Yingtong Dou, JianXin Li, Philip S. Yu
The complexity and streaming nature of social messages make it appealing to address social event detection in an incremental learning setting, where acquiring, preserving, and extending knowledge are major concerns.
1 code implementation • 20 Jan 2021 • Qingyun Sun, JianXin Li, Hao Peng, Jia Wu, Yuanxing Ning, Philip S. Yu, Lifang He
Graph representation learning has attracted increasing research attention.
no code implementations • 17 Jan 2021 • Zheng Liu, Xiaohan Li, Hao Peng, Lifang He, Philip S. Yu
EHRs contain multiple entities and relations and can be viewed as a heterogeneous graph.
1 code implementation • 10 Dec 2020 • Zhaofeng Wu, Hao Peng, Noah A. Smith
For natural language processing systems, two kinds of evidence support the use of text representations from neural language models "pretrained" on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia).
1 code implementation • COLING 2020 • Zhongfen Deng, Hao Peng, Congying Xia, JianXin Li, Lifang He, Philip S. Yu
Review rating prediction of text reviews is a rapidly growing technology with a wide range of applications in natural language processing.
1 code implementation • EMNLP 2020 • Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, Jie Zhou
We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks.
Ranked #17 on Relation Extraction on TACRED
no code implementations • NeurIPS 2020 • Hu Liu, Jing Lu, Xiwei Zhao, Sulong Xu, Hao Peng, Yutong Liu, Zehua Zhang, Jian Li, Junsheng Jin, Yongjun Bao, Weipeng Yan
First, conventional attention mechanisms mostly limit the attention field to a single user's behaviors, which is unsuitable in e-commerce, where users often hunt for new demands that are irrelevant to any historical behaviors.
1 code implementation • 26 Sep 2020 • Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu
To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge graph augmented pre-trained language generation model KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output.
1 code implementation • NAACL 2021 • Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan
Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.
no code implementations • 30 Aug 2020 • Qingyun Sun, Hao Peng, Jian-Xin Li, Senzhang Wang, Xiangyu Dong, Liangxuan Zhao, Philip S. Yu, Lifang He
Although these attributes may change, an author's co-authors and research topics do not change frequently with time, which means that papers within a period have similar text and relation information in the academic network.
3 code implementations • 19 Aug 2020 • Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, Philip S. Yu
Finally, the selected neighbors across different relations are aggregated together.
Ranked #4 on Node Classification on Amazon-Fraud
no code implementations • 12 Aug 2020 • Hao Peng, Jian-Xin Li, Zheng Wang, Renyu Yang, Mingzhe Liu, Mingming Zhang, Philip S. Yu, Lifang He
As a departure from prior work, Luce organizes the house data in a heterogeneous information network (HIN) where graph nodes are house entities and attributes that are important for house price valuation.
1 code implementation • 9 Aug 2020 • Shijie Zhu, JianXin Li, Hao Peng, Senzhang Wang, Lifang He
To capture the directed edges between nodes, existing methods mostly learn two embedding vectors for each node: a source vector and a target vector.
2 code implementations • 2 Aug 2020 • Qian Li, Hao Peng, Jian-Xin Li, Congying Xia, Renyu Yang, Lichao Sun, Philip S. Yu, Lifang He
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
no code implementations • ACL 2020 • Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith
Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.
2 code implementations • 23 Jun 2020 • Shen Wang, Jibing Gong, Jinlong Wang, Wenzheng Feng, Hao Peng, Jie Tang, Philip S. Yu
To address this issue, we leverage both content information and context information to learn the representation of entities via graph convolution network.
2 code implementations • ICLR 2021 • Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, Noah A. Smith
We show that the speed disadvantage for autoregressive baselines compared to non-autoregressive methods has been overestimated in three aspects: suboptimal layer allocation, insufficient speed measurement, and lack of knowledge distillation.
no code implementations • 18 Jun 2020 • Hu Liu, Jing Lu, Hao Yang, Xiwei Zhao, Sulong Xu, Hao Peng, Zehua Zhang, Wenjie Niu, Xiaokun Zhu, Yongjun Bao, Weipeng Yan
Existing algorithms usually extract visual features using off-the-shelf Convolutional Neural Networks (CNNs) and late-fuse the visual and non-visual features for the final CTR prediction.
1 code implementation • 10 Jun 2020 • Chen Li, Xutan Peng, Hao Peng, Jian-Xin Li, Lihong Wang, Philip S. Yu, Lifang He
Recently, graph-based algorithms have drawn much attention because of their impressive success in semi-supervised setups.
1 code implementation • 16 May 2020 • Hao Peng, Pei Chen, Rui Liu
Making accurate multi-step-ahead prediction for a complex system is a challenge for many practical applications, especially when only short-term time-series data are available.
no code implementations • 13 May 2020 • Hao Peng, Roy Schwartz, Dianqi Li, Noah A. Smith
Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks.
1 code implementation • 1 May 2020 • Zhiwei Liu, Yingtong Dou, Philip S. Yu, Yutong Deng, Hao Peng
In this paper, we introduce these inconsistencies and design a new GNN framework, GraphConsis, to tackle the inconsistency problem: (1) for the context inconsistency, we propose to combine the context embeddings with node features; (2) for the feature inconsistency, we design a consistency score to filter the inconsistent neighbors and generate corresponding sampling probabilities; and (3) for the relation inconsistency, we learn relation attention weights associated with the sampled nodes.
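The feature-inconsistency step described above — score neighbors, filter, then renormalize into sampling probabilities — can be sketched as follows. The RBF similarity used as the consistency score and the threshold `eps` are assumptions for illustration; the paper's exact score function may differ.

```python
import numpy as np

def sample_probs(node, neighbors, eps=0.1):
    """Sketch of consistency-based neighbor sampling:
    score each neighbor's consistency with the centre node (here an
    assumed RBF similarity), zero out neighbors below a threshold,
    and renormalise the rest into sampling probabilities."""
    scores = np.exp(-np.sum((neighbors - node) ** 2, axis=1))
    scores = np.where(scores < eps, 0.0, scores)  # filter inconsistent neighbors
    return scores / scores.sum()                  # sampling probabilities

node = np.zeros(2)
neighbors = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
p = sample_probs(node, neighbors)  # third neighbor is filtered out
```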
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Xu Han, Tianyu Gao, Yankai Lin, Hao Peng, Yaoliang Yang, Chaojun Xiao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Relational facts are an important component of human knowledge, which are hidden in vast amounts of text.
1 code implementation • 8 Dec 2019 • Xudong Liu, Ruizhe Wang, Chih-Fan Chen, Minglei Yin, Hao Peng, Shukhan Ng, Xin Li
Inspired by the latest advances in style-based synthesis and face beauty prediction, we propose a novel framework of face beautification.
no code implementations • 7 Dec 2019 • Ruizhe Wang, Chih-Fan Chen, Hao Peng, Xudong Liu, Oliver Liu, Xin Li
We present an approach to generate high fidelity 3D face avatar with a high-resolution UV texture map from a single image.
1 code implementation • 18 Nov 2019 • JianXin Li, Cheng Ji, Hao Peng, Yu He, Yangqiu Song, Xinmiao Zhang, Fanzhang Peng
However, despite the success of current random-walk-based methods, most of them are usually not expressive enough to preserve the personalized higher-order proximity and lack a straightforward objective to theoretically articulate what and how network proximity is preserved.
no code implementations • 7 Sep 2019 • Yu He, Yangqiu Song, Jian-Xin Li, Cheng Ji, Jian Peng, Hao Peng
Heterogeneous information network (HIN) embedding has gained increasing interest recently.
1 code implementation • IJCNLP 2019 • Jesse Dodge, Roy Schwartz, Hao Peng, Noah A. Smith
Our method also highlights the interpretable properties of rational RNNs.
1 code implementation • IJCNLP 2019 • Hao Peng, Roy Schwartz, Noah A. Smith
We present PaLM, a hybrid parser and neural language model.
1 code implementation • 9 Jun 2019 • Hao Peng, Jian-Xin Li, Qiran Gong, Senzhang Wang, Lifang He, Bo Li, Lihong Wang, Philip S. Yu
In this paper, we propose a novel hierarchical taxonomy-aware and attentional graph capsule recurrent CNNs framework for large-scale multi-label text classification.
1 code implementation • 9 Jun 2019 • Hao Peng, Jian-Xin Li, Qiran Gong, Yangqiu Song, Yuanxing Ning, Kunfeng Lai, Philip S. Yu
In this paper, we design an event meta-schema to characterize the semantic relatedness of social events and build an event-based heterogeneous information network (HIN) that integrates information from an external knowledge base. We then propose a novel Pair-wise Popularity Graph Convolutional Network (PP-GCN) based fine-grained social event categorization model.
1 code implementation • 9 Jun 2019 • Hao Peng, Jian-Xin Li, Hao Yan, Qiran Gong, Senzhang Wang, Lin Liu, Lihong Wang, Xiang Ren
Most existing methods focus on learning the structural representations of vertices in a static network, but cannot guarantee an accurate and efficient embedding in a dynamic network scenario.
no code implementations • NAACL 2019 • Hao Peng, Ankur P. Parikh, Manaal Faruqui, Bhuwan Dhingra, Dipanjan Das
We propose a novel conditioned text generation model.
no code implementations • 30 Jan 2019 • Xudong Liu, Tao Li, Hao Peng, Iris Chuoying Ouyang, Taehwan Kim, Ruizhe Wang
The concept of beauty has been debated by philosophers and psychologists for centuries, but most definitions are subjective and metaphysical, and deficient in accuracy, generality, and scalability.
no code implementations • 11 Nov 2018 • Hao Peng, Jian-Xin Li, Qiran Gong, Senzhang Wang, Yuanxing Ning, Philip S. Yu
Different from previous convolutional neural networks on graphs, we first design a motif-matching guided subgraph normalization method to capture neighborhood information.
1 code implementation • 14 Oct 2018 • Chen Li, Xutan Peng, Shanghang Zhang, Hao Peng, Philip S. Yu, Min He, Linfeng Du, Lihong Wang
By treating relations and multi-hop paths as two different input sources, we use a feature extractor, which is shared by two downstream components (i.e., a relation classifier and a source discriminator), to capture shared/similar information between them.
1 code implementation • EMNLP 2018 • Hao Peng, Roy Schwartz, Sam Thomson, Noah A. Smith
We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs.
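The Forward recurrence the definition refers to can be sketched for a hypothetical two-state WFSA; the transition weights below are illustrative only. Structurally it is a linear RNN whose "weight matrix" at each step is selected by the input symbol.

```python
import numpy as np

# Transition matrices per input symbol (2 states, 2 symbols; illustrative).
A = {
    0: np.array([[0.5, 0.5], [0.0, 1.0]]),
    1: np.array([[1.0, 0.0], [0.0, 2.0]]),
}
start = np.array([1.0, 0.0])

def wfsa_forward(transition, start, tokens):
    # Forward recurrence of a WFSA: h_t = h_{t-1} @ A[x_t].
    # Each entry of h accumulates the total weight of paths
    # reaching that state after consuming the prefix so far.
    h = start.copy()
    for tok in tokens:
        h = h @ transition[tok]
    return h

h = wfsa_forward(A, start, [0, 1])
```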
no code implementations • 16 Jun 2018 • Chao Yang, Taehwan Kim, Ruizhe Wang, Hao Peng, C. -C. Jay Kuo
It has been applied to numerous domains, such as data augmentation, domain adaptation, and unsupervised training.
1 code implementation • ACL 2018 • Hao Peng, Sam Thomson, Noah A. Smith
We introduce the structured projection of intermediate gradients optimization technique (SPIGOT), a new method for backpropagating through neural networks that include hard-decision structured predictions (e.g., parsing) in intermediate layers.
2 code implementations • NAACL 2018 • Hao Peng, Sam Thomson, Swabha Swayamdipta, Noah A. Smith
We present a new approach to learning semantic parsers from multiple datasets, even when the target semantic formalisms are drastically different, and the underlying corpora do not overlap.
no code implementations • 23 Feb 2018 • Chenhao Tan, Hao Peng, Noah A. Smith
We first examine the effect of wording and propose a binary classification framework that controls for both the speaker and the debate situation.
no code implementations • 15 Jan 2018 • Hao Peng, Xiaoli Bai
Due to the lack of information such as space environment conditions and resident space objects' (RSOs') body characteristics, current orbit predictions grounded solely on physics-based models may fail to achieve the accuracy required for collision avoidance, and have already led to satellite collisions.
1 code implementation • EMNLP 2017 • Xiao Zhang, Yong Jiang, Hao Peng, Kewei Tu, Dan Goldwasser
In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems.
no code implementations • ICML 2017 • Hao Peng, Shandian Zhe, Yuan Qi
Gaussian processes (GPs) are powerful non-parametric function estimators.
1 code implementation • ACL 2017 • Hao Peng, Sam Thomson, Noah A. Smith
We present a deep neural architecture that parses sentences into three semantic dependency graph formalisms.
5 code implementations • 9 Feb 2016 • Miltiadis Allamanis, Hao Peng, Charles Sutton
Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension.
no code implementations • EMNLP 2015 • Hao Peng, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin
This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP.
no code implementations • 15 Aug 2015 • Xu Yan, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin
Relation classification is an important research arena in the field of natural language processing (NLP).
Ranked #4 on Relation Classification on SemEval 2010 Task 8
no code implementations • EMNLP 2015 • Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
This paper proposes a tree-based convolutional neural network (TBCNN) for discriminative sentence modeling.
Ranked #6 on Text Classification on TREC-6
1 code implementation • 11 Sep 2014 • Lili Mou, Ge Li, Yuxuan Liu, Hao Peng, Zhi Jin, Yan Xu, Lu Zhang
In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis.
1 code implementation • 2 Jan 2014 • Hao Peng, Yuan Qi
In this paper, we propose a new Bayesian approach, EigenGP, that learns both basis dictionary elements--eigenfunctions of a GP prior--and prior precisions in a sparse finite model.