Search Results for author: Zhiguo Wang

Found 59 papers, 23 papers with code

H2KGAT: Hierarchical Hyperbolic Knowledge Graph Attention Network

no code implementations EMNLP 2020 Shen Wang, Xiaokai Wei, Cicero Nogueira dos Santos, Zhiguo Wang, Ramesh Nallapati, Andrew Arnold, Bing Xiang, Philip S. Yu

Existing knowledge graph embedding approaches concentrate on modeling symmetry/asymmetry, inversion, and composition relation patterns, but overlook the hierarchical nature of relations.

Graph Attention Knowledge Graph Embedding +2

REKnow: Enhanced Knowledge for Joint Entity and Relation Extraction

no code implementations 10 Jun 2022 Sheng Zhang, Patrick Ng, Zhiguo Wang, Bing Xiang

Our generative model is a unified framework to sequentially generate relational triplets under various relation extraction settings and explicitly utilizes relevant knowledge from Knowledge Graph (KG) to resolve ambiguities.

Joint Entity and Relation Extraction

An Unbiased Symmetric Matrix Estimator for Topology Inference under Partial Observability

no code implementations 29 Mar 2022 Yupeng Chen, Zhiguo Wang, Xiaojing Shen

Network topology inference is a fundamental problem in many applications of network science, such as locating the source of fake news, brain connectivity networks detection, etc.

Towards Understanding the Impact of Model Size on Differential Private Classification

no code implementations 27 Nov 2021 Yinchen Shen, Zhiguo Wang, Ruoyu Sun, Xiaojing Shen

Then we propose a feature selection method to reduce the size of the model, based on a new metric which trades off the classification accuracy and privacy preserving.

feature selection Privacy Preserving

Federated Semi-Supervised Learning with Class Distribution Mismatch

no code implementations 29 Oct 2021 Zhiguo Wang, Xintong Wang, Ruoyu Sun, Tsung-Hui Chang

Similar to what is encountered in federated supervised learning, the class distribution of labeled/unlabeled data could be non-i.i.d.

Federated Learning

Larger Model Causes Lower Classification Accuracy Under Differential Privacy: Reason and Solution

no code implementations 29 Sep 2021 Yinchen Shen, Zhiguo Wang, Ruoyu Sun, Xiaojing Shen

Differential privacy (DP) is an essential technique for privacy preservation, which works by adding random noise to the data.

Privacy Preserving
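The noise-addition step described in the snippet above can be sketched with the Gaussian mechanism as applied to a clipped gradient in DP-SGD-style training; `privatize_gradient`, `clip_norm`, and `noise_multiplier` are illustrative names and defaults, not details from the paper:

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a per-example gradient to bound its sensitivity, then add
    Gaussian noise scaled to that bound (Gaussian mechanism)."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale down so the contribution's norm never exceeds clip_norm.
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Noise standard deviation is proportional to the sensitivity bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

Because the noise scale is tied to `clip_norm` rather than to the model size, larger models accumulate more total noise across their parameters, which is one intuition behind the size/accuracy trade-off studied above.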

Dual Reader-Parser on Hybrid Textual and Tabular Evidence for Open Domain Question Answering

1 code implementation ACL 2021 Alexander Hanbo Li, Patrick Ng, Peng Xu, Henghui Zhu, Zhiguo Wang, Bing Xiang

However, a large amount of the world's knowledge is stored in structured databases and needs to be accessed using query languages such as SQL.

Open-Domain Question Answering

End-to-End Cross-Domain Text-to-SQL Semantic Parsing with Auxiliary Task

no code implementations 17 Jun 2021 Peng Shi, Tao Yu, Patrick Ng, Zhiguo Wang

Furthermore, we propose two value-filling methods to bridge the gap between existing zero-shot semantic parsers and real-world applications, considering that most existing parsers ignore value filling in the synthesized SQL.

Semantic Parsing Text-To-Sql

Retrieval, Re-ranking and Multi-task Learning for Knowledge-Base Question Answering

no code implementations EACL 2021 Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang

Experiments show that: (1) our IR-based retrieval method is able to collect high-quality candidates efficiently, thus enabling our method to adapt easily to large-scale KBs; (2) the BERT model improves the accuracy across all three sub-tasks; and (3) benefiting from multi-task learning, the unified model obtains further improvements with only 1/3 of the original parameters.

Entity Linking Information Retrieval +3

Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training

3 code implementations 18 Dec 2020 Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, Bing Xiang

Most recently, there has been significant interest in learning contextual representations for various NLP tasks, by leveraging large scale text corpora to train large neural language models with self-supervised learning objectives, such as Masked Language Model (MLM).

Language Modelling Self-Supervised Learning +2

Distributed Stochastic Consensus Optimization with Momentum for Nonconvex Nonsmooth Problems

no code implementations 10 Nov 2020 Zhiguo Wang, Jiawei Zhang, Tsung-Hui Chang, Jian Li, Zhi-Quan Luo

While many distributed optimization algorithms have been proposed for solving smooth or convex problems over the networks, few of them can handle non-convex and non-smooth problems.

Distributed Optimization

Optimally Combining Classifiers for Semi-Supervised Learning

1 code implementation 7 Jun 2020 Zhiguo Wang, Liusha Yang, Feng Yin, Ke Lin, Qingjiang Shi, Zhi-Quan Luo

In this paper, we find that these two methods have complementary properties and greater diversity, which motivates us to propose a new semi-supervised learning method that adaptively combines the strengths of XGBoost and the transductive support vector machine.

Master-Auxiliary: an efficient aggregation strategy for video anomaly detection

no code implementations 24 May 2020 Zhiguo Wang, Zhongliang Yang, Yu-Jin Zhang

First, the aggregation strategy chooses one detector as the master detector based on experience, and sets the remaining detectors as auxiliary detectors.

Anomaly Detection

Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering

1 code implementation ACL 2020 Alexander R. Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, Bing Xiang

Training a QA model on this data gives a relative improvement over a previous unsupervised model in F1 score on the SQuAD dataset by about 14%, and 20% when the answer is a named entity, achieving state-of-the-art performance on SQuAD for unsupervised QA.

Language Modelling Question Answering +1

Triplet Online Instance Matching Loss for Person Re-identification

no code implementations 24 Feb 2020 Ye Li, Guangqiang Yin, Chunhui Liu, Xiaoyu Yang, Zhiguo Wang

Triplet loss requires complicated and fussy batch construction and converges slowly.

Person Re-Identification

Who did They Respond to? Conversation Structure Modeling using Masked Hierarchical Transformer

1 code implementation 25 Nov 2019 Henghui Zhu, Feng Nan, Zhiguo Wang, Ramesh Nallapati, Bing Xiang

In this work, we define the problem of conversation structure modeling as identifying the parent utterance(s) to which each utterance in the conversation responds.

A Promotion Method for Generation Error Based Video Anomaly Detection

no code implementations 19 Nov 2019 Zhiguo Wang, Zhongliang Yang, Yu-Jin Zhang

To address these problems, we propose a promotion method: utilizing the maximum of the block-level GEs on a frame to detect anomalies.

Anomaly Detection
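The frame-scoring rule in the snippet above can be sketched as follows; the block size and the use of MSE as the block-level generation error (GE) are assumptions for illustration, not details confirmed by the listing:

```python
import numpy as np

def frame_anomaly_score(frame, generated, block=8):
    """Score a frame by the MAXIMUM block-level generation error (GE),
    so a small anomalous region is not averaged away by normal background."""
    h, w = frame.shape
    best = 0.0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            diff = frame[r:r + block, c:c + block] - generated[r:r + block, c:c + block]
            best = max(best, float(np.mean(diff ** 2)))  # block-level MSE
    return best
```

Taking the maximum rather than the frame-level mean keeps a single anomalous block from being diluted by the many normal blocks around it.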

Domain Adaptation with BERT-based Domain Classification and Data Selection

no code implementations WS 2019 Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, Bing Xiang

The performance of deep neural models can deteriorate substantially when there is a domain shift between training and test data.

Classification Domain Adaptation +1

Universal Text Representation from BERT: An Empirical Study

no code implementations 17 Oct 2019 Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang

We present a systematic investigation of layer-wise BERT activations for general-purpose text representations to understand what linguistic information they capture and how transferable they are across different tasks.

Learning-To-Rank Natural Language Inference +3

Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering

no code implementations IJCNLP 2019 Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, Bing Xiang

To tackle this issue, we propose a multi-passage BERT model to globally normalize answer scores across all passages of the same question, and this change enables our QA model to find better answers by utilizing more passages.

Open-Domain Question Answering
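The global normalization described in the snippet above amounts to a single softmax over candidate answer spans pooled from every passage, instead of one softmax per passage; `global_softmax` is an illustrative stand-in, not the paper's implementation:

```python
import numpy as np

def global_softmax(passage_span_scores):
    """One softmax over candidate answer-span scores from ALL passages of a
    question, so span probabilities are comparable across passages."""
    flat = np.concatenate(passage_span_scores)
    exp = np.exp(flat - flat.max())          # numerically stable softmax
    probs = exp / exp.sum()
    # Split the joint distribution back into per-passage slices.
    out, i = [], 0
    for scores in passage_span_scores:
        out.append(probs[i:i + len(scores)])
        i += len(scores)
    return out
```

With per-passage normalization, every passage yields a span with high local probability; after global normalization, a strong span in one passage can dominate weak spans everywhere else.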

Enhancing Key-Value Memory Neural Networks for Knowledge Based Question Answering

no code implementations NAACL 2019 Kun Xu, Yuxuan Lai, Yansong Feng, Zhiguo Wang

However, extending KV-MemNNs to Knowledge Based Question Answering (KB-QA) is not trivial: the model should properly decompose a complex question into a sequence of queries against the memory, and update the query representations to support multi-hop reasoning over the memory.

Question Answering Reading Comprehension +1

Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network

2 code implementations ACL 2019 Kun Xu, Li-Wei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, Dong Yu

Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail at matching entities that have different facts in two KGs.

Entity Embeddings Graph Attention +1

Semantic Neural Machine Translation using AMR

1 code implementation TACL 2019 Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, Jinsong Su

It is intuitive that semantic representations can be useful for machine translation, mainly because they can help in enforcing meaning preservation and handling data sparsity (many sentences correspond to one meaning) of machine translation models.

Machine Translation Translation

N-ary Relation Extraction using Graph-State LSTM

no code implementations EMNLP 2018 Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea

Cross-sentence $n$-ary relation extraction detects relations among $n$ entities across multiple sentences.

Relation Extraction

SQL-to-Text Generation with Graph-to-Sequence Model

1 code implementation EMNLP 2018 Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Vadim Sheinin

Previous work approaches the SQL-to-text generation task using vanilla Seq2Seq models, which may not fully capture the inherent graph-structured information in SQL query.

Graph-to-Sequence SQL-to-Text +1

Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks

no code implementations 6 Sep 2018 Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, Daniel Gildea

Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question.

Multi-Hop Reading Comprehension Question Answering

N-ary Relation Extraction using Graph State LSTM

2 code implementations 28 Aug 2018 Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea

Cross-sentence $n$-ary relation extraction detects relations among $n$ entities across multiple sentences.

Relation Extraction

Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model

1 code implementation EMNLP 2018 Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Li-Wei Chen, Vadim Sheinin

Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word-order features while neglecting other valuable syntactic information such as dependency graphs or constituent trees.

Graph-to-Sequence Semantic Parsing

Leveraging Context Information for Natural Question Generation

1 code implementation NAACL 2018 Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, Daniel Gildea

The task of natural question generation is to generate a corresponding question given the input passage (fact) and answer.

Question Generation

A Graph-to-Sequence Model for AMR-to-Text Generation

1 code implementation ACL 2018 Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea

The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph.

Ranked #1 on Graph-to-Sequence on LDC2015E86 (using extra training data)

AMR-to-Text Generation Graph-to-Sequence +1

Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks

4 code implementations ICLR 2019 Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, Vadim Sheinin

Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.

Graph-to-Sequence SQL-to-Text +1
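The direction-aware aggregation described in the snippet above can be sketched by pooling incoming and outgoing neighborhoods separately and concatenating the results; mean pooling and the function name are simplifications for illustration (the paper uses an improved graph neural network, not this exact aggregator):

```python
import numpy as np

def aggregate_with_direction(node, embeddings, edges):
    """Aggregate forward (outgoing) and backward (incoming) neighborhoods
    separately, then concatenate, so edge direction shapes the embedding."""
    dim = len(next(iter(embeddings.values())))
    def mean_emb(nodes):
        return np.mean([embeddings[n] for n in nodes], axis=0) if nodes else np.zeros(dim)
    successors = [v for (u, v) in edges if u == node]     # node -> v
    predecessors = [u for (u, v) in edges if v == node]   # u -> node
    return np.concatenate([embeddings[node], mean_emb(successors), mean_emb(predecessors)])
```

Keeping the two directional summaries in separate slots (rather than merging all neighbors) is what lets the downstream decoder distinguish, say, the head of an edge from its tail.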

A Unified Query-based Generative Model for Question Generation and Question Answering

no code implementations 4 Sep 2017 Linfeng Song, Zhiguo Wang, Wael Hamza

In the QG task, the system generates a question given the passage and the target answer, whereas in the QA task, it generates the answer given the question and the passage.

Question Answering Question Generation

$k$-Nearest Neighbor Augmented Neural Networks for Text Classification

no code implementations 25 Aug 2017 Zhiguo Wang, Wael Hamza, Linfeng Song

However, it lacks the capacity to utilize instance-level information from individual instances in the training set.

Classification General Classification +2
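One way to inject the instance-level information mentioned above is to interpolate the network's class distribution with a k-nearest-neighbor vote over training-set embeddings; the interpolation weight `alpha` and the function name are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def knn_augmented_probs(query_emb, model_probs, train_embs, train_labels,
                        n_classes, k=3, alpha=0.5):
    """Interpolate a classifier's class probabilities with a k-nearest-neighbor
    vote over training-instance embeddings (instance-level information)."""
    dists = np.linalg.norm(train_embs - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]
    # Empirical class distribution of the k nearest training instances.
    knn_probs = np.bincount(train_labels[nearest], minlength=n_classes) / k
    return alpha * model_probs + (1.0 - alpha) * knn_probs
```

Setting `alpha=1.0` recovers the plain neural classifier, while `alpha=0.0` is a pure kNN classifier over the learned embedding space.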

Multi-Perspective Context Matching for Machine Comprehension

1 code implementation 13 Dec 2016 Zhiguo Wang, Haitao Mi, Wael Hamza, Radu Florian

Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage.

Question Answering Reading Comprehension

Language Independent Dependency to Constituent Tree Conversion

no code implementations COLING 2016 Young-suk Lee, Zhiguo Wang

We present a dependency to constituent tree conversion technique that aims to improve constituent parsing accuracies by leveraging dependency treebanks available in a wide variety in many languages.

AMR-to-text generation as a Traveling Salesman Problem

no code implementations EMNLP 2016 Linfeng Song, Yue Zhang, Xiaochang Peng, Zhiguo Wang, Daniel Gildea

The task of AMR-to-text generation is to generate grammatical text that preserves the semantic meaning of a given AMR graph.

AMR-to-Text Generation Text Generation +2

Supervised Attentions for Neural Machine Translation

no code implementations EMNLP 2016 Haitao Mi, Zhiguo Wang, Abe Ittycheriah

We simply compute the distance between the machine attentions and the "true" alignments, and minimize this cost in the training procedure.

Machine Translation Translation
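The supervision signal described above, a distance between the model's attention weights and "true" alignments, can be sketched as a per-target-position cross-entropy; cross-entropy is one reasonable choice of distance here, assumed for illustration rather than taken from the paper:

```python
import numpy as np

def attention_supervision_loss(attention, gold_alignment, eps=1e-12):
    """Cross-entropy between the model's attention rows and gold alignment
    distributions, averaged over target positions."""
    # Both arrays have shape (target_len, source_len); each row sums to 1.
    return float(-np.mean(np.sum(gold_alignment * np.log(attention + eps), axis=1)))
```

This auxiliary loss would be added to the usual translation loss during training, pulling the attention distribution toward the external word alignments.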

Vocabulary Manipulation for Neural Machine Translation

no code implementations ACL 2016 Haitao Mi, Zhiguo Wang, Abe Ittycheriah

Our method simply takes into account the translation options of each word or phrase in the source sentence, and picks a very small target vocabulary for each sentence based on a word-to-word translation model or a bilingual phrase library learned from a traditional machine translation model.

Machine Translation Translation +1
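The per-sentence vocabulary selection described above can be sketched as follows; the `top_k` cutoff, the base vocabulary of special tokens, and the table format are illustrative assumptions:

```python
def per_sentence_vocab(source_words, translation_table, top_k=2,
                       base_vocab=("<unk>", "<eos>")):
    """Build a small per-sentence target vocabulary from each source word's
    top translation options plus a tiny always-on base vocabulary."""
    vocab = set(base_vocab)
    for word in source_words:
        # translation_table maps a source word to (target, prob) pairs,
        # assumed sorted by probability from a word-to-word model.
        for target, _prob in translation_table.get(word, [])[:top_k]:
            vocab.add(target)
    return vocab
```

Restricting the output softmax to this small set for each sentence is what makes decoding cheaper than normalizing over the full target vocabulary.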

Coverage Embedding Models for Neural Machine Translation

no code implementations EMNLP 2016 Haitao Mi, Baskaran Sankaran, Zhiguo Wang, Abe Ittycheriah

In this paper, we enhance the attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate issues of repeating and dropping translations in NMT.

Machine Translation Translation
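The repetition and dropping issues mentioned above can be illustrated with a simplified scalar coverage vector; note the paper proposes learned coverage *embeddings*, so this additive vector and the penalty terms below are a deliberately simplified stand-in:

```python
import numpy as np

def update_coverage(coverage, attention):
    """Accumulate attention mass per source position after each decoding step."""
    return coverage + attention

def repeat_and_drop_penalties(coverage):
    """Over-covered positions suggest repeated translations; under-covered
    positions suggest dropped ones."""
    repeat = float(np.sum(np.maximum(coverage - 1.0, 0.0)))
    drop = float(np.sum(np.maximum(1.0 - coverage, 0.0)))
    return repeat, drop
```

Tracking how much total attention each source word has received is the core idea; the embedding-based version lets the model learn how coverage should influence future attention instead of hand-coding it.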

Sentence Similarity Learning by Lexical Decomposition and Composition

1 code implementation COLING 2016 Zhiguo Wang, Haitao Mi, Abraham Ittycheriah

Most conventional sentence similarity methods only focus on the similar parts of two input sentences and simply ignore the dissimilar parts, which usually give us clues about the sentences' semantic meanings.

Paraphrase Identification Question Answering +1

Semi-supervised Clustering for Short Text via Deep Representation Learning

no code implementations CONLL 2016 Zhiguo Wang, Haitao Mi, Abraham Ittycheriah

In this work, we propose a semi-supervised method for short text clustering, where we represent texts as distributed vectors with neural networks, and use a small amount of labeled data to specify our intention for clustering.

Representation Learning Short Text Clustering

FAQ-based Question Answering via Word Alignment

no code implementations 9 Jul 2015 Zhiguo Wang, Abraham Ittycheriah

In this paper, we propose a novel word-alignment-based method to solve the FAQ-based question answering task.

Learning-To-Rank Question Answering +2

Using a Dynamic Neural Field Model to Explore a Direct Collicular Inhibition Account of Inhibition of Return

no code implementations 22 Jul 2013 Jason Satel, Ross Story, Matthew D. Hilchey, Zhiguo Wang, Raymond M. Klein

When the interval between a transient flash of light (a "cue") and a second visual response signal (a "target") exceeds at least 200 ms, responding is slowest in the direction indicated by the first signal.

Large-scale Word Alignment Using Soft Dependency Cohesion Constraints

no code implementations TACL 2013 Zhiguo Wang, Cheng-qing Zong

In this paper, we take dependency cohesion as a soft constraint, and integrate it into a generative model for large-scale word alignment experiments.

Machine Translation Translation +1
