Search Results for author: Jing Jiang

Found 100 papers, 40 papers with code

Coupled Hierarchical Transformer for Stance-Aware Rumor Verification in Social Media Conversations

no code implementations EMNLP 2020 Jianfei Yu, Jing Jiang, Ling Min Serena Khoo, Hai Leong Chieu, Rui Xia

The prevalent use of social media enables the rapid spread of rumors on a massive scale, which leads to an emerging need for automatic rumor verification (RV).

Multi-Task Learning Stance Classification

Digital Twins based Day-ahead Integrated Energy System Scheduling under Load and Renewable Energy Uncertainties

no code implementations 29 Sep 2021 Minglei You, Qian Wang, Hongjian Sun, Ivan Castro, Jing Jiang

By constructing digital twins (DT) of an integrated energy system (IES), one can benefit from the DT's predictive capabilities to improve coordination among various energy converters, hence enhancing energy efficiency, cost savings and carbon emission reduction.

NOAHQA: Numerical Reasoning with Interpretable Graph Question Answering Dataset

1 code implementation 22 Sep 2021 Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang Wang, Yang Wang, Jing Jiang, Ee-Peng Lim

While diverse question answering (QA) datasets have been proposed and contributed significantly to the development of deep learning models for QA tasks, the existing datasets fall short in two aspects.

Graph Question Answering Question Answering

Hierarchical Relation-Guided Type-Sentence Alignment for Long-Tail Relation Extraction with Distant Supervision

no code implementations 19 Sep 2021 Yang Li, Guodong Long, Tao Shen, Jing Jiang

It consists of (1) a pairwise type-enriched sentence encoding module injecting both context-free and -related backgrounds to alleviate sentence-level wrong labeling, and (2) a hierarchical type-sentence alignment module enriching a sentence with the triple fact's basic attributes to support long-tail relations.

Knowledge Graphs Relation Extraction +1

Sequential Diagnosis Prediction with Transformer and Ontological Representation

1 code implementation 7 Sep 2021 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang

Sequential diagnosis prediction on the Electronic Health Record (EHR) has been proven crucial for predictive analytics in the medical domain.

Sequential Diagnosis

Federated Learning for Privacy-Preserving Open Innovation Future on Digital Health

no code implementations 24 Aug 2021 Guodong Long, Tao Shen, Yue Tan, Leah Gerrard, Allison Clarke, Jing Jiang

Implementing an open innovation framework in the healthcare industry, namely open health, aims to enhance the innovation and creative capability of health-related organisations by building a next-generation collaborative framework with partner organisations and the research community.

Federated Learning

Federated Learning for Open Banking

no code implementations 24 Aug 2021 Guodong Long, Yue Tan, Jing Jiang, Chengqi Zhang

In the near future, it is foreseeable to have decentralized data ownership in the finance sector using federated learning.

Federated Learning

Multi-Center Federated Learning

1 code implementation 19 Aug 2021 Ming Xie, Guodong Long, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang

By comparison, a mixture of multiple global models could capture the heterogeneity across users if the users are assigned to different global models (i.e., centers) in FL.

Federated Learning

Disentangling Hate in Online Memes

no code implementations 9 Aug 2021 Rui Cao, Ziqing Fan, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang

Our experiment results show that DisMultiHate is able to outperform state-of-the-art unimodal and multimodal baselines in the hateful meme classification task.

Classification Meme Classification

COSY: COunterfactual SYntax for Cross-Lingual Understanding

1 code implementation ACL 2021 Sicheng Yu, Hao Zhang, Yulei Niu, Qianru Sun, Jing Jiang

Pre-trained multilingual language models, e.g., multilingual-BERT, are widely used in cross-lingual tasks, yielding state-of-the-art performance.

Natural Language Inference POS +1

Modeling Transitions of Focal Entities for Conversational Knowledge Base Question Answering

1 code implementation ACL 2021 Yunshi Lan, Jing Jiang

We propose a novel graph-based model to capture the transitions of focal entities and apply a graph neural network to derive a probability distribution of focal entities for each question, which is then combined with a standard KBQA module to perform answer ranking.

Knowledge Base Question Answering

Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation

1 code implementation 9 Jun 2021 Cunxiao Du, Zhaopeng Tu, Jing Jiang

We propose a new training objective named order-agnostic cross entropy (OaXE) for fully non-autoregressive translation (NAT) models.

Machine Translation Translation
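
To make the objective concrete, here is a minimal, illustrative sketch (not the authors' released code): OaXE is treated as the cross entropy under the best one-to-one matching between decoder positions and target tokens, found with SciPy's Hungarian solver. The function and variable names are my own.

```python
import torch
from scipy.optimize import linear_sum_assignment

def order_agnostic_xent(log_probs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """log_probs: (T, V) per-position log-probabilities from a NAT decoder.
    target: (T,) gold token ids.
    Returns the cross entropy under the best one-to-one assignment of
    target tokens to decoder positions."""
    # cost[i, j] = -log P(target token j | decoder position i)
    cost = -log_probs[:, target]                               # (T, T)
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    rows, cols = torch.as_tensor(rows), torch.as_tensor(cols)
    return cost[rows, cols].sum()

# Toy usage: 5 decoder positions, vocabulary of 11 tokens.
logits = torch.randn(5, 11, requires_grad=True)
loss = order_agnostic_xent(torch.log_softmax(logits, dim=-1),
                           torch.tensor([3, 1, 4, 1, 5]))
loss.backward()
```

Detaching the cost matrix keeps the matching itself out of the gradient path, while the selected log-probabilities remain differentiable.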

Investigating Math Word Problems using Pretrained Multilingual Language Models

no code implementations 19 May 2021 Minghuan Tan, Lei Wang, Lingxiao Jiang, Jing Jiang

In this paper, we revisit math word problems (MWPs) from the cross-lingual and multilingual perspective.

Machine Translation Translation

FedProto: Federated Prototype Learning over Heterogeneous Devices

1 code implementation 1 May 2021 Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, Chengqi Zhang

The heterogeneity across devices usually hinders the optimization convergence and generalization performance of federated learning (FL) when the aggregation of devices' knowledge occurs in the gradient space.

Federated Learning
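
To illustrate the prototype-based knowledge exchange hinted at above, here is a hedged sketch in which clients share only class prototypes (mean embeddings) rather than gradients; the function names and the regularizer weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch
from collections import defaultdict

def local_prototypes(embeddings: torch.Tensor, labels: torch.Tensor) -> dict:
    """Client step: mean embedding of each class observed on this client."""
    return {int(c): embeddings[labels == c].mean(dim=0) for c in labels.unique()}

def aggregate_prototypes(client_protos: list) -> dict:
    """Server step: average each class prototype over the clients that report it."""
    sums, counts = defaultdict(float), defaultdict(int)
    for protos in client_protos:
        for c, p in protos.items():
            sums[c] = sums[c] + p
            counts[c] += 1
    return {c: sums[c] / counts[c] for c in sums}

def proto_regularizer(embeddings, labels, global_protos, weight: float = 1.0):
    """Extra local loss term pulling embeddings toward the global prototypes."""
    loss = embeddings.new_zeros(())
    for c in labels.unique():
        c = int(c)
        if c in global_protos:
            loss = loss + (embeddings[labels == c] - global_protos[c]).pow(2).mean()
    return weight * loss
```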

Cross-Topic Rumor Detection using Topic-Mixtures

no code implementations EACL 2021 Xiaoying Ren, Jing Jiang, Ling Min Serena Khoo, Hai Leong Chieu

After deriving a vector representation for each topic, given an instance, we derive a "topic mixture" vector for the instance based on its topic distribution.

A Low-Complexity ADMM-based Massive MIMO Detectors via Deep Neural Networks

no code implementations 27 Feb 2021 Isayiyas Nigatu Tiba, Quan Zhang, Jing Jiang, Yongchao Wang

Alternating direction method of multipliers (ADMM)-based detectors can achieve good performance in both small- and large-scale multiple-input multiple-output (MIMO) systems.

Isometric Propagation Network for Generalized Zero-shot Learning

no code implementations ICLR 2021 Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang

To resolve this problem, we propose Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces.

Generalized Zero-Shot Learning

Episodic memory governs choices: An RNN-based reinforcement learning model for decision-making task

no code implementations 24 Jan 2021 Xiaohan Zhang, Lu Liu, Guodong Long, Jing Jiang, Shenquan Liu

Typical methods for studying cognitive function record the electrical activity of animal neurons while the animals are trained to perform behavioral tasks.

Decision Making Hippocampus

PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack

no code implementations 19 Jan 2021 Jie Wang, Zhaoxia Yin, Jin Tang, Jing Jiang, Bin Luo

The studies on black-box adversarial attacks have become increasingly prevalent due to the intractable acquisition of the structural knowledge of deep neural networks (DNNs).

Adversarial Attack

Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization

no code implementations 19 Jan 2021 Jie Wang, Zhaoxia Yin, Jing Jiang, Yang Du

In this paper, we propose an attention-guided black-box adversarial attack based on the large-scale multiobjective evolutionary optimization, termed as LMOA.

Adversarial Attack

Improving Multi-hop Knowledge Base Question Answering by Learning Intermediate Supervision Signals

1 code implementation 11 Jan 2021 Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, Ji-Rong Wen

In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network.

Knowledge Base Question Answering Semantic Parsing

MASP: Model-Agnostic Sample Propagation for Few-shot learning

no code implementations 1 Jan 2021 Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang

Few-shot learning aims to train a classifier given only a few samples per class that are highly insufficient to describe the whole data distribution.

Few-Shot Learning

Extract Local Inference Chains of Deep Neural Nets

no code implementations 1 Jan 2021 Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

In this paper, we introduce an efficient method to extract the local inference chains by optimizing a differentiable sparse scoring for the filters and layers to preserve the outputs on given data from a local region.

Interpretable Machine Learning Network Pruning

SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning

no code implementations 2 Dec 2020 Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long

We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.

Learning with noisy labels

Confusable Learning for Large-class Few-Shot Classification

no code implementations 6 Nov 2020 Bingcong Li, Bo Han, Zhuowei Wang, Jing Jiang, Guodong Long

Specifically, our method maintains a dynamically updating confusion matrix, which analyzes confusable classes in the dataset.

Classification Few-Shot Image Classification +2
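
A minimal sketch of the dynamically updated confusion matrix described above, using an exponential moving average and a helper that returns the classes most confused with a given one; the class and method names are mine, and the momentum value is an arbitrary choice.

```python
import torch

class ConfusionTracker:
    """Running confusion matrix used to spot which classes the model currently
    confuses, so that later episodes can focus on those classes."""
    def __init__(self, num_classes: int, momentum: float = 0.9):
        self.m = momentum
        self.matrix = torch.zeros(num_classes, num_classes)

    def update(self, labels: torch.Tensor, preds: torch.Tensor) -> None:
        batch = torch.zeros_like(self.matrix)
        for t, p in zip(labels.tolist(), preds.tolist()):
            batch[t, p] += 1
        self.matrix = self.m * self.matrix + (1 - self.m) * batch

    def most_confusable(self, cls: int, k: int = 5) -> list:
        """Classes most often predicted when the true class is `cls` (excluding itself)."""
        row = self.matrix[cls].clone()
        row[cls] = -1
        return row.topk(k).indices.tolist()
```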

A BERT-based Dual Embedding Model for Chinese Idiom Prediction

1 code implementation COLING 2020 Minghuan Tan, Jing Jiang

Specifically, we first match the embedding of each candidate idiom with the hidden representation corresponding to the blank in the context.

Cloze Test
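
The matching step mentioned above can be sketched as a dot product between each candidate idiom's embedding and the hidden state at the blank position; this shows only that matching step, not the full dual-embedding model, and the argument names are assumptions.

```python
import torch

def score_candidate_idioms(hidden_states: torch.Tensor,
                           blank_index: int,
                           candidate_embeddings: torch.Tensor) -> torch.Tensor:
    """hidden_states: (seq_len, hidden) contextual states for the passage.
    blank_index: position of the blank token in the passage.
    candidate_embeddings: (num_candidates, hidden), one embedding per idiom.
    Returns one matching score per candidate idiom."""
    blank_state = hidden_states[blank_index]          # (hidden,)
    return candidate_embeddings @ blank_state         # (num_candidates,)
```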

Cooperative Heterogeneous Deep Reinforcement Learning

no code implementations NeurIPS 2020 Han Zheng, Pengfei Wei, Jing Jiang, Guodong Long, Qinghua Lu, Chengqi Zhang

Numerous deep reinforcement learning agents have been proposed, and each of them has its strengths and flaws.

Continuous Control

MESA: Boost Ensemble Imbalanced Learning with MEta-SAmpler

2 code implementations NeurIPS 2020 Zhining Liu, Pengfei Wei, Jing Jiang, Wei Cao, Jiang Bian, Yi Chang

This makes MESA generally applicable to most of the existing learning models and the meta-sampler can be efficiently applied to new tasks.

imbalanced classification Meta-Learning

Counterfactual Variable Control for Robust and Interpretable Question Answering

1 code implementation 12 Oct 2020 Sicheng Yu, Yulei Niu, Shuohang Wang, Jing Jiang, Qianru Sun

We then conduct two novel CVC inference methods (on trained models) to capture the effect of comprehensive reasoning as the final prediction.

Causal Inference Multiple choice QA +1

Improving Long-Tail Relation Extraction with Collaborating Relation-Augmented Attention

2 code implementations COLING 2020 Yang Li, Tao Shen, Guodong Long, Jing Jiang, Tianyi Zhou, Chengqi Zhang

Then, facilitated by the proposed base model, we introduce collaborating relation features shared among relations in the hierarchies to promote the relation-augmenting process and balance the training data for long-tail relations.

Relation Extraction

Cross-Thought for Sentence Encoder Pre-training

1 code implementation EMNLP 2020 Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, Jingjing Liu

In this paper, we propose Cross-Thought, a novel approach to pre-training a sequence encoder, which is instrumental in building reusable sequence embeddings for large-scale NLP tasks such as question answering.

Information Retrieval Language Modelling +2

Context Modeling with Evidence Filter for Multiple Choice Question Answering

no code implementations 6 Oct 2020 Sicheng Yu, Hao Zhang, Wei Jing, Jing Jiang

In addition to the effective reduction of human effort, through extensive experiments on OpenbookQA we show that the proposed approach outperforms models that use the same backbone and more training data; our parameter analysis also demonstrates the interpretability of our approach.

Machine Reading Comprehension Question Answering

Attribute Propagation Network for Graph Zero-shot Learning

no code implementations 24 Sep 2020 Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

To address this challenging task, most ZSL methods relate unseen test classes to seen (training) classes via a pre-defined set of attributes that can describe all classes in the same semantic space, so the knowledge learned on the training classes can be adapted to unseen classes.

Meta-Learning Zero-Shot Learning
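
For context, the generic attribute-based scoring the abstract refers to (not the proposed method itself) can be sketched as nearest-class-embedding search in the shared attribute space; the function name and the cosine-similarity choice are mine.

```python
import torch
import torch.nn.functional as F

def attribute_zsl_predict(image_features: torch.Tensor,
                          projection: torch.nn.Module,
                          class_attributes: torch.Tensor) -> torch.Tensor:
    """projection: e.g. nn.Linear(feature_dim, attr_dim) mapping images into the
    attribute space; class_attributes: (num_classes, attr_dim), one row per class.
    Unseen classes are handled the same way because they share that space."""
    z = F.normalize(projection(image_features), dim=-1)   # (batch, attr_dim)
    a = F.normalize(class_attributes, dim=-1)             # (num_classes, attr_dim)
    return (z @ a.t()).argmax(dim=-1)                     # predicted class index
```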

BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes

1 code implementation 24 Sep 2020 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang, Chengqi Zhang

Electronic health records (EHRs) are longitudinal records of a patient's interactions with healthcare systems.

Improving Multimodal Named Entity Recognition via Entity Span Detection with Unified Multimodal Transformer

no code implementations ACL 2020 Jianfei Yu, Jing Jiang, Li Yang, Rui Xia

To tackle the first issue, we propose a multimodal interaction module to obtain both image-aware word representations and word-aware visual representations.

Named Entity Recognition

Query Graph Generation for Answering Multi-hop Complex Questions from Knowledge Bases

no code implementations ACL 2020 Yunshi Lan, Jing Jiang

Previous work on answering complex questions from knowledge bases usually separately addresses two types of complexity: questions with constraints and questions with multiple hops of relations.

Graph Generation

Many-Class Few-Shot Learning on Multi-Granularity Class Hierarchy

1 code implementation 28 Jun 2020 Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

We study the many-class few-shot (MCFS) problem in both supervised learning and meta-learning settings.

Few-Shot Learning

Self-Attention Enhanced Patient Journey Understanding in Healthcare System

1 code implementation 15 Jun 2020 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang

The key challenge of patient journey understanding is to design an effective encoding mechanism which can properly tackle the aforementioned multi-level structured patient journey data with temporal sequential visits and a set of medical codes.

Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks

3 code implementations 24 May 2020 Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, Chengqi Zhang

Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields including economics, finance, and traffic.

Graph Learning Multivariate Time Series Forecasting +2

Online Non-convex Learning for River Pollution Source Identification

no code implementations 22 May 2020 Wenjie Huang, Jing Jiang, Xiao Liu

In this paper, novel gradient based online learning algorithms are developed to investigate an important environmental application: real-time river pollution source identification, which aims at estimating the released mass, the location and the released time of a river pollution source based on downstream sensor data monitoring the pollution concentration.

Multi-Center Federated Learning

4 code implementations 3 May 2020 Ming Xie, Guodong Long, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang

However, due to the diverse nature of user behaviors, assigning users' gradients to different global models (i.e., centers) can better capture the heterogeneity of data distributions across users.

Federated Learning
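
A rough, k-means-style sketch of assigning clients to multiple global models (centers) and aggregating within each cluster, illustrating the idea in the abstract under my own naming and distance choice rather than the paper's exact optimization.

```python
import torch

def flatten(state_dict: dict) -> torch.Tensor:
    """Concatenate all parameters of one model into a single vector."""
    return torch.cat([p.reshape(-1) for p in state_dict.values()])

def multi_center_round(client_models: list, centers: list):
    """Assign each client model to the closest center by L2 distance, then set
    each center to the mean of the client models assigned to it."""
    center_vecs = [flatten(c) for c in centers]
    assignments = [
        int(torch.argmin(torch.stack([torch.norm(flatten(m) - v) for v in center_vecs])))
        for m in client_models
    ]
    new_centers = []
    for k, center in enumerate(centers):
        members = [m for m, a in zip(client_models, assignments) if a == k]
        if not members:                            # keep an empty center unchanged
            new_centers.append(center)
            continue
        new_centers.append({name: torch.stack([m[name] for m in members]).mean(dim=0)
                            for name in center})
    return new_centers, assignments
```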

Aspect and Opinion Aware Abstractive Review Summarization with Reinforced Hard Typed Decoder

no code implementations 13 Apr 2020 Yufei Tian, Jianfei Yu, Jing Jiang

In this paper, we study abstractive review summarization. Observing that review summaries often consist of aspect words, opinion words and context words, we propose a two-stage reinforcement learning approach, which first predicts the output word type from the three types, and then leverages the predicted word type to generate the final word distribution. Experimental results on two Amazon product review datasets demonstrate that our method can consistently outperform several strong baseline approaches based on ROUGE scores.

Rethinking 1D-CNN for Time Series Classification: A Stronger Baseline

2 code implementations 24 Feb 2020 Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Jing Jiang, Michael Blumenstein

For the time series classification task using a 1D-CNN, the selection of kernel size is critically important to ensure the model can capture salient signals at the right scale from a long time series.

Classification General Classification +2

Interpretable Rumor Detection in Microblogs by Attending to User Interactions

1 code implementation 29 Jan 2020 Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, Jing Jiang

We propose a post-level attention model (PLAN) to model long distance interactions between tweets with the multi-head attention mechanism in a transformer network.
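
The post-level attention idea can be sketched with a standard transformer encoder applied over per-post vectors, so every tweet in a conversation can attend to every other; the dimensions below are assumed defaults and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PostLevelAttention(nn.Module):
    """Self-attention over per-post vectors followed by a thread-level classifier."""
    def __init__(self, post_dim: int = 256, num_classes: int = 4, num_layers: int = 2):
        super().__init__()
        # post_dim must be divisible by the number of attention heads.
        layer = nn.TransformerEncoderLayer(d_model=post_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(post_dim, num_classes)

    def forward(self, post_vectors: torch.Tensor) -> torch.Tensor:
        # post_vectors: (batch, num_posts, post_dim), one vector per tweet.
        h = self.encoder(post_vectors)
        return self.classifier(h.mean(dim=1))      # pool over posts, then classify
```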

Multimodal Story Generation on Plural Images

no code implementations 16 Jan 2020 Jing Jiang

In this work, we propose an architecture, called StoryGen, that uses images instead of text as the input to the text generation model.

Story Generation

What does BERT Learn from Multiple-Choice Reading Comprehension Datasets?

no code implementations 28 Oct 2019 Chenglei Si, Shuohang Wang, Min-Yen Kan, Jing Jiang

Based on our experiments on the 5 key MCRC datasets - RACE, MCTest, MCScript, MCScript2.0, DREAM - we observe that 1) fine-tuned BERT mainly learns how keywords lead to the correct prediction, instead of learning semantic understanding and reasoning; 2) BERT does not need correct syntactic information to solve the task; and 3) there exist artifacts in these datasets such that they can be solved even without the full context.

Reading Comprehension

Temporal Self-Attention Network for Medical Concept Embedding

1 code implementation 15 Sep 2019 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang, Michael Blumenstein

In this paper, we propose a medical concept embedding method based on applying a self-attention mechanism to represent each medical concept.

Learning to Propagate for Graph Meta-Learning

1 code implementation NeurIPS 2019 Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

It can significantly improve tasks that suffer from insufficient training data, e.g., few-shot learning.

Few-Shot Image Classification

Graph WaveNet for Deep Spatial-Temporal Graph Modeling

7 code implementations 31 May 2019 Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Chengqi Zhang

Spatial-temporal graph modeling is an important task to analyze the spatial relations and temporal trends of components in a system.

Traffic Prediction
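
One ingredient associated with this line of work is a self-adaptive adjacency matrix learned from node embeddings, which lets a graph convolution run even when no predefined graph is available. The sketch below shows that construction (a softmax over the ReLU of an embedding product); the embedding size is an assumed hyperparameter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdjacency(nn.Module):
    """Learned, row-normalized adjacency matrix inferred end-to-end from data."""
    def __init__(self, num_nodes: int, emb_dim: int = 10):
        super().__init__()
        self.src = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.dst = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self) -> torch.Tensor:
        # Non-negative scores between every pair of nodes, normalized per row.
        return F.softmax(F.relu(self.src @ self.dst.t()), dim=1)
```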

MahiNet: A Neural Network for Many-Class Few-Shot Learning with Class Hierarchy

no code implementations ICLR 2019 Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

It addresses the "many-class" problem by exploring the class hierarchy, e.g., the coarse-class label that covers a subset of fine classes, which helps to narrow down the candidates for the fine class and is cheaper to obtain.

Few-Shot Learning General Classification

DAGCN: Dual Attention Graph Convolutional Networks

1 code implementation 4 Apr 2019 Fengwen Chen, Shirui Pan, Jing Jiang, Huan Huo, Guodong Long

In this paper, we propose a novel framework, called dual attention graph convolutional networks (DAGCN), to address these problems.

General Classification Graph Classification +1

Learning Graph Embedding with Adversarial Training Methods

no code implementations 4 Jan 2019 Shirui Pan, Ruiqi Hu, Sai-fu Fung, Guodong Long, Jing Jiang, Chengqi Zhang

Based on this framework, we derive two variants of adversarial models, the adversarially regularized graph autoencoder (ARGA) and its variational version, adversarially regularized variational graph autoencoder (ARVGA), to learn the graph embedding effectively.

Graph Clustering Graph Embedding +2

Learning Private Neural Language Modeling with Attentive Aggregation

3 code implementations 17 Dec 2018 Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, Zi Huang

Federated learning (FL) provides a promising approach to learning private language modeling for intelligent personalized keyboard suggestion by training models in distributed clients rather than training in a central server.

Federated Learning Language Modelling
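
A rough sketch of layer-wise attentive aggregation on the server: each client's parameters are weighted by a softmax over their distances to the current server parameters, and the server moves toward the weighted combination. The sign convention and step size are my own simplifying choices, not necessarily the paper's exact weighting.

```python
import torch
import torch.nn.functional as F

def attentive_aggregate(server: dict, clients: list, step: float = 1.0) -> dict:
    """server: name -> tensor; clients: list of state dicts with the same keys."""
    new_server = {}
    for name, w_s in server.items():
        dists = torch.stack([torch.norm(w_s - c[name]) for c in clients])
        att = F.softmax(-dists, dim=0)     # one simple choice: closer clients weigh more
        delta = sum(a * (c[name] - w_s) for a, c in zip(att, clients))
        new_server[name] = w_s + step * delta
    return new_server
```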

Did you take the pill? - Detecting Personal Intake of Medicine from Twitter

no code implementations 3 Aug 2018 Debanjan Mahata, Jasper Friedrichs, Rajiv Ratn Shah, Jing Jiang

We believe that the developed classifier has direct uses in the areas of psychology, health informatics, pharmacovigilance and affective computing for tracking moods, emotions and sentiments of patients expressing intake of medicine in social media.

Embedding WordNet Knowledge for Textual Entailment

no code implementations COLING 2018 Yunshi Lan, Jing Jiang

In this paper, we study how we can improve a deep learning approach to textual entailment by incorporating lexical entailment relations from WordNet.

Feature Engineering Lexical Entailment +1

A Co-Matching Model for Multi-choice Reading Comprehension

1 code implementation ACL 2018 Shuohang Wang, Mo Yu, Shiyu Chang, Jing Jiang

Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair.

Reading Comprehension

Tensorized Self-Attention: Efficiently Modeling Pairwise and Global Dependencies Together

2 code implementations NAACL 2019 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

Neural networks equipped with self-attention have parallelizable computation, light-weight structure, and the ability to capture both long-range and local dependencies.

Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling

1 code implementation ICLR 2018 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding.

Adversarially Regularized Graph Autoencoder for Graph Embedding

4 code implementations 13 Feb 2018 Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang

Graph embedding is an effective method to represent graph data in a low dimensional space for graph analytics.

Ranked #1 on Graph Clustering on Cora (F1 metric)

Graph Clustering Graph Embedding +1

Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling

1 code implementation 31 Jan 2018 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, Chengqi Zhang

In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", for the mutual benefit of each other.

Natural Language Inference

Modelling Domain Relationships for Transfer Learning on Retrieval-based Question Answering Systems in E-commerce

1 code implementation 23 Nov 2017 Jianfei Yu, Minghui Qiu, Jing Jiang, Jun Huang, Shuangyong Song, Wei Chu, Haiqing Chen

In this paper, we study transfer learning for the PI and NLI problems, aiming to propose a general framework, which can effectively and efficiently adapt the shared knowledge learned from a resource-rich source domain to a resource-poor target domain.

Chatbot Natural Language Inference +3

Leveraging Auxiliary Tasks for Document-Level Cross-Domain Sentiment Classification

no code implementations IJCNLP 2017 Jianfei Yu, Jing Jiang

In this paper, we study domain adaptation with a state-of-the-art hierarchical neural network for document-level sentiment classification.

Classification Denoising +6

DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding

1 code implementation 14 Sep 2017 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, Chengqi Zhang

Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively.

Natural Language Inference Sentence Embedding

Can Syntax Help? Improving an LSTM-based Sentence Compression Model for New Domains

no code implementations ACL 2017 Liangguo Wang, Jing Jiang, Hai Leong Chieu, Chen Hui Ong, Dandan Song, Lejian Liao

In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression.

Sentence Compression Tokenization

A Compare-Aggregate Model for Matching Text Sequences

2 code implementations 6 Nov 2016 Shuohang Wang, Jing Jiang

We particularly focus on the different comparison functions we can use to match two vectors.

Answer Selection Reading Comprehension
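
To make the comparison functions concrete, here is a small sketch of a few common ways to match two vectors (element-wise difference, element-wise product, and a small neural projection of both); the class and method names are mine.

```python
import torch
import torch.nn as nn

class Comparison(nn.Module):
    """Comparison functions for matching a text vector t with an attended vector a."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())

    def sub(self, t, a):          # element-wise (squared) difference
        return (t - a) ** 2

    def mult(self, t, a):         # element-wise product
        return t * a

    def submult_nn(self, t, a):   # concatenate both views and project
        return self.proj(torch.cat([self.sub(t, a), self.mult(t, a)], dim=-1))
```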
