Search Results for author: Tao Shen

Found 36 papers, 17 papers with code

KnowDA: All-in-One Knowledge Mixture Model for Data Augmentation in Few-Shot NLP

no code implementations 21 Jun 2022 YuFei Wang, Jiayi Zheng, Can Xu, Xiubo Geng, Tao Shen, Chongyang Tao, Daxin Jiang

To combat this issue, we propose the Knowledge Mixture Data Augmentation Model (KnowDA): an encoder-decoder LM pretrained on a mixture of diverse NLP tasks using Knowledge Mixture Training (KoMT).

Data Augmentation Denoising +3

Towards Robust Ranker for Text Retrieval

no code implementations 16 Jun 2022 Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Guodong Long, Binxing Jiao, Daxin Jiang

A ranker plays an indispensable role in the de facto 'retrieval & rerank' pipeline, but its training still lags behind -- it either learns from moderate negatives or merely serves as an auxiliary module for a retriever.

Passage Retrieval

PCL: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings

no code implementations 28 Jan 2022 Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Daxin Jiang

A straightforward solution is to resort to more diverse positives from a multi-augmenting strategy, but it remains an open question how to learn, without supervision, from diverse positives of uneven augmentation quality in the text field.

Contrastive Learning Natural Language Processing +1

Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI

1 code implementation 11 Nov 2021 Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, Anpeng Wu, Fengda Zhang, Ziqi Tan, Kun Kuang, Chao Wu, Fei Wu, Jingren Zhou, Hongxia Yang

However, edge computing, especially edge-cloud collaborative computing, is still in its infancy, and its success has yet to be demonstrated owing to resource-constrained IoT scenarios in which very few algorithms are deployed.

Edge-computing

EventBERT: A Pre-Trained Model for Event Correlation Reasoning

no code implementations 13 Oct 2021 Yucheng Zhou, Xiubo Geng, Tao Shen, Guodong Long, Daxin Jiang

Event correlation reasoning infers whether a natural language paragraph containing multiple events conforms to human common sense.

Cloze Test Common Sense Reasoning +1

Hierarchical Relation-Guided Type-Sentence Alignment for Long-Tail Relation Extraction with Distant Supervision

no code implementations 19 Sep 2021 Yang Li, Guodong Long, Tao Shen, Jing Jiang

It consists of (1) a pairwise type-enriched sentence encoding module injecting both context-free and -related backgrounds to alleviate sentence-level wrong labeling, and (2) a hierarchical type-sentence alignment module enriching a sentence with the triple fact's basic attributes to support long-tail relations.

Knowledge Graphs Relation Extraction +1

Sequential Diagnosis Prediction with Transformer and Ontological Representation

1 code implementation 7 Sep 2021 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang

Sequential diagnosis prediction on the Electronic Health Record (EHR) has been proven crucial for predictive analytics in the medical domain.

Sequential Diagnosis

Financing Entrepreneurship and Innovation in China

no code implementations 24 Aug 2021 Lin William Cong, Charles M. C. Lee, Yuanyu Qu, Tao Shen

This study reports on the current state-of-affairs in the funding of entrepreneurship and innovations in China and provides a broad survey of academic findings on the subject.

Federated Learning for Privacy-Preserving Open Innovation Future on Digital Health

no code implementations 24 Aug 2021 Guodong Long, Tao Shen, Yue Tan, Leah Gerrard, Allison Clarke, Jing Jiang

Implementing an open innovation framework in the healthcare industry, namely open health, enhances the innovation and creative capability of health-related organisations by building a next-generation collaborative framework with partner organisations and the research community.

Federated Learning Privacy Preserving

Multi-Center Federated Learning

1 code implementation 19 Aug 2021 Ming Xie, Guodong Long, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang

By comparison, a mixture of multiple global models could capture the heterogeneity across various users if assigning the users to different global models (i.e., centers) in FL.

Federated Learning

Dynamic Prediction Model for NOx Emission of SCR System Based on Hybrid Data-driven Algorithms

no code implementations 3 Aug 2021 Zhenhao Tang, Shikui Wang, Shengxian Cao, Yang Li, Tao Shen

To address the difficulty of determining delay time and the low prediction accuracy when building a prediction model for an SCR system, a dynamic modeling scheme based on a hybrid of multiple data-driven algorithms is proposed.

Feature Selection Time Series

Improving Zero-Shot Cross-lingual Transfer for Multilingual Question Answering over Knowledge Graph

no code implementations NAACL 2021 Yucheng Zhou, Xiubo Geng, Tao Shen, Wenqiang Zhang, Daxin Jiang

That is, we can only access training data in a high-resource language, yet need to answer multilingual questions without any labeled data in the target languages.

Bilingual Lexicon Induction Question Answering +1

Federated Graph Learning -- A Position Paper

no code implementations 24 May 2021 Huanding Zhang, Tao Shen, Fei Wu, Mingyang Yin, Hongxia Yang, Chao Wu

Federated learning (FL) is an emerging technique that can collaboratively train a shared model while keeping the data decentralized, making it a rational solution for distributed GNN training.

Federated Learning Graph Learning

EBM-Fold: Fully-Differentiable Protein Folding Powered by Energy-based Models

no code implementations 11 May 2021 Jiaxiang Wu, Shitong Luo, Tao Shen, Haidong Lan, Sheng Wang, Junzhou Huang

In this paper, we propose a fully-differentiable approach for protein structure optimization, guided by a data-driven generative network.

Denoising Protein Folding +1

Federated Unsupervised Representation Learning

no code implementations 18 Oct 2020 Fengda Zhang, Kun Kuang, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Yueting Zhuang, Xiaolin Li

FURL poses two new challenges: (1) data distribution shift (Non-IID distribution) among clients would make local models focus on different categories, leading to the inconsistency of representation spaces.

Federated Learning Representation Learning

Improving Long-Tail Relation Extraction with Collaborating Relation-Augmented Attention

2 code implementations COLING 2020 Yang Li, Tao Shen, Guodong Long, Jing Jiang, Tianyi Zhou, Chengqi Zhang

Then, facilitated by the proposed base model, we introduce collaborating relation features shared among relations in the hierarchies to promote the relation-augmenting process and balance the training data for long-tail relations.

Relation Extraction

BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes

1 code implementation 24 Sep 2020 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang, Chengqi Zhang

Electronic health records (EHRs) are longitudinal records of a patient's interactions with healthcare systems.

Federated Mutual Learning

1 code implementation 27 Jun 2020 Tao Shen, Jie Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou, Kun Kuang, Fei Wu, Chao Wu

The experiments show that FML achieves better performance than alternatives in a typical FL setting, and that clients can benefit from FML even with different models and tasks.

Federated Learning

Self-Attention Enhanced Patient Journey Understanding in Healthcare System

1 code implementation 15 Jun 2020 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang

The key challenge of patient journey understanding is to design an effective encoding mechanism which can properly tackle the aforementioned multi-level structured patient journey data with temporal sequential visits and a set of medical codes.

Multi-Center Federated Learning

4 code implementations 3 May 2020 Ming Xie, Guodong Long, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, Chengqi Zhang

However, due to the diverse nature of user behaviors, assigning users' gradients to different global models (i.e., centers) can better capture the heterogeneity of data distributions across users.

Federated Learning

Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion

1 code implementation 30 Apr 2020 Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang

In experiments, we achieve state-of-the-art performance on three benchmarks and a zero-shot dataset for link prediction, with highlights of inference costs reduced by 1-2 orders of magnitude compared to a textual encoding method.

Graph Embedding Knowledge Graph Completion +2

Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning

no code implementations EMNLP 2020 Tao Shen, Yi Mao, Pengcheng He, Guodong Long, Adam Trischler, Weizhu Chen

In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training, to inject language models with structured knowledge via learning from raw text.

Entity Linking Knowledge Base Completion +4

Temporal Self-Attention Network for Medical Concept Embedding

1 code implementation 15 Sep 2019 Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Jing Jiang, Michael Blumenstein

In this paper, we propose a medical concept embedding method based on applying a self-attention mechanism to represent each medical concept.

Tensorized Self-Attention: Efficiently Modeling Pairwise and Global Dependencies Together

2 code implementations NAACL 2019 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

Neural networks equipped with self-attention have parallelizable computation, light-weight structure, and the ability to capture both long-range and local dependencies.

Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling

1 code implementation ICLR 2018 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang

In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding.

Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling

1 code implementation 31 Jan 2018 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, Chengqi Zhang

In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", so that each benefits the other.

Hard Attention Natural Language Inference +1

DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding

1 code implementation 14 Sep 2017 Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, Chengqi Zhang

Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively.

Natural Language Inference Sentence Embedding +1
