Search Results for author: Tian Shi

Found 10 papers, 7 papers with code

Composite learning backstepping control with guaranteed exponential stability and robustness

no code implementations19 Jan 2024 Tian Shi, Changyun Wen, Yongping Pan

This paper proposes a composite learning backstepping control (CLBC) strategy based on modular backstepping and high-order tuners to compensate for the transient process of parameter estimation and achieve closed-loop exponential stability without nonlinear damping terms or the persistent excitation (PE) condition.
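
No code is listed for this paper, so the following is only a minimal NumPy sketch of a generic composite-learning adaptation law: the parameter update combines the usual tracking-error term with a prediction-error term built from filtered regressor data, which is the basic idea behind composite learning. The scalar plant, the regressor phi, the gains, and the forgetting filter are illustrative assumptions, not the paper's CLBC design.

```python
# Minimal NumPy sketch of a generic composite-learning adaptive law (illustrative
# assumptions throughout; NOT the paper's CLBC design). The unknown parameter of a
# scalar plant xdot = theta*phi(x) + u is estimated with an update that combines a
# tracking-error term with a prediction-error term built from filtered regressor data.
import numpy as np

dt, T = 1e-3, 10.0
theta_true = 2.0                    # unknown plant parameter
x, theta_hat = 0.0, 0.0
W, N = 0.0, 0.0                     # filtered memory: W ~ integral of phi^2, N ~ integral of phi*(xdot - u)
gamma, kappa, k, lam = 50.0, 20.0, 5.0, 1.0   # adaptation gain, composite gain, feedback gain, forgetting

for step in range(int(T / dt)):
    t = step * dt
    xd, xd_dot = np.sin(t), np.cos(t)          # reference trajectory and its derivative
    phi = np.tanh(x)                            # known regressor
    e = x - xd                                  # tracking error
    u = -k * e - theta_hat * phi + xd_dot       # certainty-equivalence control law
    xdot = theta_true * phi + u                 # plant response (treated as measurable here)

    # accumulate regressor/output data with forgetting (interval-excitation memory)
    W += dt * (-lam * W + phi * phi)
    N += dt * (-lam * N + phi * (xdot - u))

    # composite update: tracking-error feedback plus prediction-error feedback
    theta_hat += dt * gamma * (phi * e - kappa * (W * theta_hat - N))
    x += dt * xdot

print(f"theta_hat = {theta_hat:.3f}, true theta = {theta_true}")
```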

Event Detection Explorer: An Interactive Tool for Event Detection Exploration

no code implementations26 Apr 2022 Wenlong Zhang, Bhagyashree Ingale, Hamza Shabir, Tianyi Li, Tian Shi, Ping Wang

ED Explorer consists of an interactive web application, an API, and an NLP toolkit, helping both domain experts and non-experts better understand the ED task.

Event Detection

Attention-based Aspect Reasoning for Knowledge Base Question Answering on Clinical Notes

no code implementations1 Aug 2021 Ping Wang, Tian Shi, Khushbu Agarwal, Sutanay Choudhury, Chandan K. Reddy

The entity and context aspects, on the other hand, constrain the answers with node-specific information, leading to higher precision and lower recall.

Knowledge Base Question Answering, Machine Reading Comprehension

A Simple and Effective Self-Supervised Contrastive Learning Framework for Aspect Detection

1 code implementation18 Sep 2020 Tian Shi, Liuqing Li, Ping Wang, Chandan K. Reddy

However, recent deep learning-based topic models, specifically the aspect-based autoencoder, suffer from several problems, such as extracting noisy aspects and poorly mapping the aspects discovered by the model to the aspects of interest.

Contrastive Learning, Topic Models
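
As a rough illustration of a contrastive objective for aspect detection, the sketch below contrasts each sentence embedding with its reconstruction from a small learnable aspect matrix against in-batch negatives. The encoder stand-in, dimensions, and temperature are assumptions; the paper's released implementation should be consulted for the actual model.

```python
# Hedged PyTorch sketch of a contrastive objective over aspect-based reconstructions
# (illustrative assumptions; not the paper's exact architecture or loss).
import torch
import torch.nn.functional as F

batch, dim, n_aspects, tau = 32, 128, 10, 0.1
aspect_matrix = torch.nn.Parameter(torch.randn(n_aspects, dim))   # learnable aspect embeddings

def contrastive_aspect_loss(sent_emb):
    # soft aspect assignment and reconstruction of each sentence from the aspect matrix
    weights = F.softmax(sent_emb @ aspect_matrix.t(), dim=-1)      # (batch, n_aspects)
    recon = weights @ aspect_matrix                                 # (batch, dim)

    # in-batch contrastive loss: each sentence should match its own reconstruction
    z1 = F.normalize(sent_emb, dim=-1)
    z2 = F.normalize(recon, dim=-1)
    logits = z1 @ z2.t() / tau                                      # (batch, batch) similarity matrix
    targets = torch.arange(batch)                                   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

sent_emb = torch.randn(batch, dim)       # stand-in for encoder output (e.g., averaged word vectors)
loss = contrastive_aspect_loss(sent_emb)
loss.backward()
print(loss.item(), aspect_matrix.grad.shape)
```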

An Interpretable and Uncertainty Aware Multi-Task Framework for Multi-Aspect Sentiment Analysis

2 code implementations18 Sep 2020 Tian Shi, Ping Wang, Chandan K. Reddy

We also propose an Attention-driven Keywords Ranking (AKR) method that automatically discovers aspect keywords and aspect-level opinion keywords from the review corpus based on attention weights.

Extract Aspect, Multi-Task Learning +3
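
A minimal sketch of the attention-aggregation idea behind such keyword ranking: accumulate each token's attention mass per aspect across a corpus and return the top-scoring tokens. The data layout, the rank_keywords helper, and the sum-of-attention scoring are illustrative assumptions rather than the paper's exact AKR formulation.

```python
# Hedged sketch of attention-driven keyword ranking (illustrative, not the paper's AKR).
from collections import defaultdict

def rank_keywords(examples, top_k=10):
    """examples: iterable of (tokens, attention_weights, aspect_label) triples,
    where attention_weights is a list of floats aligned with tokens."""
    scores = defaultdict(lambda: defaultdict(float))
    for tokens, weights, aspect in examples:
        for tok, w in zip(tokens, weights):
            scores[aspect][tok] += w          # accumulate attention mass per aspect
    return {
        aspect: sorted(tok_scores, key=tok_scores.get, reverse=True)[:top_k]
        for aspect, tok_scores in scores.items()
    }

corpus = [
    (["the", "battery", "dies", "fast"], [0.05, 0.6, 0.25, 0.1], "battery"),
    (["great", "battery", "life"],       [0.2, 0.5, 0.3],        "battery"),
    (["screen", "is", "too", "dim"],     [0.55, 0.05, 0.1, 0.3], "display"),
]
print(rank_keywords(corpus, top_k=2))
```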

Corpus-level and Concept-based Explanations for Interpretable Document Classification

1 code implementation24 Apr 2020 Tian Shi, Xuchao Zhang, Ping Wang, Chandan K. Reddy

In this paper, we propose a corpus-level explanation approach that captures causal relationships between keywords and model predictions by learning, from attention weights, the importance of keywords for predicted labels across the training corpus.

Classification, Decision Making +2
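
A hedged NumPy sketch of the corpus-level aggregation idea: average each token's attention weight over the training documents that received a given predicted label, yielding a vocabulary-by-label importance table. The toy vocabulary, attention matrix, and labels are illustrative stand-ins; the paper's released code implements the actual learning procedure.

```python
# Hedged sketch of corpus-level keyword importance per predicted label
# (illustrative aggregation only, not the paper's exact method).
import numpy as np

vocab = ["refund", "delivery", "broken", "friendly", "price"]
attention = np.array([            # (n_docs, vocab) attention mass per document
    [0.60, 0.10, 0.20, 0.05, 0.05],
    [0.05, 0.70, 0.10, 0.05, 0.10],
    [0.50, 0.05, 0.35, 0.05, 0.05],
])
pred_labels = np.array([0, 1, 0])  # predicted class per document

# mean attention per token, grouped by predicted label -> (n_labels, vocab)
importance = np.stack([attention[pred_labels == c].mean(axis=0) for c in np.unique(pred_labels)])
for c, row in enumerate(importance):
    top = [vocab[i] for i in np.argsort(row)[::-1][:2]]
    print(f"label {c}: top keywords {top}")
```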

Text-to-SQL Generation for Question Answering on Electronic Medical Records

1 code implementation28 Jul 2019 Ping Wang, Tian Shi, Chandan K. Reddy

In this paper, we tackle these challenges by developing a deep learning-based TRanslate-Edit Model for Question-to-SQL (TREQS) generation, which adapts the widely used sequence-to-sequence model to directly generate the SQL query for a given question and then performs the required edits using an attentive-copying mechanism and task-specific look-up tables.

Information Retrieval, Question Answering +2
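
A hedged sketch of one attentive-copy decoding step of the kind TREQS-style models use: the output distribution mixes a generation distribution over a fixed SQL vocabulary with a copy distribution over question tokens given by attention, controlled by a generation gate. The shapes, the random stand-ins for decoder outputs, and the single shared vocabulary are assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a generate-vs-copy mixture at one decoding step (illustrative only).
import torch
import torch.nn.functional as F

vocab_size, src_len = 50, 12
src_ids = torch.randint(0, vocab_size, (src_len,))       # question tokens mapped into the vocabulary

gen_logits = torch.randn(vocab_size)                      # stand-in for the decoder's output layer
attn = F.softmax(torch.randn(src_len), dim=-1)            # attention over question tokens
p_gen = torch.sigmoid(torch.randn(1))                     # generation vs. copy gate

# mix: p_gen * generate-from-vocab + (1 - p_gen) * copy-from-question
dist = p_gen * F.softmax(gen_logits, dim=-1)
dist = dist.scatter_add(0, src_ids, (1 - p_gen) * attn)

print(dist.sum().item())      # ~1.0: still a valid distribution over the vocabulary
print(dist.argmax().item())   # token chosen at this step (greedy)
```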

LeafNATS: An Open-Source Toolkit and Live Demo System for Neural Abstractive Text Summarization

1 code implementation NAACL 2019 Tian Shi, Ping Wang, Chandan K. Reddy

Neural abstractive text summarization (NATS) has received considerable attention from both industry and academia in the past few years.

Abstractive Text Summarization

Neural Abstractive Text Summarization with Sequence-to-Sequence Models

5 code implementations5 Dec 2018 Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, Chandan K. Reddy

As part of this survey, we also develop an open-source library, the Neural Abstractive Text Summarizer (NATS) toolkit, for abstractive text summarization.

Abstractive Text Summarization, Language Modelling +1

Deep Reinforcement Learning For Sequence to Sequence Models

3 code implementations24 May 2018 Yaser Keneshloo, Tian Shi, Naren Ramakrishnan, Chandan K. Reddy

In this survey, we consider seq2seq problems from the RL point of view and provide a formulation that combines the decision-making power of RL methods with sequence-to-sequence models capable of retaining long-term memory.

Abstractive Text Summarization, Decision Making +4
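
A hedged sketch of the REINFORCE-style objective such surveys cover for seq2seq models: sample an output sequence, score it with a sequence-level reward, and scale the sequence log-likelihood by the reward minus a self-critical (greedy-decode) baseline. The toy logits and reward below are illustrative stand-ins for a real decoder and a metric such as ROUGE.

```python
# Hedged sketch of a policy-gradient (self-critical) loss for sequence generation.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 8
logits = torch.randn(seq_len, vocab_size, requires_grad=True)   # stand-in for decoder outputs

def toy_reward(tokens):
    # placeholder for a sequence-level metric such as ROUGE or BLEU
    return float((tokens % 2 == 0).float().mean())

log_probs = F.log_softmax(logits, dim=-1)
sampled = torch.multinomial(log_probs.exp(), num_samples=1).squeeze(-1)   # sampled sequence
greedy = log_probs.argmax(dim=-1)                                          # greedy baseline sequence

reward = toy_reward(sampled)
baseline = toy_reward(greedy)                  # self-critical baseline

seq_log_prob = log_probs[torch.arange(seq_len), sampled].sum()
loss = -(reward - baseline) * seq_log_prob     # policy-gradient surrogate loss
loss.backward()
print(loss.item(), logits.grad.shape)
```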
