Search Results for author: Lei Sha

Found 30 papers, 5 papers with code

Gradient-guided Unsupervised Lexically Constrained Text Generation

no code implementations · EMNLP 2020 · Lei Sha

Previous work usually applies beam-search-based methods or stochastic search methods to lexically constrained generation.

Paraphrase Generation, Sentence
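
The snippet above refers to lexically constrained generation, where the output must contain given keywords. As a rough illustration of the beam-search-style baseline it mentions (not the paper's gradient-guided method), the sketch below filters beam hypotheses by how many required keywords they cover; the toy vocabulary and uniform stand-in language model are invented for the example.

```python
# Minimal sketch of lexically constrained decoding (the baseline the snippet
# refers to), not the paper's gradient-guided method. The toy language model
# `toy_next_scores` and the vocabulary are invented for illustration.
import math

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def toy_next_scores(prefix):
    """Return a uniform log-probability for every vocabulary item (stand-in LM)."""
    return {w: -math.log(len(VOCAB)) for w in VOCAB}

def constrained_beam_search(constraints, beam_size=3, max_len=6):
    beams = [([], 0.0)]  # (tokens, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == "<eos>":
                candidates.append((tokens, score))
                continue
            for w, s in toy_next_scores(tokens).items():
                candidates.append((tokens + [w], score + s))
        # Prefer hypotheses that cover more of the required keywords,
        # then break ties by log-probability.
        candidates.sort(key=lambda c: (-sum(k in c[0] for k in constraints), -c[1]))
        beams = candidates[:beam_size]
    return beams[0][0]

print(constrained_beam_search({"cat", "mat"}))
```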

ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors

1 code implementation · 26 Feb 2024 · Zhexin Zhang, Yida Lu, Jingyuan Ma, Di Zhang, Rui Li, Pei Ke, Hao Sun, Lei Sha, Zhifang Sui, Hongning Wang, Minlie Huang

The safety of Large Language Models (LLMs) has gained increasing attention in recent years, but a comprehensive approach for detecting safety issues within LLMs' responses in an aligned, customizable, and explainable manner is still lacking.

From Noise to Clarity: Unraveling the Adversarial Suffix of Large Language Model Attacks via Translation of Text Embeddings

no code implementations · 25 Feb 2024 · Hao Wang, Hao Li, Minlie Huang, Lei Sha

The safety defense methods of large language models (LLMs) remain limited because the dangerous prompts are manually curated for just a few known attack types, which fails to keep pace with emerging varieties.

Language Modelling, Large Language Model

Harnessing the Plug-and-Play Controller by Prompting

no code implementations · 6 Feb 2024 · Hao Wang, Lei Sha

The proposed approach aims to enhance the fluency of generated text by guiding the generation process with plug-and-play controllers (PPCs).

Attribute, Language Modelling +1
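
As a point of reference for the plug-and-play controller (PPC) idea, here is a generic sketch in which an external attribute scorer re-weights the base language model's next-token distribution at each decoding step. The stand-in logits and attribute scores are placeholders, and the paper's prompting-based variant is not reproduced here.

```python
# Generic sketch of a plug-and-play controller nudging next-token probabilities
# toward a desired attribute: an external attribute scorer re-weights the base
# LM distribution at each step. This illustrates the PPC idea only; the paper's
# prompting-based variant is not reproduced here.
import torch
import torch.nn.functional as F

vocab = 100
lm_logits = torch.randn(vocab)        # stand-in base LM logits
attribute_score = torch.randn(vocab)  # stand-in per-token attribute scores

def ppc_step(lm_logits, attribute_score, strength=2.0):
    """Blend LM log-probs with the attribute scores and renormalize."""
    guided = F.log_softmax(lm_logits, dim=-1) + strength * attribute_score
    return F.softmax(guided, dim=-1)

probs = ppc_step(lm_logits, attribute_score)
next_token = torch.multinomial(probs, 1)
print(next_token.item())
```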

Text Attribute Control via Closed-Loop Disentanglement

no code implementations · 1 Dec 2023 · Lei Sha, Thomas Lukasiewicz

In this approach, we use a semi-supervised contrastive learning method to encourage the disentanglement of attributes in latent spaces.

Attribute, Contrastive Learning +2
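
To make the idea of contrastive disentanglement concrete, the sketch below shows a generic supervised contrastive loss that pulls together latent codes sharing an attribute label and pushes apart the rest. It is only an illustration of the kind of objective involved, not necessarily the exact loss used in the paper.

```python
# Illustrative sketch of a contrastive objective that encourages latent codes
# sharing an attribute label to cluster together; this is a generic supervised
# contrastive loss, not necessarily the exact objective from the paper.
import torch
import torch.nn.functional as F

def attribute_contrastive_loss(z, labels, temperature=0.1):
    """z: (N, d) attribute latent codes, labels: (N,) attribute ids."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                      # pairwise similarities
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)  # positives share a label
    mask.fill_diagonal_(False)                         # exclude self-pairs
    logits = sim - torch.eye(len(z)) * 1e9             # mask out the diagonal
    log_prob = F.log_softmax(logits, dim=1)            # softmax over other samples
    pos_per_row = mask.sum(1).clamp(min=1)
    loss = -(log_prob * mask).sum(1) / pos_per_row     # average over positives
    return loss.mean()

z = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 0, 2, 2, 1])
print(attribute_contrastive_loss(z, labels))
```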

Rationalizing Predictions by Adversarial Information Calibration

no code implementations · 15 Jan 2023 · Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz

One form of explanation for a prediction is an extractive rationale, i.e., a subset of an instance's features that leads the model to give its prediction on that instance.

Language Modelling, Sentiment Analysis +2

Bird-Eye Transformers for Text Generation Models

1 code implementation · 8 Oct 2022 · Lei Sha, Yuhang Song, Yordan Yordanov, Tommaso Salvatori, Thomas Lukasiewicz

The transformer has become an indispensable module for text generation models since its great success in machine translation.

Attribute, Inductive Bias +3

Rationale production to support clinical decision-making

no code implementations · 15 Nov 2021 · Niall Taylor, Lei Sha, Dan W Joyce, Thomas Lukasiewicz, Alejo Nevado-Holgado, Andrey Kormilitzin

In this work, we apply InfoCal, the current state-of-the-art model that produces extractive rationales for its predictions, to the task of predicting hospital readmission using hospital discharge notes.

Decision Making, Feature Importance

RecInDial: A Unified Framework for Conversational Recommendation with Pretrained Language Models

no code implementations · 14 Oct 2021 · Lingzhi Wang, Huang Hu, Lei Sha, Can Xu, Kam-Fai Wong, Daxin Jiang

Furthermore, we propose to evaluate CRS models in an end-to-end manner, which reflects the overall performance of the entire system rather than that of individual modules, in contrast to the separate evaluations of the two modules used in previous work.

Dialogue Generation, Language Modelling +1

Unifying Categorical Models by Explicit Disentanglement of the Labels' Generative Factors

no code implementations · 29 Sep 2021 · Luca Pinchetti, Lei Sha, Thomas Lukasiewicz

By doing so, it is possible to merge multiple datasets based on different categorical models by projecting the data points into a unified latent space.

Disentanglement, Emotion Recognition

Associative Memories via Predictive Coding

no code implementations · NeurIPS 2021 · Tommaso Salvatori, Yuhang Song, Yujian Hong, Simon Frieder, Lei Sha, Zhenghua Xu, Rafal Bogacz, Thomas Lukasiewicz

We conclude by discussing the possible impact of this work in the neuroscience community, by showing that our model provides a plausible framework to study learning and retrieval of memories in the brain, as it closely mimics the behavior of the hippocampus as a memory index and generative model.

Hippocampus, Retrieval

Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration

no code implementations · 16 Dec 2020 · Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz

We use an adversarial-based technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.

Language Modelling, Sentiment Analysis
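
The adversarial calibration idea can be sketched with a discriminator that tries to tell the full-input model's representation from the rationale-only model's representation, while the rationale branch is trained to make the two indistinguishable. The network sizes, the stand-in rationale mask, and the two-step update below are assumptions for illustration, not the paper's architecture.

```python
# Schematic sketch of adversarial calibration between a full-input model and a
# rationale-only model: a discriminator tries to tell their outputs apart, and
# the rationale branch is trained to make them indistinguishable.
import torch
import torch.nn as nn

d = 32
full_model = nn.Linear(d, d)        # encodes the full input
rationale_model = nn.Linear(d, d)   # encodes only the selected rationale
discriminator = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

x = torch.randn(16, d)
mask = (torch.rand(16, d) > 0.5).float()   # stand-in for the learned rationale mask

h_full = full_model(x)
h_rat = rationale_model(x * mask)

# Step 1: train the discriminator to separate the two representations.
d_loss = bce(discriminator(h_full.detach()), torch.ones(16, 1)) + \
         bce(discriminator(h_rat.detach()), torch.zeros(16, 1))

# Step 2: train the rationale branch to fool the discriminator, so the rationale
# carries (roughly) the same information as the full input.
g_loss = bce(discriminator(h_rat), torch.ones(16, 1))
print(d_loss.item(), g_loss.item())
```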

Multi-type Disentanglement without Adversarial Training

no code implementations · 16 Dec 2020 · Lei Sha, Thomas Lukasiewicz

After the latent space is disentangled, the style of a sentence can be transformed by tuning the style representation without affecting other features of the sentence.

Disentanglement, Interpretable Machine Learning +3
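
A minimal sketch of what such a style transformation looks like once the latent space is disentangled: swap the style part of the code while keeping the content part fixed. The encoder and decoder below are untrained stand-ins, purely to show the interface.

```python
# Minimal sketch of style transfer in a disentangled latent space: swap the
# style part of the code while keeping the content part fixed. The encoder and
# decoder here are untrained stand-ins, purely to show the interface.
import torch
import torch.nn as nn

content_dim, style_dim, vocab = 48, 16, 1000
encoder = nn.Linear(64, content_dim + style_dim)
decoder = nn.Linear(content_dim + style_dim, vocab)

def transfer_style(sentence_repr, target_style):
    z = encoder(sentence_repr)
    content, _ = z.split([content_dim, style_dim], dim=-1)
    z_new = torch.cat([content, target_style], dim=-1)  # keep content, swap style
    return decoder(z_new)                               # logits for the new sentence

logits = transfer_style(torch.randn(1, 64), torch.randn(1, style_dim))
print(logits.shape)
```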

Table-to-text Generation by Structure-aware Seq2seq Learning

3 code implementations · 27 Nov 2017 · Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui

In the decoding phase, a dual attention mechanism, which contains word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table.

Table-to-Text Generation
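
The dual attention step described above can be sketched as two attention distributions, one over the table's word embeddings and one over its field embeddings, merged into a single set of weights. Combining them by elementwise product and renormalizing is one common choice and may differ in detail from the paper's formulation.

```python
# Sketch of a dual attention step: one attention distribution over the table's
# word embeddings and one over its field embeddings, combined into a single set
# of weights used to build the decoder's context vector.
import torch
import torch.nn.functional as F

T, d = 10, 64                      # number of table cells, hidden size
word_emb = torch.randn(T, d)       # embeddings of the cell values
field_emb = torch.randn(T, d)      # embeddings of the cells' field names
dec_state = torch.randn(d)         # current decoder hidden state

alpha_word = F.softmax(word_emb @ dec_state, dim=0)    # word-level attention
alpha_field = F.softmax(field_emb @ dec_state, dim=0)  # field-level attention

combined = alpha_word * alpha_field
combined = combined / combined.sum()                   # renormalize
context = combined @ word_emb                          # context vector for decoding
print(context.shape)
```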

Syntax Aware LSTM model for Semantic Role Labeling

no code implementations · WS 2017 · Feng Qian, Lei Sha, Baobao Chang, Lu-chen Liu, Ming Zhang

In the Semantic Role Labeling (SRL) task, the tree-structured dependency relation is rich in syntactic information, but it is not well handled by existing models.

Feature Engineering, Machine Translation +4

Order-Planning Neural Text Generation From Structured Data

1 code implementation · 1 Sep 2017 · Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui

Generating text from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.

Question Answering, Table-to-Text Generation

Syntax Aware LSTM Model for Chinese Semantic Role Labeling

no code implementations · 3 Apr 2017 · Feng Qian, Lei Sha, Baobao Chang, Lu-chen Liu, Ming Zhang

For the semantic role labeling (SRL) task, both traditional methods and recent recurrent neural network (RNN)-based methods rely on feature engineering to utilize parsing information.

Chinese Semantic Role Labeling, Dependency Parsing +2

Reading and Thinking: Re-read LSTM Unit for Textual Entailment Recognition

no code implementations · COLING 2016 · Lei Sha, Baobao Chang, Zhifang Sui, Sujian Li

After reading the premise again, the model can gain a better understanding of the premise, which in turn also affects its understanding of the hypothesis.

Information Retrieval, Machine Translation +4
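
A rough sketch of the re-read idea: encode the premise once, encode the hypothesis, then run over the premise a second time while attending to the hypothesis, so the second pass is hypothesis-aware. The simple GRU-plus-attention layout below is an illustration, not the paper's re-read LSTM unit.

```python
# Rough sketch of "re-reading": a second, hypothesis-aware pass over the premise.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 32
enc = nn.GRU(d, d, batch_first=True)
reread = nn.GRU(2 * d, d, batch_first=True)

premise = torch.randn(1, 7, d)      # (batch, premise length, dim)
hypothesis = torch.randn(1, 5, d)   # (batch, hypothesis length, dim)

p1, _ = enc(premise)
h1, _ = enc(hypothesis)

# Attention from each premise position to the hypothesis.
attn = F.softmax(p1 @ h1.transpose(1, 2), dim=-1)   # (1, 7, 5)
h_ctx = attn @ h1                                   # hypothesis context per premise token

# Second (re-read) pass over the premise, now conditioned on the hypothesis.
p2, _ = reread(torch.cat([p1, h_ctx], dim=-1))
print(p2.shape)
```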

Towards Time-Aware Knowledge Graph Completion

no code implementations · COLING 2016 · Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, Zhifang Sui

In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts.

Question Answering, Relation Extraction +1
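
One way to picture time-aware link prediction is a translation-style score that adds a time embedding to the relation embedding, as in the sketch below. This is a generic TransE-like formulation for illustration only, not the exact model proposed in the paper.

```python
# Illustrative TransE-style scoring function extended with a time embedding, to
# show how temporal information can enter a link-prediction score.
import torch

d = 50
entity_emb = torch.nn.Embedding(100, d)    # 100 entities (toy size)
relation_emb = torch.nn.Embedding(20, d)   # 20 relations
time_emb = torch.nn.Embedding(10, d)       # 10 discretized time bins

def score(head, relation, tail, time):
    """Higher is better: head + relation + time should land near tail."""
    h = entity_emb(head)
    r = relation_emb(relation)
    t = entity_emb(tail)
    tau = time_emb(time)
    return -torch.norm(h + r + tau - t, p=1, dim=-1)

print(score(torch.tensor([3]), torch.tensor([1]), torch.tensor([7]), torch.tensor([2])))
```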

Joint Learning Templates and Slots for Event Schema Induction

no code implementations · NAACL 2016 · Lei Sha, Sujian Li, Baobao Chang, Zhifang Sui

Automatic event schema induction (AESI) means extracting meta-events from raw text, in other words, finding out what types (templates) of events may exist in the raw text and what roles (slots) may exist in each event type.

Image Segmentation, Semantic Segmentation +1
