Search Results for author: Shang-Wen Li

Found 29 papers, 12 papers with code

Self-Supervised Speech Representation Learning: A Review

no code implementations21 May 2022 Abdelrahman Mohamed, Hung-Yi Lee, Lasse Borgholt, Jakob D. Havtorn, Joakim Edin, Christian Igel, Katrin Kirchhoff, Shang-Wen Li, Karen Livescu, Lars Maaløe, Tara N. Sainath, Shinji Watanabe

Although self-supervised speech representation is still a nascent research area, it is closely related to acoustic word embedding and learning with zero lexical resources, both of which have seen active research for many years.

Automatic Speech Recognition Representation Learning

Meta Learning for Natural Language Processing: A Survey

no code implementations3 May 2022 Hung-Yi Lee, Shang-Wen Li, Ngoc Thang Vu

Deep learning has been the mainstream technique in the natural language processing (NLP) area.

Meta-Learning

An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks

1 code implementation31 Mar 2022 Kai-Wei Chang, Wei-Cheng Tseng, Shang-Wen Li, Hung-Yi Lee

In this paper we report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM).

Language Modelling Self-Supervised Learning

Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition

1 code implementation27 Mar 2022 Guan-Ting Lin, Shang-Wen Li, Hung-Yi Lee

Although deep learning-based end-to-end Automatic Speech Recognition (ASR) has shown remarkable performance in recent years, it suffers severe performance regression on test samples drawn from different data distributions.
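Source-free test-time adaptation methods of this kind typically adapt the model on each incoming test utterance using an unsupervised objective such as the entropy of the model's own output distributions. A minimal sketch of that objective (the function names and tensor shapes here are illustrative assumptions, not this paper's exact formulation):

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_objective(logits):
    # Mean per-frame entropy of the ASR output distributions.
    # Test-time adaptation would minimize this on a single
    # unlabeled test utterance to sharpen the predictions.
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()

# Uniform logits over 3 classes give the maximum entropy, log(3).
frame_logits = np.zeros((4, 3))  # 4 frames, 3 classes (toy shape)
print(entropy_objective(frame_logits))
```

Minimizing this quantity requires no labels and no source data, which is what makes single-utterance adaptation feasible at inference time.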

Automatic Speech Recognition

QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition

no code implementations3 Mar 2022 Andy T. Liu, Wei Xiao, Henghui Zhu, Dejiao Zhang, Shang-Wen Li, Andrew Arnold

Recently, prompt-based learning for pre-trained language models has succeeded in few-shot Named Entity Recognition (NER) by exploiting prompts as task guidance to increase label efficiency.
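Casting NER as question answering means each entity type is turned into a natural-language question over the original sentence, and an extractive QA model returns the answer span. A hypothetical template illustrating the idea (the wording of the question is an assumption, not the paper's exact prompt):

```python
def ner_as_qa(sentence, entity_type):
    """Reformulate a NER query as an extractive-QA (question, context)
    pair, so a pretrained QA model can return the entity span.
    The question template here is illustrative only."""
    question = f"What is the {entity_type} mentioned in the text?"
    return question, sentence

q, ctx = ner_as_qa("Shang-Wen Li works at Meta.", "person")
print(q)    # the generated question
print(ctx)  # the unchanged sentence used as QA context
```

One (question, context) pair per entity type keeps label efficiency high: a handful of annotated spans is enough to adapt a QA model, rather than retraining a token classifier from scratch.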

Few-shot NER +2

Pairwise Supervised Contrastive Learning of Sentence Representations

1 code implementation EMNLP 2021 Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang

Many recent successes in sentence representation learning have been achieved by simply fine-tuning on the Natural Language Inference (NLI) datasets with triplet loss or siamese loss.
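The triplet loss mentioned here pulls an anchor sentence embedding toward a positive (e.g. an entailed sentence) and pushes it away from a negative by at least a margin. A minimal numpy sketch under those assumptions (not the paper's proposed pairwise contrastive objective, which it contrasts against this baseline):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard margin-based triplet loss over batches of
    sentence embeddings, shape (batch, dim)."""
    pos_dist = np.linalg.norm(anchor - positive, axis=-1)
    neg_dist = np.linalg.norm(anchor - negative, axis=-1)
    # Loss is zero once the negative is farther than the
    # positive by at least the margin.
    return np.maximum(pos_dist - neg_dist + margin, 0.0).mean()

a = np.array([[0.0, 0.0]])
p = np.array([[1.0, 0.0]])  # positive at distance 1
n = np.array([[0.0, 0.0]])  # negative at distance 0 (worst case)
print(triplet_loss(a, p, n))  # 1 - 0 + 1 = 2.0
```

Fine-tuning with this objective on NLI triplets (premise, entailment, contradiction) is the baseline recipe the paper builds on.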

Contrastive Learning Natural Language Inference +2

Meta-learning for downstream aware and agnostic pretraining

no code implementations6 Jun 2021 Hongyin Luo, Shuyan Dong, Yung-Sung Chuang, Shang-Wen Li

Neural network pretraining is gaining attention due to its outstanding performance in natural language processing applications.

Meta-Learning

Cooperative Learning of Zero-Shot Machine Reading Comprehension

no code implementations12 Mar 2021 Hongyin Luo, Shang-Wen Li, Seunghak Yu, James Glass

REGEX is built upon a masked answer extraction task with an interactive learning environment containing an answer entity REcognizer, a question Generator, and an answer EXtractor.

Machine Reading Comprehension Pretrained Language Models +4

Knowledge Grounded Conversational Symptom Detection with Graph Memory Networks

no code implementations EMNLP (ClinicalNLP) 2020 Hongyin Luo, Shang-Wen Li, James Glass

Given a set of explicit symptoms provided by the patient to initiate a diagnostic dialog, the system is trained to collect implicit symptoms by asking questions, gathering more information to make an accurate diagnosis.

Goal-Oriented Dialog

Educational Content Linking for Enhancing Learning Need Remediation in MOOCs

no code implementations31 Dec 2020 Shang-Wen Li

By linking and organizing pieces of learning content scattered in various course materials into an easily accessible structure, we hypothesize that this framework can provide learners guidance and improve content navigation.

Towards Semi-Supervised Semantics Understanding from Speech

no code implementations11 Nov 2020 Cheng-I Lai, Jin Cao, Sravan Bodapati, Shang-Wen Li

Much recent work on Spoken Language Understanding (SLU) falls short in at least one of three ways: models were trained on oracle text input and neglected the Automatic Speech Recognition (ASR) outputs, models were trained to predict only intents without the slot values, or models were trained on a large amount of in-house data.

Speech Recognition Spoken Language Understanding

Semi-Supervised Spoken Language Understanding via Self-Supervised Speech and Language Model Pretraining

1 code implementation26 Oct 2020 Cheng-I Lai, Yung-Sung Chuang, Hung-Yi Lee, Shang-Wen Li, James Glass

Much recent work on Spoken Language Understanding (SLU) is limited in at least one of three ways: models were trained on oracle text input and neglected ASR errors, models were trained to predict only intents without the slot values, or models were trained on a large amount of in-house data.

Language Modelling Spoken Language Understanding

Style Attuned Pre-training and Parameter Efficient Fine-tuning for Spoken Language Understanding

no code implementations9 Oct 2020 Jin Cao, Jun Wang, Wael Hamza, Kelly Vanee, Shang-Wen Li

The light encoder architecture separates the shared pre-trained networks from the mappings of generally encoded knowledge to specific SLU domains, allowing domain adaptation to be performed solely at the light encoder and thus increasing efficiency.

Domain Adaptation Language Modelling +1

Prototypical Q Networks for Automatic Conversational Diagnosis and Few-Shot New Disease Adaption

no code implementations19 May 2020 Hongyin Luo, Shang-Wen Li, James Glass

Experiments showed that the ProtoQN significantly outperformed the baseline DQN model in both supervised and few-shot learning scenarios, and achieved state-of-the-art few-shot learning performance.

Few-Shot Learning

A Coarse-to-Fine Indoor Layout Estimation (CFILE) Method

1 code implementation3 Jul 2016 Yuzhuo Ren, Chen Chen, Shang-Wen Li, C. -C. Jay Kuo

The task of estimating the spatial layout of cluttered indoor scenes from a single RGB image is addressed in this work.

GAL: A Global-Attributes Assisted Labeling System for Outdoor Scenes

no code implementations3 Apr 2016 Yuzhuo Ren, Chen Chen, Shang-Wen Li, C. -C. Jay Kuo

The proposed Global-attributes Assisted Labeling (GAL) system exploits both local features and global attributes.

Measuring and Predicting Tag Importance for Image Retrieval

no code implementations28 Feb 2016 Shang-Wen Li, Sanjay Purushotham, Chen Chen, Yuzhuo Ren, C. -C. Jay Kuo

Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems.

Image Retrieval TAG
