Search Results for author: Yu-Ping Ruan

Found 13 papers, 2 papers with code

RedCore: Relative Advantage Aware Cross-modal Representation Learning for Missing Modalities with Imbalanced Missing Rates

no code implementations • 16 Dec 2023 • Jun Sun, Xinxin Zhang, Shoukang Han, Yu-Ping Ruan, Taihao Li

Multimodal learning is susceptible to missing modalities, which pose a major obstacle to its practical application and have therefore attracted increasing research interest.

Representation Learning

Parameter-Efficient Tuning on Layer Normalization for Pre-trained Language Models

no code implementations • 16 Nov 2022 • Wang Qi, Yu-Ping Ruan, Yuan Zuo, Taihao Li

Conventional fine-tuning becomes increasingly difficult given the size of current pre-trained language models, which makes parameter-efficient tuning the focal point of frontier research.
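
For a concrete picture of what "parameter-efficient" can mean here, the hedged sketch below freezes every weight of a PyTorch Transformer except the LayerNorm gains/biases (plus the task head). It is a generic illustration of LayerNorm-only tuning, not the paper's exact recipe; the model name and the parameter-name filters are assumptions.

```python
# Minimal sketch: train only LayerNorm parameters (and the task head) of a
# BERT-style model. The "LayerNorm"/"classifier" name filters assume the
# Hugging Face BERT naming scheme and may differ for other architectures.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

trainable, total = 0, 0
for name, param in model.named_parameters():
    total += param.numel()
    if "LayerNorm" in name or "classifier" in name:
        param.requires_grad = True
        trainable += param.numel()
    else:
        param.requires_grad = False

print(f"trainable: {trainable}/{total} parameters ({100 * trainable / total:.2f}%)")
```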

Fast sensor placement by enlarging principle submatrix for large-scale linear inverse problems

no code implementations • 6 Oct 2021 • Fen Wang, Gene Cheung, Taihao Li, Ying Du, Yu-Ping Ruan

Sensor placement for linear inverse problems is the selection of locations at which to deploy sensors so that the entire physical signal can be recovered well from partial observations.
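
As a rough baseline for this kind of subset-selection problem (not the enlarged-principal-submatrix method the title refers to), one can greedily pick the sensor rows that maximize the log-determinant of the resulting Gram matrix. The sketch below is such a generic greedy selector; the matrix sizes, the random test matrix, and the regularizer eps are assumptions.

```python
import numpy as np

def greedy_placement(A, k, eps=1e-6):
    """Generic greedy baseline: pick k rows of A (sensor locations) that
    maximize log det(A_S^T A_S + eps*I). Not the paper's algorithm."""
    m, n = A.shape
    chosen = []
    for _ in range(k):
        best_row, best_val = None, -np.inf
        for i in range(m):
            if i in chosen:
                continue
            S = A[chosen + [i], :]
            _, logdet = np.linalg.slogdet(S.T @ S + eps * np.eye(n))
            if logdet > best_val:
                best_row, best_val = i, logdet
        chosen.append(best_row)
    return chosen

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))      # 50 candidate locations, 8 unknowns
print(greedy_placement(A, k=10))      # indices of the selected sensors
```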

SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning

1 code implementation • SEMEVAL 2021 • Boyuan Zheng, Xiaoyu Yang, Yu-Ping Ruan, ZhenHua Ling, Quan Liu, Si Wei, Xiaodan Zhu

Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts in a cloze-style machine reading comprehension setup.

Machine Reading Comprehension
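
One simple, hedged way to approach such a cloze task is to score each of the five candidates with a pretrained masked language model, restricting the scores to those candidates. The sketch below does that; the "@placeholder" marker, the example question, and the candidate list are assumptions, and the participating systems were considerably more involved than this zero-shot baseline.

```python
from transformers import pipeline

# Hedged sketch: score each candidate by how probable a masked LM finds it
# in the question's blank. The example question and candidates are made up.
fill = pipeline("fill-mask", model="bert-base-uncased")

question = "The committee reached a final @placeholder after hours of debate."
candidates = ["decision", "banana", "silence", "collision", "departure"]

masked = question.replace("@placeholder", fill.tokenizer.mask_token)
results = fill(masked, targets=candidates)   # scores restricted to candidates
print(results[0]["token_str"])               # highest-probability candidate
```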

Emotion-Regularized Conditional Variational Autoencoder for Emotional Response Generation

no code implementations • 18 Apr 2021 • Yu-Ping Ruan, Zhen-Hua Ling

This paper presents an emotion-regularized conditional variational autoencoder (Emo-CVAE) model for generating emotional conversation responses.

Response Generation
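
Both this paper and the next build on conditional variational autoencoders. As a hedged reference point, the skeleton below is a generic conditional VAE with the standard reparameterization trick and KL regularizer, where the condition could be an emotion label; it is not the Emo-CVAE (or CTVAE) architecture, and all dimensions and the MSE reconstruction term are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Generic conditional VAE skeleton (not the Emo-CVAE architecture).
    x: utterance encoding, c: condition (e.g., a one-hot emotion label)."""
    def __init__(self, x_dim=256, c_dim=8, z_dim=32, h_dim=128):
        super().__init__()
        self.enc = nn.Linear(x_dim + c_dim, h_dim)        # recognition network
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(                          # decoder p(x | z, c)
            nn.Linear(z_dim + c_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, c):
        h = torch.relu(self.enc(torch.cat([x, c], dim=-1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        x_hat = self.dec(torch.cat([z, c], dim=-1))
        recon = F.mse_loss(x_hat, x)                        # stand-in for the NLL term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl
```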

Condition-Transforming Variational AutoEncoder for Conversation Response Generation

no code implementations • 24 Apr 2019 • Yu-Ping Ruan, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Nitin Indurkhya

This paper proposes a new model, called condition-transforming variational autoencoder (CTVAE), to improve the performance of conversation response generation using conditional variational autoencoders (CVAEs).

Response Generation

Exploring Unsupervised Pretraining and Sentence Structure Modelling for Winograd Schema Challenge

no code implementations • 22 Apr 2019 • Yu-Ping Ruan, Xiaodan Zhu, Zhen-Hua Ling, Zhan Shi, Quan Liu, Si Wei

Winograd Schema Challenge (WSC) was proposed as an AI-hard problem in testing computers' intelligence on common sense representation and reasoning.

Common Sense Reasoning, Sentence
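
A common unsupervised recipe for Winograd-style pronoun resolution (related to the pretraining route above, though not necessarily this paper's exact scoring) is to substitute each candidate antecedent for the pronoun and keep the variant a pretrained language model finds more likely. A minimal sketch, using the classic trophy/suitcase schema as the example:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Hedged sketch: resolve the pronoun by substituting each candidate and
# keeping the substitution with the lower language-model loss. Generic
# recipe, not necessarily the paper's scoring function.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def nll(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return lm(ids, labels=ids).loss.item()   # mean token negative log-likelihood

sentence = "The trophy doesn't fit into the suitcase because it is too large."
candidates = ["the trophy", "the suitcase"]
scores = {c: nll(sentence.replace(" it ", f" {c} ")) for c in candidates}
print(min(scores, key=scores.get))   # expected: "the trophy"
```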

Promoting Diversity for End-to-End Conversation Response Generation

no code implementations • 27 Jan 2019 • Yu-Ping Ruan, Zhen-Hua Ling, Quan Liu, Jia-Chen Gu, Xiaodan Zhu

At this stage, two different models are proposed, i.e., a variational generative (VariGen) model and a retrieval-based (Retrieval) model.

Response Generation, Retrieval
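
Of the two models named above, the retrieval branch is the easier one to caricature: given a query context, return the stored response whose context is most similar. The sketch below uses TF-IDF cosine similarity over a toy corpus; it is an assumption-laden stand-in, not the paper's neural Retrieval model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hedged sketch of a retrieval-based responder: return the stored response
# whose context is most similar to the query. The toy corpus is made up.
contexts = ["how are you today", "what is your favourite movie", "where do you live"]
responses = ["i am doing well, thanks", "i really like science fiction films", "i live in a small town"]

vec = TfidfVectorizer().fit(contexts)
ctx_mat = vec.transform(contexts)

def respond(query):
    sims = cosine_similarity(vec.transform([query]), ctx_mat)
    return responses[sims.argmax()]

print(respond("what movie do you like"))
```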

A Sequential Neural Encoder with Latent Structured Description for Modeling Sentences

no code implementations • 15 Nov 2017 • Yu-Ping Ruan, Qian Chen, Zhen-Hua Ling

The description layer utilizes modified LSTM units to process these chunk-level vectors in a recurrent manner and produces sequential encoding outputs.

Chunking, Natural Language Inference, +3
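
To make the description-layer idea concrete in a hedged way, the sketch below runs a recurrent layer over a batch of chunk-level vectors to produce sequential encodings. The paper's modified LSTM units are replaced here by an off-the-shelf nn.LSTM, and every dimension is an assumption.

```python
import torch
import torch.nn as nn

# Hedged sketch: encode a sequence of chunk-level vectors recurrently.
# A stock nn.LSTM stands in for the paper's modified LSTM units; the
# dimensions and batch size below are assumptions.
chunk_dim, hidden_dim = 64, 128
lstm = nn.LSTM(input_size=chunk_dim, hidden_size=hidden_dim, batch_first=True)

chunks = torch.randn(2, 7, chunk_dim)   # batch of 2 sentences, 7 chunks each
outputs, (h_n, c_n) = lstm(chunks)      # outputs: (2, 7, hidden_dim)
print(outputs.shape, h_n.shape)
```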
