Search Results for author: Chengyuan Ma

Found 10 papers, 2 papers with code

KEPLET: Knowledge-Enhanced Pretrained Language Model with Topic Entity Awareness

no code implementations • 2 May 2023 • Yichuan Li, Jialong Han, Kyumin Lee, Chengyuan Ma, Benjamin Yao, Derek Liu

In recent years, Pre-trained Language Models (PLMs) have shown their superiority by pre-training on unstructured text corpora and then fine-tuning on downstream tasks.

Entity Linking • Language Modelling • +3
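
For readers unfamiliar with the pre-train/fine-tune paradigm the KEPLET abstract refers to, here is a minimal sketch using the Hugging Face transformers library; the model name and the two-label task are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the pre-train / fine-tune paradigm: load a pre-trained
# encoder and fine-tune it on a downstream classification task.
# "bert-base-uncased" and num_labels=2 are illustrative, not from KEPLET.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # hypothetical downstream task
)

inputs = tokenizer("Seattle is a city in Washington.", return_tensors="pt")
labels = torch.tensor([1])

outputs = model(**inputs, labels=labels)  # forward pass with a supervised label
outputs.loss.backward()                   # gradients for one fine-tuning step
```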

CLICKER: Attention-Based Cross-Lingual Commonsense Knowledge Transfer

no code implementations • 26 Feb 2023 • Ruolin Su, Zhongkai Sun, Sixing Lu, Chengyuan Ma, Chenlei Guo

Recent advances in cross-lingual commonsense reasoning (CSR) are facilitated by the development of multilingual pre-trained models (mPTMs).

Question Answering • Transfer Learning

Query Expansion and Entity Weighting for Query Reformulation Retrieval in Voice Assistant Systems

no code implementations • 22 Feb 2022 • Zhongkai Sun, Sixing Lu, Chengyuan Ma, Xiaohu Liu, Chenlei Guo

However, these methods rarely focus on query expansion and entity weighting simultaneously, which may limit the scope and accuracy of query reformulation retrieval.

Retrieval
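
As a rough illustration of what combining query expansion with entity weighting can look like in a reformulation-retrieval scorer: the expansion table, entity weights, and weighted-overlap scoring below are hypothetical assumptions, not the paper's method.

```python
# Hypothetical sketch: expand a voice query with related terms, then score
# reformulation candidates with entity terms weighted above plain terms.
from collections import Counter

def expand_query(tokens, synonyms):
    """Append assumed expansion terms to the original query tokens."""
    expanded = list(tokens)
    for t in tokens:
        expanded.extend(synonyms.get(t, []))
    return expanded

def score(candidate_tokens, query_tokens, entity_weights):
    """Weighted term overlap: entity terms contribute more than plain terms."""
    counts = Counter(candidate_tokens)
    return sum(counts[t] * entity_weights.get(t, 1.0) for t in query_tokens)

query = ["play", "beetles"]                 # misrecognized artist name
synonyms = {"beetles": ["beatles"]}         # assumed expansion table
weights = {"beetles": 3.0, "beatles": 3.0}  # entities weighted higher

expanded = expand_query(query, synonyms)
print(score(["play", "the", "beatles"], expanded, weights))  # -> 4.0
```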

Incremental user embedding modeling for personalized text classification

no code implementations • 13 Feb 2022 • Ruixue Lian, Che-Wei Huang, Yuqing Tang, Qilong Gu, Chengyuan Ma, Chenlei Guo

Individual user profiles and interaction histories play a significant role in providing customized experiences in real-world applications such as chatbots, social media, retail, and education.

Management • Multi-class Classification • +3
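
One common way to maintain an incrementally updated user embedding is an exponential moving average over interaction embeddings; the sketch below illustrates that general idea, with the dimensionality and decay factor chosen arbitrarily rather than taken from the paper.

```python
# Generic sketch of incremental user embedding updates via an exponential
# moving average; alpha and the 8-dim embedding size are assumptions.
import numpy as np

def update_user_embedding(user_emb, interaction_emb, alpha=0.1):
    """Blend the newest interaction into the running user profile."""
    return (1.0 - alpha) * user_emb + alpha * interaction_emb

rng = np.random.default_rng(0)
user = np.zeros(8)                  # cold-start profile
for _ in range(5):                  # stream of new interaction embeddings
    user = update_user_embedding(user, rng.normal(size=8))
print(user.round(3))
```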

VAE based Text Style Transfer with Pivot Words Enhancement Learning

1 code implementation • ICON 2021 • Haoran Xu, Sixing Lu, Zhongkai Sun, Chengyuan Ma, Chenlei Guo

Text Style Transfer (TST) aims to alter the underlying style of the source text to another specific style while keeping the same content.

Style Transfer • Text Style Transfer
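
The VAE machinery underlying this line of work centers on the reparameterization trick; below is a minimal, generic sketch of a VAE encoder and its KL term in PyTorch, with layer sizes chosen for illustration and no claim to match the paper's architecture.

```python
# Generic VAE encoder sketch: sample a latent z with the reparameterization
# trick and compute the KL term of the VAE objective. Sizes are illustrative.
import torch
import torch.nn as nn

class TinyVAEEncoder(nn.Module):
    def __init__(self, in_dim=32, z_dim=8):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)      # posterior mean
        self.logvar = nn.Linear(in_dim, z_dim)  # posterior log-variance

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)    # reparameterization trick
        return z, mu, logvar

enc = TinyVAEEncoder()
z, mu, logvar = enc(torch.randn(4, 32))
# KL divergence to the standard-normal prior, per batch element
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
print(z.shape, kl.mean().item())
```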

LSTM-based Whisper Detection

no code implementations • 20 Sep 2018 • Zeynab Raeesy, Kellen Gillespie, Zhenpei Yang, Chengyuan Ma, Thomas Drugman, Jiacheng Gu, Roland Maas, Ariya Rastrow, Björn Hoffmeister

We show that, with enough data, the LSTM model is as capable of learning whisper characteristics from LFBE features alone as a simpler MLP model that uses both LFBE features and features engineered for separating whisper from normal speech.

Benchmarking
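
As a generic illustration of the setup this entry describes, an LSTM can map a sequence of per-frame LFBE features to a whisper/normal logit; the feature and hidden sizes below are assumptions, not the paper's configuration.

```python
# Generic sketch of LSTM-based whisper detection over per-frame LFBE
# features; n_lfbe=64 and hidden=128 are illustrative assumptions.
import torch
import torch.nn as nn

class WhisperDetector(nn.Module):
    def __init__(self, n_lfbe=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_lfbe, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # whisper vs. normal speech logit

    def forward(self, frames):             # frames: (batch, time, n_lfbe)
        _, (h_n, _) = self.lstm(frames)
        return self.head(h_n[-1])          # classify from final hidden state

model = WhisperDetector()
logits = model(torch.randn(2, 100, 64))    # 2 utterances, 100 frames each
print(torch.sigmoid(logits))               # per-utterance whisper probability
```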
