Search Results for author: Yu-Hsiang Tseng

Found 14 papers, 1 paper with code

Augmenting Chinese WordNet semantic relations with contextualized embeddings

no code implementations • GWC 2019 • Yu-Hsiang Tseng, Shu-Kai Hsieh

Constructing semantic relations in WordNet has been a labour-intensive task, especially in a dynamic and fast-changing language environment.

Character Jacobian: Modeling Chinese Character Meanings with Deep Learning Model

no code implementations • COLING 2022 • Yu-Hsiang Tseng, Shu-Kai Hsieh

The Notch model first learns the non-linear relations between the constituents and words, and the character Jacobians further describe each character's role in each word.

CxLM: A Construction and Context-aware Language Model

no code implementations • LREC 2022 • Yu-Hsiang Tseng, Cing-Fang Shih, Pin-Er Chen, Hsin-Yu Chou, Mao-Chang Ku, Shu-Kai Hsieh

Next, an experiment is conducted on the dataset to examine to what extent a pretrained masked language model is aware of the constructions.

Language Modelling

Analyzing discourse functions with acoustic features and phone embeddings: non-lexical items in Taiwan Mandarin

no code implementations • ROCLING 2022 • Pin-Er Chen, Yu-Hsiang Tseng, Chi-Wei Wang, Fang-Chi Yeh, Shu-Kai Hsieh

In this paper, we investigate the discourse functions of non-lexical items through their acoustic properties and the phone embeddings extracted from a deep learning model.

What confuses BERT? Linguistic Evaluation of Sentiment Analysis on Telecom Customer Opinion

no code implementations • ROCLING 2021 • Cing-Fang Shih, Yu-Hsiang Tseng, Ching-Wen Yang, Pin-Er Chen, Hsin-Yu Chou, Lian-Hui Tan, Tzu-Ju Lin, Chun-Wei Wang, Shu-Kai Hsieh

To investigate the factors underlying the correctness of the model’s predictions, we conduct a series of analyses, including qualitative error analysis and quantitative analysis of linguistic features with logistic regressions.

Sentence • Sentiment Analysis

Word-specific tonal realizations in Mandarin

no code implementations • 11 May 2024 • Yu-Ying Chuang, Melanie J. Bell, Yu-Hsiang Tseng, R. Harald Baayen

We then proceed to show, using computational modeling with context-specific word embeddings, that token-specific pitch contours predict word type with 50% accuracy on held-out data, and that context-sensitive, token-specific embeddings can predict the shape of pitch contours with 30% accuracy.

Word Embeddings

Resolving Regular Polysemy in Named Entities

no code implementations • 18 Jan 2024 • Shu-Kai Hsieh, Yu-Hsiang Tseng, Hsin-Yu Chou, Ching-Wen Yang, Yu-Yun Chang

Word sense disambiguation primarily addresses the lexical ambiguity of common words based on a predefined sense inventory.

Word Sense Disambiguation

Vec2Gloss: definition modeling leveraging contextualized vectors with Wordnet gloss

no code implementations • 29 May 2023 • Yu-Hsiang Tseng, Mao-Chang Ku, Wei-Ling Chen, Yu-Lin Chang, Shu-Kai Hsieh

We propose a 'Vec2Gloss' model, which produces the gloss from the target word's contextualized embeddings.

Lexical Retrieval Hypothesis in Multimodal Context

no code implementations • 28 May 2023 • Po-Ya Angela Wang, Pin-Er Chen, Hsin-Yu Chou, Yu-Hsiang Tseng, Shu-Kai Hsieh

This study highlights the potential of the MultiMoco Corpus to provide an important resource for in-depth analysis and further research in multimodal communication studies.

Exploring Affordance and Situated Meaning in Image Captions: A Multimodal Analysis

no code implementations • 24 May 2023 • Pin-Er Chen, Po-Ya Angela Wang, Hsin-Yu Chou, Yu-Hsiang Tseng, Shu-Kai Hsieh

This paper explores the grounding issue regarding multimodal semantic representation from a computational cognitive-linguistic view.

Image Captioning • Natural Language Understanding

Eigencharacter: An Embedding of Chinese Character Orthography

no code implementations • WS 2019 • Yu-Hsiang Tseng, Shu-Kai Hsieh

Chinese characters are unique in their logographic nature, which inherently encodes world knowledge accumulated through thousands of years of evolution.

World Knowledge
