no code implementations • FNP (LREC) 2022 • Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, Chu-Ren Huang
With the rising popularity of Transformer-based language models, several studies have tried to exploit their masked language modeling capabilities to automatically extract relational linguistic knowledge, although this kind of research has rarely investigated semantic relations in specialized domains.
no code implementations • ROCLING 2021 • Liang-Chih Yu, Jin Wang, Bo Peng, Chu-Ren Huang
This paper presents the ROCLING 2021 shared task on dimensional sentiment analysis for educational texts, which seeks to identify a real-valued sentiment score of self-evaluation comments written by Chinese students in both the valence and arousal dimensions.
no code implementations • CL (ACL) 2021 • Emmanuele Chersoni, Enrico Santus, Chu-Ren Huang, Alessandro Lenci
For each probing task, we identify the most relevant semantic features, and we show that there is a correlation between embedding performance and how well the embeddings encode those features.
no code implementations • LChange (ACL) 2022 • Jing Chen, Emmanuele Chersoni, Chu-Ren Huang
Recent research has brought a wave of computational approaches to the classic topic of semantic change, aiming to tackle one of the most challenging issues in the evolution of human language.
no code implementations • EMNLP (ECONLP) 2021 • Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, Chu-Ren Huang
With the recent rise in popularity of Transformer models in Natural Language Processing, research efforts have been dedicated to the development of domain-adapted versions of BERT-like architectures.
no code implementations • CSRNLP (LREC) 2022 • Lu Lu, Jinghang Gu, Chu-Ren Huang
Inclusion, one of the foundations of the diversity, equity, and inclusion initiative, concerns the degree to which one is treated as an in-group member in the workplace.
no code implementations • CSRNLP (LREC) 2022 • Jieyu Chen, Kathleen Ahrens, Chu-Ren Huang
The BUILDING source domain was used more often as gain frames in both Chinese and American CSR reports to show how oil companies create benefits for different stakeholders.
no code implementations • SIGDIAL (ACL) 2021 • Andreas Liesenfeld, Gabor Parti, Chu-Ren Huang
We present Scikit-talk, an open-source toolkit for processing collections of real-world conversational speech in Python.
no code implementations • 9 Oct 2022 • Yueyue Huang, Chu-Ren Huang
This study investigates cross-strait variations in two typical synonymous loanwords in Chinese, i.e. xie2shang1 and tan2pan4, drawing on MARVS theory.
no code implementations • 15 Nov 2021 • Siyu Lei, Ruiying Yang, Chu-Ren Huang
This study attempts to extract micro-level linguistic features in high- and moderate-impact journal RAs, using feature engineering methods.
no code implementations • SEMEVAL 2021 • Rong Xiang, Jinghang Gu, Emmanuele Chersoni, Wenjie Li, Qin Lu, Chu-Ren Huang
In this contribution, we describe the system presented by the PolyU CBS-Comp Team in Task 1 of SemEval 2021, where the goal was to estimate the complexity of words in a given sentence context.
no code implementations • PACLIC 2020 • Andreas Liesenfeld, Gábor Parti, Yu-Yin Hsu, Chu-Ren Huang
We explore differences in language use and turn-taking dynamics and identify a range of characteristics that set the categories apart.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Giulia Rambelli, Emmanuele Chersoni, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
In linguistics and cognitive science, logical metonymies are defined as type clashes between an event-selecting verb and an entity-denoting noun (e.g.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Rong Xiang, Mingyu Wan, Qi Su, Chu-Ren Huang, Qin Lu
Mandarin Alphabetical Word (MAW) is an indispensable component of Modern Chinese that demonstrates unique code-mixing idiosyncrasies influenced by language exchange.
no code implementations • Joint Conference on Lexical and Computational Semantics 2020 • Emmanuele Chersoni, Rong Xiang, Qin Lu, Chu-Ren Huang
Our experiments focused on crosslingual word embeddings, in order to predict modality association scores by training on a high-resource language and testing on a low-resource one.
no code implementations • WS 2020 • Mingyu WAN, Kathleen Ahrens, Emmanuele Chersoni, Menghan Jiang, Qi Su, Rong Xiang, Chu-Ren Huang
This paper reports a linguistically-enriched method of detecting token-level metaphors for the second shared task on Metaphor Detection.
no code implementations • LREC 2020 • Rong Xiang, Yunfei Long, Mingyu Wan, Jinghang Gu, Qin Lu, Chu-Ren Huang
Deep neural network models have played a critical role in sentiment analysis, with promising results over the past decade.
no code implementations • LREC 2020 • Emmanuele Chersoni, Ludovica Pannitto, Enrico Santus, Alessandro Lenci, Chu-Ren Huang
While neural embeddings represent a popular choice for word representation in a wide variety of NLP tasks, their usage for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models.
no code implementations • LREC 2020 • Rong Xiang, Xuefeng Gao, Yunfei Long, Anran Li, Emmanuele Chersoni, Qin Lu, Chu-Ren Huang
Automatic Chinese irony detection is a challenging task, and it has a strong impact on linguistic research.
no code implementations • WS 2019 • Giulia Rambelli, Emmanuele Chersoni, Philippe Blache, Chu-Ren Huang, Alessandro Lenci
In this paper, we propose a new type of semantic representation of Construction Grammar that combines constructions with the vector representations used in Distributional Semantics.
no code implementations • WS 2019 • Mingyu Wan, Rong Xiang, Emmanuele Chersoni, Natalia Klyueva, Kathleen Ahrens, Bin Miao, David Broadstock, Jian Kang, Amos Yung, Chu-Ren Huang
no code implementations • 17 Jun 2019 • Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations.
no code implementations • WS 2019 • Marcos Zampieri, Shervin Malmasi, Yves Scherrer, Tanja Samardžić, Francis Tyers, Miikka Silfverberg, Natalia Klyueva, Tung-Le Pan, Chu-Ren Huang, Radu Tudor Ionescu, Andrei M. Butnaru, Tommi Jauhiainen
In this paper, we present the findings of the Third VarDial Evaluation Campaign organized as part of the sixth edition of the workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with NAACL 2019.
no code implementations • 21 May 2019 • Chu-Ren Huang, Ting-Shuo Yo, Petr Simon, Shu-Kai Hsieh
Both experiments support the claim that the WBD model is a realistic model for Chinese word segmentation, as it can be easily adapted to new variants with robust results.
no code implementations • WS 2018 • Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang
In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks.
Ranked #6 on Sentiment Analysis on User and Product Information
no code implementations • IJCNLP 2017 • Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang
This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection.
no code implementations • EMNLP 2017 • Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang
Evaluations show the CBA based method outperforms the state-of-the-art local context based attention methods significantly.
no code implementations • CONLL 2017 • I-Hsuan Chen, Yunfei Long, Qin Lu, Chu-Ren Huang
We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups.
no code implementations • WS 2016 • Ge Xu, Xiaoyan Yang, Chu-Ren Huang
Many NLP tasks involve sentence-level annotation, yet the relevant information is encoded not at the sentence level but in certain parts of the sentence.
no code implementations • PACLIC 2016 • Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Chu-Ren Huang, Philippe Blache
In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure has been noted to suffer from several shortcomings.
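The Vector Cosine mentioned here is the standard cosine similarity between two context vectors. A minimal sketch in plain Python (the function name is illustrative, not from the paper):

```python
import math

def vector_cosine(u, v):
    # cosine of the angle between two context-count vectors:
    # dot product divided by the product of the vector norms
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# parallel vectors score 1.0; orthogonal vectors score 0.0
print(vector_cosine([1.0, 2.0], [2.0, 4.0]))  # 1.0
```

One shortcoming the literature points to is that cosine depends only on vector direction, so it rewards overall distributional overlap rather than overlap among the most informative contexts.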
no code implementations • EMNLP 2016 • Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, Chu-Ren Huang
Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words.
1 code implementation • LREC 2016 • Karl Neergaard, Hongzhi Xu, Chu-Ren Huang
In the design of controlled experiments with language stimuli, researchers from psycholinguistics, neurolinguistics, and related fields require language resources that isolate variables known to affect language processing.
no code implementations • LREC 2016 • Liu Hongchao, Karl Neergaard, Enrico Santus, Chu-Ren Huang
Among the 360 word relation pairs, there are 373 relata.
no code implementations • LREC 2016 • Francesca Strik Lievers, Chu-Ren Huang
Synaesthesia is a type of metaphor associating linguistic expressions that refer to two different sensory modalities.
no code implementations • 30 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that vector cosine, generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.
no code implementations • 29 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.
no code implementations • LREC 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that Vector Cosine, generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting this intersection according to the rank of the shared contexts in the dependency-ranked lists.
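The rank-weighted context intersection described above can be sketched as follows. This is a simplified illustration in the spirit of the measure, not the authors' exact formulation; function and parameter names are hypothetical. Each word is represented by a list of its contexts sorted by association strength, and shared contexts contribute more the higher they rank in both lists:

```python
def rank_weighted_overlap(contexts_a, contexts_b, n=10):
    # keep only the top-n most associated contexts of each word,
    # recording each context's 1-based rank in its list
    top_a = {c: r + 1 for r, c in enumerate(contexts_a[:n])}
    top_b = {c: r + 1 for r, c in enumerate(contexts_b[:n])}
    shared = set(top_a) & set(top_b)
    # a shared context weighs 1 / (mean of its two ranks), so
    # contexts ranked high in both lists dominate the score
    return sum(1.0 / ((top_a[c] + top_b[c]) / 2.0) for c in shared)

# "cat" and "dog" shared at high ranks yield a large score
score = rank_weighted_overlap(["cat", "dog", "pet"],
                              ["dog", "cat", "fish"], n=3)
```

Unlike cosine, this score ignores everything outside the top-n contexts, which is the intuition behind weighting by rank rather than by raw co-occurrence counts.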
1 code implementation • LREC 2016 • Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang
When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1%, and co-hyponyms-random 97.8% vs. 79.4%.
no code implementations • LREC 2014 • Sophia Lee, Shoushan Li, Chu-Ren Huang
This paper presents the development of a Chinese event-based emotion corpus.
no code implementations • 13 Feb 2014 • Jia-Fei Hong, Kathleen Ahrens, Chu-Ren Huang
Module-Attribute Representation of Verbal Semantics (MARVS) is a theory of the representation of verbal semantics that is based on Mandarin Chinese data (Huang et al. 2000).
no code implementations • LREC 2012 • Hongzhi Xu, Helen Kai-yun Chen, Chu-Ren Huang, Qin Lu, Dingxu Shi, Tin-Shing Chiu
We adopt a corpus-informed approach to example sentence selection for the construction of a reference grammar.